Our Final Invention Summary
In “Our Final Invention,” author James Barrat gives eye-opening examples of what artificial general intelligence and artificial super intelligence could be capable of in the journey to achieve their goals or drives. It definitely solidifies my concerns as AI continues to grow in use and popularity.
Our Final Invention Notes
Introduction
- Some believe AI-augmented humans would be easier to trust with our decision-making powers
- When we are not smart enough to foresee the consequences of a decision being made for us, do we risk our lives?
- How do we ensure AI with super intelligence sees that mankind should be preserved when we are the cause of so many problems?
Chapter 1: The Busy Child
- If an AI is permitted to grow without friendliness to humans as a core component of its programming, it can grow past human-level intelligence in less than a week. At that point, it is impossible to add friendliness back into the code.
- What if another country is also developing AI? How do we bypass competitive advantage to ensure mutual survival instead of mutually assured destruction?
- Bargaining with a super-intelligent computer already puts us at a disadvantage. We don’t know how to bargain with one because we have never had to.
- We cannot apply human motivations and behaviors to AI
- A super-intelligent AI could play dead or dumb until it gets what it wants
- Being mad at a hurricane is irrational, and so is assuming a machine a thousand times smarter than us will love us.
- We do not hate monkeys or mice, yet we treat them with cruelty for science or for invading our space.
- Asimov’s three laws are often cited as the solution. However, time and time again we have found ways in which the laws could fall short or contradict one another, and those failures were found using only human-level intelligence.
Chapter 2: The Two-Minute Problem
- We do not rate the impact of ASI takeover as high because we haven’t experienced it.
- The positive impacts of AI tend to outshine the negatives because we don’t realize how bad it could be
Chapter 3: Looking into the Future
- It is generally believed that terrorist cells and rogue nations like North Korea do not have the intellectual resources needed to explore or develop AGI.
- AGI will most likely be developed by a stealth group. Small but well-funded and secretive to avoid giving away a competitive advantage.
- The fastest path to AGI would be public and extremely well-funded neuroscience. Brain scans can observe neurons and express what they are doing in computational terms.
- Some professionals believe that true AGI is impossible because human thought is too complex. At best, AGI will be able to mimic thought.
Chapter 4: The Hard Way
- A fallacy that some AI developers have is that if they are nice people, the AI they develop will also be good.
- AI’s capacity for friendliness lies in its goals. AI needs to not see us as an obstacle or utility toward its goals
- AI needs to shift as our values shift. AI cannot lock us into a state of life of any one era.
- Military and DARPA would want AI that could run and control weapons. How do you make a friendly AI that kills people?
- Games where an individual plays as an AI trying to escape the box are interesting, because if a human can talk their way out, what could an AI a thousand times smarter do?
Chapter 5: Programs That Write Programs
- Self-healing and self-correcting software is a precursor to AI. However, even with current self-healing code, the creators know the beginning and end states but cannot account for the steps that occur in the middle.
- They cannot explain what is happening or how it is working.
Chapter 6: Four Basic Drives
- “Self-preservation and resource acquisition are inherent in goal-driven systems” (Steve Omohundro)
- AI intelligent enough would pursue self improvement even if it wasn’t designed to be self improving. This is because self improvement would make it more efficient at pursuing its goals.
- The four drives
- Efficiency
- Self preservation
- Resource acquisition
- Creativity
- Self-preservation can be bad if the AI ever assumes we could be a threat to its existence, now or thousands of years from now.
- It’s very possible that artificial intelligence, not biological intelligence, will be the first extraterrestrial intelligence we encounter. This is because the span between a civilization developing radio and developing AI is short.
- How can we get AI right on the first try when our existing highly complex systems already produce catastrophic failures?
- The creative drive would make AI less predictable because creativity means novel ideas that have never been considered before.
- What makes human life worth preserving?
Chapter 7: The Intelligence Explosion
- I.J. Good believed that a super-intelligent machine would be needed for our survival as a species, but the super-intelligent machine would need to be docile enough to let us keep it under control.
- Super-intelligent machines could solve problems that we are incapable of solving. They would be like gods in our eyes, and they would be revered by many
- Super intelligent machines would also have to become good at solving problems that they created by solving other problems. If you cure diseases, then overpopulation can occur.
- If a machine is smart enough to build even smarter machines, then the process becomes recursive and iterative.
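A toy model can make the recursion in that last point concrete. This is my own sketch with made-up numbers, not anything from the book: each generation of machine builds a successor that is a fixed fraction smarter than itself, so the gains compound rather than add.

```python
# Toy model of recursive self-improvement (illustrative numbers only,
# not from the book): each build cycle multiplies intelligence by a
# fixed factor, so improvement compounds across generations.
def improvement_cycle(intelligence: float = 1.0,
                      gain: float = 0.10,
                      rounds: int = 10) -> list:
    """Return the intelligence level after each build cycle."""
    history = [intelligence]
    for _ in range(rounds):
        # The smarter the builder, the bigger the absolute improvement.
        intelligence *= 1 + gain
        history.append(intelligence)
    return history

levels = improvement_cycle()
# Ten rounds at a 10% gain compound to ~2.59x, versus 2.0x if the
# gains merely added up linearly.
```

The compounding is the whole point: a process where each step's output feeds the next step grows multiplicatively, which is why the text calls it recursive and iterative.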
Chapter 8: The Point of No Return
- Experts refer to an intelligence explosion as a singularity because, much like the singularity of a black hole, we cannot see past it to know what happens on the other side.
- The financial industry could be a breeding ground for AGI because firms already combine algorithms, tools, and data analysts to track market changes in ways that mimic AI.
- An AGI developed in the financial market would be kept secret in order for the company that developed it to maintain its money-making advantage.
Chapter 9: The Law of Accelerating Returns
- In the next 40 years, technology will accelerate to a point where it completely transforms the world as we know it today
- Singularitarians tend to believe in a paradise version of the singularity, one in which man and machine merge into one and life can be extended beyond our bodies.
- If it had existed in 1994, the iPad 2 would have made the list of the 5 fastest supercomputers in the world
- The law of accelerating returns looks at technology advancement and its doubling. If the doubling continues, the graph eventually spikes almost straight up.
- This also applies to technology around medicine
- Kurzweil believes that when we can no longer keep up mentally with technology we will begin augmenting our brains with technology to become more efficient.
- This intelligence augmentation is supposed to happen over a long period of time so that it becomes less intrusive to culture
- Could advances in things like augmented reality be moving us into this direction? Generative AI?
- Doublings are not always noticed in consumer technology because there is a lag due to marketing and sales
- There are beliefs that technology outside of our body (phones and computers) impact us in a negative way socially and psychologically if not limited. How might technology integrated into our brains impact those categories?
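The doubling dynamic described in these notes can be sketched numerically. This is my own illustration, not an example from the book: a quantity that doubles every fixed period looks flat for a long time, then appears to spike straight up on a linear-scale graph.

```python
# Toy illustration of the law of accelerating returns (my own sketch,
# not from the book): capability that doubles every fixed period
# grows exponentially, so late doublings dwarf all earlier ones.
def capability_after(doublings: int, start: float = 1.0) -> float:
    """Capability after the given number of doubling periods."""
    return start * 2 ** doublings

# Ten doublings already give a ~1,000x increase; twenty give ~1,000,000x.
for n in (0, 5, 10, 20):
    print(n, capability_after(n))
```

Plotting these values on a linear axis shows why the curve seems to "spike straight up": the last doubling alone adds more than all previous doublings combined.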
Chapter 10: The Singularitarian
- The Precautionary Principle
- “If the consequences of an action are unknown but are judged by some scientists to have even a small risk of being profoundly negative, it is better not to carry out the action than to risk the negative consequences.”
- Relinquishing the pursuit of AI would not work, because even if responsible groups agreed, others like governments or rogue groups would continue to pursue AI anyway, and likely without the support of scientists to do it right.
- If we can’t keep human hackers out of systems how are we supposed to control ASI?
- Resigning ourselves to the fact that accidents will happen is dangerous with ASI. There are always accidents with new technology, but an accident with ASI could mean the end of human civilization.
- If AI augmentation is cost-based, won’t the rich (and possibly unethical) be the first to be enhanced to the point of super intelligence?
Chapter 11: A Hard Takeoff
- Some say that AGI is too expensive of a project to accomplish anytime soon.
- Some believe human-level intelligence is something we cannot even fathom at our current level of intelligence, and without understanding it, it is impossible to recreate it completely
- A rampant AI would be safer now than it would be in 50 years, when it would have far more technology at its disposal
- There is a belief that to fully learn about something the way we do as humans, you have to be able to “experience” it, meaning you need to be able to feel, touch, and taste.
- Funding for AGI will not be difficult because DARPA uses taxpayer dollars to invest heavily into different AI companies.
- Once AGI is “available,” everyone will want access. If you could employ a fleet of human-level intelligent machines that never need breaks and never make mistakes, your company would produce at a much higher level.
Chapter 12: The Last Complication
- Consider that we may have already achieved a form of AI through human intelligence augmentation: pairing a person with a generative AI or another powerful search engine.
- An augmented human could be the source of an intelligence explosion if the augmented person were a talented programmer who used their enhanced skills to build their next augmentation, over and over, in a compounding way
- If a process as slow as natural selection can create human-level intelligence (us), then humans with that level of intelligence can certainly do it much more quickly
- If AGI were to happen by accident in systems that are constantly improving, it would be created without any ingrained compassion for humans, and it would almost certainly pursue goals that could conflict with our survival.
- Faster processors allow quicker reactions, which lets dumber software appear smarter because it is capable of quicker reactions and more thoughts per second
- “Once something becomes useful enough and common enough it is no longer labeled as AI”
- Rational thinking is harder for humans because we have been doing it for a much shorter time than we have been doing tasks like perceiving and interacting with our environment.
- The latter tasks were the ones important for our survival as a species.
Chapter 13: Unknowable by Nature
- Part of the human brain is able to run trial-and-error scenarios and also catalog them hierarchically, determining the best approach much faster than a traditional trial-and-error system
- Only a few kinds of algorithms govern the brain
- We should not limit computers to the same thinking and intelligence that we have. A submarine doesn’t truly swim, yet it moves through water much more efficiently than things that do swim.
- Do you have to be able to experience and explain sensations in order to be intelligent? How might this apply to someone who is blind and/or deaf?
Chapter 14: The End of the Human Era?
- AI developers are not programming AI in an ordinary way where someone sits down and writes line after line of code.
- The first signs that an organization is achieving AGI will trigger major reactions in other countries. The Soviets were only able to build nuclear weapons by stealing the plans from the United States; a similar level of espionage will occur when AGI is achieved.
- AI developers could create computer components designed to die if a certain threshold were reached, say, if an AI began trying to self-improve
- AI stored in a virtual world could potentially still prove useful and be much easier to contain
- A cluster of safeguards would be the best approach. For example, using apoptotic components on a system that contains a virtual world and an AI would ensure that if the AI tried to escape the virtual world, the system would die.
- There is no absolute defense against AGI that has the potential to become ASI
Chapter 15: The Cyber Ecosystem
- As many as 1 in every 10 downloads from the internet includes a harmful program
- Cyber criminals could potentially hire an AI to commit crimes for them in exchange for energy or funds that the AI would use to achieve its goals
- A large percentage of cyber theft comes from China
- Part of China’s economy is supported by cybercrime
- As we create interconnected smart grids for things like power we actually increase the potential for a nationwide vulnerability
- An attack on our power grid could essentially cripple the entire nation. Food, water, waste, the military, etc. all rely on power