There’s a belief shared by many computer scientists: if we manage to create a machine smarter than us, it will also be better than us at creating an even smarter machine, which will, in turn, be better still at the task. Thus, the machine will grow in intelligence, faster and faster, quickly becoming impossible to control.
This is called the intelligence explosion. Many also call it the end of the world. Others, our best chance at utopia.
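To see why this feedback loop is thought to run away, here is a toy sketch in Python. It is purely illustrative: the starting level and the improvement rate are invented numbers, not anyone’s estimate; the only point is that gains proportional to current intelligence compound.

```python
# Toy model of the "intelligence explosion" feedback loop.
# Assumption (illustrative only): each generation improves on itself
# in proportion to how smart it already is, so growth compounds.

intelligence = 1.0   # arbitrary units; call this roughly human level
rate = 0.5           # invented improvement factor per generation

for generation in range(1, 11):
    # A smarter machine is better at building its successor,
    # so the size of each gain scales with current intelligence.
    intelligence += rate * intelligence
    print(f"Generation {generation}: intelligence {intelligence:.1f}")
```

Run it and the numbers climb slowly at first, then steeply: the essence of the argument, whatever the true rate turns out to be.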
Back in the fifties, creating an Artificial Intelligence (AI) seemed pretty easy to scientists who were still getting used to the idea of computers. They thought humanity could create digital life within a few years. But we’re still trying.
We may, however, finally be getting close.
Following the scientifically exuberant fifties and sixties, when researchers optimistically ploughed on with AI research, the field entered the first ‘AI Winter’ as funding dried up after the ambitious promises of early pioneers failed to materialise.
Since then, research has fluctuated, and all the while computers have grown ever more powerful. Now, with vast improvements in computing, along with huge investments from tech giants and governments, the vision of a superintelligent machine may be realised within our lifetime.
Our society is already hugely influenced by artificial intelligence, from search engines to stock markets and self-driving trains. But what we’ve created so far is narrow AI – artificial intelligence that is highly advanced at a single, narrow task, like playing chess.
The AI quest
The holy grail that many hope to find is Artificial General Intelligence (AGI), a computer capable of performing any intellectual task a human can.
Peter Voss, head of the AI research and development company AGI Innovations Inc, has been chasing that grail for his entire career. He believes AGI could be our salvation, fixing the messes humanity has made and building a better future.
“I think it’s a pity that humans, just as they learn how to live life, die. Just when you get the hang of it. For humans to be able to live a lot longer is an incredibly hard problem, and I think we’ll need AGI to solve that,” Voss explains.
But his hope for AGI doesn’t stop there. A super-intelligent benevolent mind would be capable of far greater feats: “I think equally important are problems of civilisation – pollution, energy usage, poverty – that we need more intelligence in order to solve them. I think uplifting life [with AGI] is definitely what I see not just as desirable, but possibly as something necessary.
[pullquote align=”right”]”Maybe we’re not smart enough to handle the level of civilisation that we have. I’m not sure that humans are capable of governing mankind as it gets more complex.”[/pullquote]”Maybe we’re not smart enough to handle the level of civilisation that we have. I’m not sure that humans are capable of governing mankind as it gets more complex. So I see AGI as being important generally in uplifting our existence in terms of abundance, in terms of material, living longer, and governance.”
Not everyone agrees with Voss. James Barrat, author of Our Final Invention, told me that “when he was young, [British mathematician] I. J. Good believed that we needed super-intelligence to solve the problems of our existence, to solve famine, to solve disease, to solve asteroid strikes, to solve the nuclear arms crisis. He thought we should get a supercomputer to sort things out.
“But as he grew old, Good explained that ‘I once wrote that super-intelligence would save mankind, but I don’t believe it now, I think super-intelligence will destroy it. We will be led like lemmings into a technological future we can’t survive.’ So that was a depressing revelation.”
And therein lies the great debate within the AI field – will our future creations play nice?
Morality bytes
Voss feels that “increased intelligence leads to improved morality,” in that “a lot of what we regard as immoral behaviour is out of short-term fear and ignorance. Of course better intelligence and reasoning can help to mitigate both of those.”
In his vision of the future, we would all have access to an ultra-smart AI that would tell us why we don’t need to go to war, and why we don’t have to do immoral things.
“If you basically had better reasoning, and were less fearful, because of the AGI – which is like having somebody with a lot of common sense and being very rational and persuasive – it could talk you through that, and explain to the electorate and the politicians why certain actions probably don’t make a lot of sense, and are not actually in their interest. And I think that will mitigate a lot of [im]moral behaviour.”
“AGI does not necessarily need to end in tears, and, in fact, is more likely to improve morality in many ways.”
But Barrat doesn’t agree with the notion that more intelligence means greater morality: “That’s absolute nonsense, does he base that on us? Look at how many humans we killed in the last century, look at war. Does pacifism come with intelligence?”
His book is recommended reading by groups set up to try to control and limit future AI, such as the Machine Intelligence Research Institute (MIRI), the Cambridge Centre for the Study of Existential Risk and the Future of Humanity Institute.
Our last chance
The fear is that if we get AI wrong now – create something harmful to humanity – it will be too late for us to do anything about it. It could be indifferent to mankind, simply destroying us “for our atoms”, or taking the energy grid offline. “It could happen in so many different ways. It could be a battle between two nations; it could get out of control,” warns Barrat.
“In the 1920s-30s nuclear fission was thought of as a utopian way to get free energy out of the atom, but the world learned about fission at Hiroshima when it was weaponised. We’re following the exact same trajectory right now,” Barrat continues.
“We’ll have the utopia, we’ll have virtual brains the size of computers, it will be wonderful, but then it’s already being weaponised because of autonomous drones and battlefield robots; 56 nations are developing battlefield robots right now.”
In the view of Barrat and the aforementioned institutes, now is our only chance to stop AI getting out of control. “The dystopians like me still believe there’s hope. We’re not complete Luddites, we think there may be ways to mitigate the danger, and part of it is educating people,” Barrat says.
“I’m not 100 per cent convinced that we will be destroyed by this technology, I think we have a window now that we’ll never have again, to do hard work, to fight for friendly AI, for transparency, to support organisations that are looking hard at this technology. And be sceptical about what you’re being told.”
[pullquote align=”right”]”Our only chance to control the AI is before it is created.”[/pullquote]University of Oxford professor Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies and head of the Future of Humanity Institute, agrees that our only chance to control the AI is before it is created.
“If humanity were sane, if we had our global coordination act together, we would work hard on these control problems, and we would not develop AI until we had solved them. Once we thought we had a solution, maybe we would wait another generation or two, just to give time to double check the solution, maybe post enormous incentives for people to poke holes in it. And when we finally had convinced ourselves that we had done everything we could to verify that it would work, then we would launch the AI and live happily ever after.”
Control problem
Unfortunately, it’s highly unlikely that mankind will ever work in a truly coordinated fashion: “Rather, people will rush forward and try to do it as soon as possible. So there is this race on between efforts to develop AI and efforts to develop a solution to how to make it safe.
“What’s saved us so far is that it turns out it’s really hard to build AI with the same general intelligence that we have; we’re kind of counting on that continuing for a while, to give us enough time to a) work out a solution to the control problem and b) persuade a larger fraction of the practitioners in the field of the importance of solving the control problem, so that when, finally, some product actually looks like it is on the verge of succeeding, there will be both an understanding that something needs to be done and the tools and principles to do it.
“If [AGI] happened right now it would look pretty bleak; we would have to more or less rely on being lucky and the problems somehow turning out to be far easier than they look.”
One of the measures proposed by those hoping for better control of AI is government regulation, but Bostrom believes more work needs to be done to get to that stage. “At the moment, I don’t really see any real way in which government regulation could make a helpful contribution here. I think there needs to be a lot more groundwork, in terms of bringing an understanding of what the problem is and doing some basic science there.
“The ultimate role for regulation I see here though, is not that we’d forever have a regulatory regime that would prevent AI from being built, or forever prevent a dangerous AI. It’s more that it would be in place for the first phase of this, until we have developed some mature friendly superintelligence who could then help us police against any other attempts to create unfriendly AI. So it seems important to get the first wave right, and then we can, to some extent, delegate the subsequent problems to these superintelligent minds.”
Voss, on the other hand, does not see the need for such safeguards. “The scary stories that are being pushed by places like MIRI – are they plausible? Some of their arguments are so incredibly weak and poor that I’m really surprised that so many smart people are falling for them… It puzzles me.”
And even if Voss thought we needed safeguards, he doubts they would be practical, or realistically possible. “I think any safeguards are going to rely on human psychology, and human psychology isn’t safe… it would be incredibly hard for any researcher or company owning an AI to abandon their project. AGI is not going to happen overnight; there will be many AGIs with IQs of 80 or 90 for many months or years.
“It’s going to sort of creep up on us – we’re going to be using these early AGI designs and they’re going to get better and better all the time. That’s going to be the period where one has some understanding of the dynamics of it, and hopefully can guide the application of AGI. I think the best we can do is be at the forefront, be rational, try and understand the dynamics as they evolve in the early stages.”
Putting a leash on AI
The simple fact is that the question of how to control an intelligence substantially greater than our own is a difficult one to answer. For example, AI researcher Eliezer Yudkowsky devised an experiment in which he, assuming the role of an AI, had to convince someone to release the AI from a controlled box and unleash it onto the wider internet.
Each time he ran the experiment, he succeeded – and that was with merely human intelligence. The view of many who cite this experiment is that a superintelligent AGI will inevitably be able to trick or convince its way out of any enclosure. Voss concurs: “I think it’s been pretty well demonstrated that an AGI that is smart enough to cause real damage will be smart enough to persuade people to let it roam free.”
[pullquote align=”right”]If machines don’t overthrow us, convert us into computers, or force us to sign up to Google+, we may reach the technological utopia that AI researchers dream of.[/pullquote]Despite dismissing such safeguards, Voss is hesitant to say that AGI will always be friendly, as it depends on who designs it. A military AI, or one created by a corporation for short-term gain, could cause harm.
The safest approach, he says, is to make AGI generally available. “Yes, there is a risk, but to say that it will inevitably end in tears – that’s very sad, because I hope it won’t lead to a backlash and prevent people moving forward on it.”
If we’re lucky, if machines don’t overthrow us, convert us into computers, or force us to sign up to Google+, we may reach the technological utopia that AI researchers dream of. We may live indefinitely, in total luxury, all thanks to a machine vastly more intelligent than us. And for this to happen, we have to give up control.
Intelligence explosion
It’s a point on which Barrat and Voss agree, though they view the outcome in very different lights. Barrat believes that “we offload more and more of our cognitive abilities now. And it’s great. [But] we’re still in the honeymoon period. I see us gradually ceding power and volition until some point where there’s an intelligence explosion. And then we won’t really have a choice any more; I think it’ll be kind of a bummer.”
Voss, meanwhile, is more excited about this future, envisioning a Siri-like personal assistant, but “way more powerful and much more personalised; it’ll be like people who have been married for a long time where, after a while, you basically don’t know whose idea something was, because you’ll be consulting your personal assistant all the time for things and bouncing ideas off it. So, after a while, you won’t know if it was your decision or your idea or your AGI’s.
“Initially one per cent of your decisions and memories are external in the personal assistant and, over the years, that becomes twenty per cent, thirty per cent; what if it becomes ninety per cent, and only ten per cent of thinking actually happens in your biological brain? Ninety per cent of your thinking and personality is in the external brain.
“That way we’ve really upgraded our intelligence, but the self has either moved into the device, or it’s one and the same. It’s a bit intimidating and scary, that thought, but that’s going to happen and I think it’s desirable.”
Voss’ notions may seem startling to many, but the issue is, in his view, just a matter of perception. “We still see it as an ‘us and them’, the ‘iPhone glued to my ear’, but it will truly be an expansion of the self… We won’t be ourselves without the AGI assistant.”
So, should AGI become a reality, we face two possible futures: one in which we are wiped out, another in which our individuality, self-determination and independence have been all but eroded, offloaded onto a greater being. In either case, it’s hard not to think that humanity as we know it is going to become something else entirely.
Illustration by Ida Amanda Ahopelto