Author and film-maker James Barrat's fears of the technological singularity are real. For us here at #itchysilk it was a chance opportunity to watch the film Ex Machina (2014), written by Alex Garland, that re-ignited our interest in this divisive area. In the film, the AI robot Ava eventually escapes into society having passed the Turing Test. In her/its wake she/it leaves her/its creator Nathan Bateman dead (by her/its own machinations) and Caleb (the unwitting subject used to see if she/it could pass the Turing Test) trapped and presumably fated to a slow, agonising death.
Of course, many films and books the world over have covered the subject of AI and its repercussions for humans: think of Stanley Kubrick's classic 2001: A Space Odyssey (1968) or Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968), which evidently formed the basis for the film Blade Runner (1982). Films and books can be the vessels that prepare us for an eventual reality. In this instance, that is a scary prospect.
It is into this scary and apocalyptic scenario that James Barrat's book, Our Final Invention (2013), plunges with aplomb and clarity. Far from the graded colours of Hollywood, James Barrat states clearly that we as humans are not ready. With limited checks on the exponential growth of the technology needed to create the first AI, James Barrat argues that, rather than fearing nuclear annihilation, we should be more worried that future AI will ultimately destroy us as humans. And if it does not destroy humans, AI will certainly (and quickly) usurp our status as the most powerful species on the planet.
In many ways it's not that hard to see that an AI would view us as a threat to its existence. We are, after all, an as yet unchecked creation of another being/God (should that be your belief).
‘Every mammal on this planet instinctively develops a natural equilibrium with their surrounding environment, but you humans do not… Human beings are a disease, a cancer of this planet. You are a plague, and we are the cure.' (Agent Smith, The Matrix, 1999)
So, let's first define AI for those reading. What does it mean, and how has it impacted our world so far?
In simple terms, artificial intelligence is the science of giving machines abilities that previously were ours alone: abilities such as object identification, understanding and using language, translation, navigation, and many others. But here's a ‘gateway' thought for those who think AI is sterile and boring. It's the deepest exploration ever undertaken by science into what we are as humans. It involves logic, psychology, neuroscience, mobility, mathematics, perception, philosophy, and much more, in addition to computer science, and it asks what we're trying to do when we attempt to mirror human cognition in a machine. It asks what we are, and what our superpower – intelligence – really is. We're at the beginning of an AI revolution that will change every aspect of our lives and our environment, if it doesn't destroy us first.
Ray Kurzweil sees the technological singularity positively. Should we really fear ‘it'?
In its current use the singularity is a manufactured term that's naively optimistic. It was first coined by science fiction writer and mathematician Vernor Vinge. He said in the 1990s that sci-fi writers had a hard time writing about anything that could happen after smarter-than-human machines were invented. That's because you'd have to be super-intelligent yourself to know how they would behave and impact the world. He compared it to the event horizon of a black hole. That's the boundary beyond which not even light can escape the hole's gravitational pull. You can't see beyond it.
The inventor Ray Kurzweil rebranded the singularity as a positive event – the point at which nano-info-bio-cogno technologies converge and solve all our problems, including mortality. It's the opposite of Vinge's term. I think the technological singularity, in Vinge's sense, is an extremely dangerous period in our history, not a positive one. It identifies the point at which we will either create superintelligence that we can control, or uncontrollable and destructive AI.
In your book you state there are stealth AI companies. Can you discuss that briefly? Is money the sole driver?
A giant economic wind propels the development of advanced AI. About $40 billion has been invested in AI since 2009, and the amount invested has doubled each year. McKinsey & Company estimates that by 2025 the space of AI and automation will be worth between $10 trillion and $20 trillion. This will make it the economy's largest sector (that same year, according to Gartner, AI will perform one third of all jobs).
To avoid giving away any special insights they might have, some AI companies operate in secrecy. The last time I inquired, Peter Thiel, for example, owned several ‘stealth' AI companies. Developing advanced AI in secret is more dangerous than developing nuclear weapons in secret. Money is the main driver of AI research, with weapons development second. Way down the hierarchy is the scientific study of intelligence.
In terms of intelligence, why will we never reach the level of AI? Are we too illogical?
Currently no AI system compares to human general intelligence, though AI beats humans at a variety of narrow tasks like playing some games, search, navigation, finding errors in legal contracts and, recently, object identification. AI can drive a car, perform search, translate languages or play Go, but no single cognitive architecture can do many things, as humans can. And while the intelligence of AI is growing exponentially, ours is not growing much, if at all. Most AI researchers think creating a generally intelligent AI – called AGI, or artificial general intelligence – is likely this century.
Asimov and his Three Laws. The technology of AI is speeding ahead of the checks in place. Why, and are those creating the technology consciously ignoring the possible repercussions?
I refer you to my book, Our Final Invention (2013). Part of Chapter One discusses why Asimov's Laws don't work and were never intended to work in the real world. They were intended to generate dramatic conflict for fictional stories. The book in which they were introduced, I, Robot (1950), is a collection of stories in which the laws never work because of conflicts among them and unintended consequences.
Regarding safety checks on AI: when it comes to technology, our innovation runs ahead of our stewardship, again and again. In my book I explore nuclear fission as an example of a dual-use technology that was developed quickly and rashly without a maintenance plan. AI is being developed the same way. It is closely tracking fission, especially in its rapid weaponization and covert development. I'm not sure we're smart enough to develop AI safely.
What checks do you think we need in place to avert the envisaged disaster?
We need transparency among AI companies and we need the companies and countries developing AI to be open to regulation. We need something equivalent to the IAEA (the International Atomic Energy Agency), which governs nuclear fission and ensures compliance with safety rules.
What are some of the ethical problems we face with a conscious AI?
Consciousness is a very difficult subject. We have a hard time defining what it is. Putting it aside for the moment, if we create machines that can suffer, that have expectations about the future that can be thwarted, we may have to consider their welfare and issues like robot slavery and exploitation. We may ultimately have to give them quasi-human status. But there's a long list of animals, some closely related to humans, which we know should have protectable rights. We should prioritize their welfare long before the robots'.
Talk about the intelligence explosion. Why is it such a frightening prospect in terms of AI, and what are your fears?
The intelligence explosion is a concept first set out by statistician I. J. Good in the 1960s; I wrote a chapter about him in my book. It is an important concept in AI risk. The basic idea is that if we create machines that are better than us at everything we use our brains for, they'll be better than us at artificial intelligence research and development. Then the machines will set the pace of intelligence advancement, not us. They could quickly advance to a level of mathematical and logical intelligence thousands or millions of times greater than ours.
Demis Hassabis, the co-founder of AI company DeepMind, recently stated that he was afraid that when the intelligence explosion became possible, AI companies would fail to collaborate to mitigate its danger.
Many films have covered the topic of AI, Ex Machina perhaps the most frightening in this respect. Why are we so fascinated with the creation of an AI robot? Does it speak to our own need to be God?
Good question. The desire to be Godlike, like the idea of God itself, is a powerful superstitious notion. It seems to be deeply embedded in us. We have been thinking about how to create humans or human-like automatons for a long time. In a long endnote in my book, I trace this predilection back to Ancient Greece and Rome. In modern times we can point to Frankenstein (1818) and the Golem of Prague.
How real is the fear that if/when AI reaches the singularity, it will quickly see us as a threat to the world, and maybe to ourselves?
The technological singularity, as described by Vinge, is a threat to the world because of the possibility of an out-of-control intelligence explosion. A machine many times smarter than we are would not necessarily want to hurt us. But whatever goal it pursues, it will find it useful to commandeer all available resources in pursuit of that goal, including the resources that keep us alive, and even the atoms that make up our bodies. Intelligent machines won't be kind by default. Being smarter doesn't mean being kinder. Unless we very carefully make super-intelligent machines friendly towards humans, they'll be ambivalent, and very dangerous.
Is the ultimate intelligence one that fuses human and machine? Is that an even scarier idea than AI alone?
You bet. Ray Kurzweil and others postulate that if we can meld with the machines we can guide them ethically. But in our own political world we see there's no shortage of self-serving psychopaths who send whole nations to war for political ends. Corporate heads have a long, sinister track record of cutting corners, causing deaths and suffering, to improve the bottom line. Augmenting their brains with AI would make them more capable psychopaths. So, we must ask whose brains will be augmented, and why? Clearly augmentation isn't a solution to the intelligence explosion problem.
PURCHASE JAMES BARRAT'S BOOK OUR FINAL INVENTION
First image by Vincent Mattina
Second image unknown