Humans must watch for signs of robotic super-intelligence, but should have enough time to address them when they appear, a top computer scientist has said.
Oren Etzioni, CEO of the Allen Institute for AI, penned a recent article titled “How to know if artificial intelligence is about to destroy civilisation”.
He wrote: “Could we wake up one morning dumbstruck that a super-powerful AI has emerged, with disastrous consequences?
“Books like Superintelligence by Nick Bostrom and Life 3.0 by Max Tegmark, as well as more recent articles, argue that malevolent super-intelligence is an existential risk for humanity.
“But one can speculate endlessly. It’s better to ask a more concrete, empirical question: What would alert us that super-intelligence is indeed around the corner?”
He likened warning signs to canaries in coal mines, which were used to detect carbon monoxide because the birds would collapse before the gas reached levels dangerous to miners.
Prof Etzioni argued that these warning signs appear when AI programmes develop new capabilities.
He continued, writing for MIT Technology Review: “Could the famous Turing test serve as a canary? The test, invented by Alan Turing in 1950, posits that human-level AI will be achieved when a person can’t distinguish conversing with a human from conversing with a computer.
“It’s an important test, but it’s not a canary; it is, rather, the sign that human-level AI has already arrived.
“Many computer scientists believe that if that moment does arrive, superintelligence will quickly follow. We need more intermediate milestones.”
He suggested that the “automatic formulation of learning problems” would be the first canary, followed by self-driving cars.
He welcomed “limited self-driving cars”, but said they would become a canary only once “human-level driving” is achieved, because driving requires “real-time decisions based on the unpredictable physical world and interaction with human drivers”.
Prof Etzioni then named AI doctors as the third canary, because such systems would need to understand people, language and medicine as a human doctor does.
Finally, he named AI’s potential ability to understand “people and their motivations” as a fourth canary.
He added: “I said to Alexa ‘my trophy doesn’t fit into my carry-on because it is too large. What should I do?’ Alexa’s answer was ‘I don’t know that one’.
“Since Alexa can’t reason about sizes of objects, it can’t decide whether ‘it’ refers to the trophy or to the carry-on. When AI can’t understand the meaning of ‘it’, it’s hard to believe it is poised to take over the world.
“If Alexa were able to have a substantive dialogue on a rich topic, that would be a fourth canary.”
Luckily, he believes his list demonstrates how far we are from super-intelligence, and that we will have a comfortable amount of time to deploy “off-switches”.