In an interview with The Guardian, Russel said that experts were “spooked” by their own success in this field and compared the advance of artificial intelligence to the development of the atomic bomb.
It is hard to say when artificial intelligence could surpass human intelligence – the most optimistic scenarios say ten years or, on the other hand, a few hundred years. Most experts believe that machines more intelligent than humans would be developed this century. No one knows what will happen in this case, and we may not even want to find out, so Russell once again called for international treaties to regulate the development of AI.
Prof. Russel stressed that urgent action is needed to make sure humans remain in control of superintelligent AI. He warned that AI had been designed with a specific methodology and a general approach and that people are not careful enough to use such a system in real-world situations. He cited the frequently mentioned example of using AI to help treat a number of diseases in the future, but asking artificial intelligence to cure cancer as quickly as possible could be extremely dangerous. Namely, as AI is trying to find a way to cure cancer, it would run experiments on humans and probably find ways of inducing tumors in the entire human population. In short, artificial intelligence could use humans as guinea pigs in experiments that could prove fatal to all of humanity.
In fact, for machines to pose a serious risk to humans, they wouldn’t even need to be more intelligent than us, Russel notes, pointing to the algorithms that decide what people see on social media. With the recent publication of internal Facebook documents, we were able to find out just how harmful the content we digest from social networks can be, and the end result is that algorithms start manipulating users – effectively “brainwashing” them – so that their behavior becomes predictable.
“Yeah, I think we are increasingly spooked,” he told The Guardian, showing concern over the use of AI for military purposes. “It reminds me a little bit of what happened in physics where the physicists knew that atomic energy existed, they could measure the masses of different atoms, and they could figure out how much energy could be released if you could do the conversion between different types of atoms […] And then the atom bomb happened, and they weren’t ready for it.”
To avoid a number of potential problems, Russel believes that machines need to be developed so that they always check in with humans to meet their objectives, i.e. they must never act on their own. Finally, he emphasized that there is now a choice about what the future holds, and it’s really important for the public to be involved in those choices because they are the ones who will (or not) benefit from them.