In a Reddit Ask Me Anything (AMA) session on Wednesday, the Microsoft founder was asked whether we should feel threatened by AI. While he doesn’t think AI will bring trouble in the near future, he says that could all change in a few decades.
“I am in the camp that is concerned about super intelligence,” Gates writes. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Gates’ stance on AI contradicts that of Microsoft research chief Eric Horvitz, who earlier this week said he doesn’t think AI poses a threat to human life. Here’s what Horvitz had to say:
“There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don’t think that’s going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”
Horvitz said “over a quarter of all attention and resources” at his research unit are focused on AI. The division’s work on AI has produced Cortana, a voice-controlled virtual assistant for Windows Phone that will come to desktop PCs when Windows 10 is released.
Gates’ stance is in line with what many experts believe. Stephen Hawking has said AI “could spell the end of the human race.” And Musk, speaking at an MIT conference in October 2014, called AI humanity’s biggest existential threat. The Tesla founder has also tweeted that AI could be more dangerous than nuclear weapons.
Google CEO Larry Page has also previously spoken on the subject, but didn’t seem to express any concern.