The Future of Life Institute (FLI), a volunteer organization working to mitigate existential risks facing humanity, has just donated a total of $7 million to 37 research teams “to help keep [artificial intelligence] beneficial.”
Most of that funding is from the $10 million Elon Musk, one of FLI’s top donors, gave to the organization in January 2015. FLI says the money will be donated “over the next three years, with most of the research projects starting by September 2015. The winning teams will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.”
FLI hasn’t called for companies to completely stop advancing AI, but the group has said that all work should be towards “safeguarding life.”
According to CNET, however, the grant winners “are not necessarily expected to beat up on AI. Many, in fact, are focused on understanding more about AI, how it could impact humanity, and other topics. A Duke University research project that netted $200,000 will study ethics and AI, while another from Rice University will spend its $69,000 on how AI will impact working in the future.”
Known for his businesses on the cutting edge of tech, such as Tesla and SpaceX, Musk is certainly wary of AI. At a conference at MIT in October 2014, he likened improving AI to “summoning the demon” and called it the human race’s biggest existential threat.
He has also tweeted that AI could be more dangerous than nuclear weapons, and he has called for the establishment of national or international regulations on the development of AI.
Musk, of course, isn’t alone in this mindset. Many others believe AI could be troublesome for humans. Microsoft founder Bill Gates, for example, doesn’t think AI will bring trouble in the near future, but he says that could all change in a few decades.
“I am in the camp that is concerned about super intelligence,” Gates said during a recent Reddit Ask Me Anything. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
And world-renowned physicist Stephen Hawking has been voicing this apocalyptic vision for a while. “The development of full artificial intelligence could spell the end of the human race,” he said. “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”