Machines have been beating humans at games and contests for years: chess, Scrabble, checkers, Jeopardy! and now the ancient Chinese board game Go. Google's AI system won 4 out of 5 games in a $1 million match against South Korea's Lee Sedol, rated among the world's best Go players.
“I didn't expect to lose. I didn't think AlphaGo would play the game in such a perfect manner. Personally, I am regretful about the result, but would like to express my gratitude to everyone who supported and encouraged me throughout the match,” Lee said to ABC News at a post-game press conference.
Google's Go-playing supercomputer, AlphaGo, was designed and operated by Google's London-based DeepMind AI team headed by Demis Hassabis.
The 2,500-year-old board game is reputed to be far more complex than chess. It is played with black and white stones, and the aim is to surround more territory than the opponent. Go appears to be a simple game and begins with an empty board. Two players (one using black stones, the other white) alternate placing stones on the intersections of the grid, trying to grab territory without getting their pieces captured.
According to Alan Levinovitz writing in Wired a few years ago, “there are 400 possible board positions after the first round of moves in Chess and 129,960 in Go. There are 35 possible moves on any turn in a Chess game, and 250 for Go.” DeepMind's David Silver and Demis Hassabis noted that the number of possible board configurations in Go is larger than the number of atoms in the universe.
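The first-round figures quoted above can be reproduced with simple counting, ignoring board symmetry and special rules (this is my own back-of-envelope check, not DeepMind's calculation):

```python
# Chess: each side has 20 legal opening moves (16 pawn moves + 4 knight moves),
# so after one move by each player there are 20 * 20 positions.
chess_first_round = 20 * 20

# Go: a 19x19 board has 361 intersections; the first player picks any of 361,
# the second any of the remaining 360.
go_first_round = 361 * 360

print(chess_first_round)  # 400
print(go_first_round)     # 129960
```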
Drake Baer, in a Tech Insider piece describes DeepMind's history and role in the development of AlphaGo:
DeepMind didn't “program” AlphaGo with evaluations of “good” and “bad” moves. Instead, AlphaGo's algorithms studied a database of online Go matches, giving it the equivalent experience of doing nothing but playing Go for 80 years straight.
“This deep neural net is able to train and train and run forever on these thousands or millions of moves, to extract these patterns that leads to selection of good actions,” says Carnegie Mellon computer scientist Manuela Veloso, who studies agency in artificial intelligence systems.
Google acquired DeepMind in 2014. Founded in 2010 by chess prodigy-turned-artificial intelligence researcher Demis Hassabis, the company's mission is to “solve intelligence,” and it claims that “the algorithms we build are capable of learning for themselves directly from raw experience or data.” In February 2015, DeepMind revealed in Nature that its program had learned to play vintage arcade games like Pong and Space Invaders as well as human players. Now it's about to master Go, a game that once seemed unmasterable for artificial intelligence.
A Google DeepMind paper describes the technical details. The Nature video below introduces the game, and DeepMind's Demis Hassabis explains the process required by the AI.
What's it all mean?
Go's appeal lies in its depth through simplicity, which is also why it has been so difficult for computers to master until now. There is limited data available from looking at the board, and choosing a good move demands a great deal of intuition. Until AlphaGo, no one had been able to build an effective evaluation function. By using deep learning and neural networks to teach itself to play, AlphaGo processed millions of Go positions and moves from human-played games and learned to choose moves based on that experience.
Sam Byford wrote in The Verge:
The twist is that DeepMind continually reinforces and improves the system’s ability by making it play millions of games against tweaked versions of itself. This trains a “policy” network to help AlphaGo predict the next moves, which in turn trains a “value” network to ascertain and evaluate those positions. AlphaGo looks ahead at possible moves and permutations, going through various eventualities before selecting the one it deems most likely to succeed. The combined neural nets save AlphaGo from doing excess work: the policy network helps reduce the breadth of moves to search, while the value network saves it from having to internally play out the entirety of each match to come to a conclusion.
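The division of labor Byford describes can be sketched in miniature. The tables and names below (`policy_net`, `value_net`, `select_move`, the position labels) are hypothetical stand-ins for trained networks, not DeepMind's implementation; they only illustrate how one network narrows the breadth of the search while the other replaces playing games out to the end:

```python
# Toy "policy network": for a given position, a probability over candidate
# moves. Used to reduce the BREADTH of the search to promising moves.
policy_net = {
    "start": {"A": 0.6, "B": 0.3, "C": 0.1},
}

# Toy "value network": an estimate of how good each resulting position is,
# so the search need not simulate the rest of the game (reducing DEPTH).
value_net = {"A": 0.55, "B": 0.72, "C": 0.30}

def select_move(position, policy_floor=0.2):
    # Breadth reduction: drop moves the policy network rates below a floor.
    candidates = [m for m, p in policy_net[position].items() if p >= policy_floor]
    # Depth reduction: rank the survivors by the value network's estimate
    # instead of playing each line out to a final score.
    return max(candidates, key=lambda m: value_net[m])

print(select_move("start"))  # "B": survives the policy cut and has the best value
```

Move C is never evaluated because the policy network prunes it, and neither A nor B is simulated to the end of a game; that is the work the combined networks save.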
Thus AlphaGo gets better by playing itself. DeepMind believes that the principles it uses in AlphaGo have broader application and implications. Hassabis makes a distinction between “narrow” AIs like Deep Blue and artificial “general” intelligence (AGI), the latter being more flexible and adaptive. He thinks AlphaGo's machine learning techniques will be useful in robotics, smartphone assistant systems, and healthcare.
Intuition has long been the sole domain of humans. But AlphaGo, Google and DeepMind have now shown that it is possible to build systems with an effective evaluation function that make choices based on a growing universe of experiential data. Reading medical scans and souping up Siri and other digital assistants are just the tip of the iceberg. This technology has implications for self-driving vehicles of all types (air, land and sea), medical diagnostics, and predictive stock picking, to name just a few.
Match 5, Lee Sedol vs AlphaGo
Watch DeepMind's program AlphaGo take on the legendary Lee Sedol, the top Go player of the past decade, in a $1M 5-game challenge match in Seoul. Match commentary by Michael Redmond and Chris Garlock.