AI becomes grandmaster in ‘fiendishly complex’ StarCraft II
DeepMind’s AlphaStar masters game dubbed ‘next grand challenge for AI’ in just 44 days.

An artificial intelligence (AI) system has reached the highest rank of StarCraft II, the fiendishly complex and wildly popular computer game, in a landmark achievement for the field.
DeepMind’s AlphaStar outperformed 99.8% of registered human players to attain grandmaster level at the game, in which players build civilisations and battle their inventive, warmongering alien neighbours.
The AI system mastered the game after 44 days of training, which involved learning from recordings of the best human players and then playing against both itself and versions of the programme designed to probe its weaknesses.
“AlphaStar has become the first AI system to reach the top tier of human performance in any professionally played e-sport on the full unrestricted game under professionally approved conditions,” said David Silver, a researcher at DeepMind.
More than $31m in prize money has been handed out from thousands of StarCraft II e-sport tournaments since the game was released in 2010.

Players start with a small number of worker units that can gather resources, construct buildings, develop new units and technologies, and embark on scouting missions to gain intelligence on opponents. Top players need adaptable short- and long-term strategies to grow and defend their bases while laying waste to the opposition.
To complicate matters, players cannot see the whole “map” of the game, so decisions are made on partial information.
“Ever since computers cracked Go, chess and poker, the game of StarCraft has emerged, essentially by consensus from the community, as the next grand challenge for AI,” Silver said. “It’s considered to be the game which is most at the limit of human capabilities.”
DeepMind created AlphaStar “agents” that play as each of the races in the game, namely Protoss, Terran and Zerg. Each race has different capabilities and technologies which favour distinct defensive and offensive strategies.
AlphaStar first learned the basics from watching top human players. Then it entered the “AlphaStar league”, where it played not only itself but also “exploiter” agents, versions of the AI that specifically targeted AlphaStar’s weak points. The training ensured that AlphaStar became a formidable opponent against all three races and every strategy.
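The league described above can be caricatured in a few lines of Python. This is a toy sketch with made-up `Agent` and `skill` mechanics to show the shape of the training loop, not DeepMind’s actual code or anything close to a real reinforcement-learning system:

```python
import random

class Agent:
    """Toy stand-in for a learned policy; `skill` is illustrative only."""
    def __init__(self, name, skill=0.0):
        self.name = name
        self.skill = skill

    def train_against(self, opponent):
        # Crude stand-in for a learning update: playing any match
        # improves the agent, losing to a stronger opponent teaches more.
        self.skill += 0.1 if self.skill >= opponent.skill else 0.2

def run_league(rounds=10, seed=0):
    random.seed(seed)
    main = Agent("main")
    # "Exploiter" agents exist only to find the main agent's weak points.
    exploiters = [Agent(f"exploiter-{i}", skill=random.random())
                  for i in range(3)]
    for _ in range(rounds):
        # The main agent plays past versions of itself and every exploiter.
        for opponent in exploiters + [Agent("past-self", main.skill * 0.9)]:
            main.train_against(opponent)
        # Exploiters keep adapting to target the improving main agent.
        for ex in exploiters:
            ex.train_against(main)
    return main

agent = run_league()
```

The key idea the sketch preserves is that the main agent never trains against a fixed opponent: the exploiters adapt alongside it, so any weakness it develops becomes a target.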
The DeepMind team restricted AlphaStar’s capabilities, ensuring for example that it could not perform moves at superhuman speeds. This proved crucial to the AI’s success: instead of beating humans through speed alone, it was forced to learn winning long-term strategies.

Oriol Vinyals, the lead researcher on the project, said AIs like AlphaStar could potentially be used to improve personal assistants, self-driving cars, weather predictions and climate models. The work was published in the scientific journal Nature.
Dan Klein, a professor of computer science at the University of California, Berkeley, who was not involved in the work, said it was an exciting achievement driven by key advances. “What’s great about StarCraft as a testbed for AI is that all its complexities happen at once,” he said.
While DeepMind states that it will never be involved in military work, and StarCraft II is not a realistic war simulation, the results will be of interest to the military, said Noel Sharkey, emeritus professor of AI and robotics at the University of Sheffield.
In March, a US government report described how AI enriched battlefield simulations and allowed wargamers to assess the potential outcomes of different tactics.
“Military analysts will certainly be eyeing the successful AlphaStar real-time strategies as a clear example of the advantages of AI for battlefield planning. But this is an extremely dangerous idea with the potential for humanitarian disaster. AlphaStar learns strategy from big data in one particular environment. The data from conflicts such as Syria and Yemen would be too sparse to be of use,” said Sharkey.
“And as DeepMind explained at a recent United Nations event, such methods would be highly dangerous for weapons control as the moves are unpredictable and can be creative in unexpected ways.This is against the laws that govern armed conflict.”