AI is beating almost all of mankind at StarCraft

A new algorithm called AlphaStar is beating all but the very best human players at StarCraft II. This is not only a remarkable achievement in itself, but it could also teach AIs how to solve complex problems in other applications.

A typical Protoss-Zerg combat. Credits: DeepMind.

The foray of AIs into strategy games is not exactly a new thing. Google’s ‘Alpha’ class of AIs, in particular, has taken the world by storm with its prowess, revolutionizing chess and Go — games once thought insurmountable for an algorithm. Researchers have also set their sights on other games (Dota 2 and poker, for instance), with promising but limited results. The sheer complexity of StarCraft, combined with the fact that you don’t have all the information available to you (as opposed to Go and chess, where you see the entire board freely), posed serious challenges for AIs.

But fret not — our algorithm friends are slowly overcoming them. A new Alpha AI, aptly called AlphaStar, has now reached a remarkable level of prowess, ranking above 99.8% of all ranked StarCraft II players.

StarCraft is one of the most popular computer strategy games of all time. Its sequel, StarCraft II, features a very similar scenario. The players choose one of three races: the technologically advanced Terrans (humans), the Protoss (masters of psionic energy), or the Zerg (quickly evolving biological monsters). They then gather resources, build structures and an army, and try to destroy their opponent(s).

There are multiple viable strategies in StarCraft, and there’s no simple way to overcome your opponent. The infamous ‘fog of war’ also hides your opponent’s movements, so you have to be prepared for whatever they may be doing.

AlphaStar managed to reach Grandmaster tier — a category reserved for only the best StarCraft players.

Credits: DeepMind.

Having an AI that is this good at such a complex game would have been unimaginable a decade ago. The progress is so remarkable that one of the researchers at DeepMind, the company that built and trained these AIs, called it a ‘defining moment’ in his career.

“This is a dream come true,” said Oriol Vinyals, lead, AlphaStar project, DeepMind. “I was a pretty serious StarCraft player 20 years ago, and I’ve long been fascinated by the complexity of the game. AlphaStar achieved Grandmaster level solely with a neural network and general-purpose learning algorithms – which was unimaginable 10 years ago when I was researching StarCraft AI using rules-based systems.

“AlphaStar advances our understanding of AI in several key ways: multi-agent training in a competitive league can lead to great performance in highly complex environments, and imitation learning alone can achieve better results than we’d previously supposed.

“I’m excited to begin exploring ways we can apply these techniques to real-world challenges, such as helping improve the robustness of AI systems. I’m incredibly proud of the team for all their hard work to get us to this point. This has been the defining moment of my career so far.”

The AI didn’t play with ‘AI cheats’ — it had to face the same constraints as human players:

  • it could only see the map through a camera, as a human would;
  • it had to play through a server, not directly;
  • it had a built-in reaction time;
  • it had to select a race and play with it.

Even with all these constraints, the AI did remarkably well.

Every single engagement involves multiple layers of strategy. Credits: DeepMind.

At any given moment, a StarCraft player (or algorithm) has to choose from up to 10^26 possible actions, all of which have potentially significant consequences. Researchers therefore took a different approach than with Go or chess. In those ancient games, the AIs learned by playing millions upon millions of games against themselves, practicing and learning alone. For StarCraft, however, the algorithm first had to be seeded with some initial knowledge.

This is called imitation learning — the AI was initially taught how to play the game by mimicking human replays. Combining this with neural network architectures already made the AI better than most players. With further training — agents competing against each other in a league and refining their play through reinforcement learning — it was able to surpass all but the very best players in the world. This approach enabled it to learn from existing strategies, but also to develop its own ideas.
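In a nutshell, imitation learning can be sketched in a few lines of code. This is a deliberately simplified toy — the game states, actions, and replay data below are all made up for illustration, and AlphaStar’s real system uses deep neural networks rather than counting — but it captures the core idea: the agent learns by matching recorded human decisions instead of discovering actions from scratch.

```python
ACTIONS = ["build_worker", "attack", "expand"]

# Hypothetical replay data: (game state, action a human player chose)
human_replays = [
    ("few_workers", "build_worker"),
    ("few_workers", "build_worker"),
    ("army_ready", "attack"),
    ("rich_bank", "expand"),
]

def train_policy(replays):
    """Learn a policy that imitates the most common human choice per state."""
    counts = {}
    for state, action in replays:
        counts.setdefault(state, {a: 0 for a in ACTIONS})
        counts[state][action] += 1
    # The learned policy simply copies the majority human decision.
    return {state: max(c, key=c.get) for state, c in counts.items()}

policy = train_policy(human_replays)
print(policy["few_workers"])  # → build_worker, the common human choice
```

From a starting point like this, an agent already plays plausibly — which is why imitation alone got AlphaStar further than expected.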

“StarCraft has been a grand challenge for AI researchers for over 15 years, so it’s hugely exciting to see this work recognised in Nature. These impressive results mark an important step forward in our mission to create intelligent systems that will accelerate scientific discovery,” said Demis Hassabis, co-founder and CEO, DeepMind.

Professional StarCraft players were also impressed and thrilled to see the AI play out its games. As with previous iterations of the Alpha AIs, the algorithm came up with new and innovative tactics.

“AlphaStar is an intriguing and unorthodox player – one with the reflexes and speed of the best pros but strategies and a style that are entirely its own,” said Diego “Kelazhur” Schwimer, professional StarCraft II player for Panda Global. “The way AlphaStar was trained, with agents competing against each other in a league, has resulted in gameplay that’s unimaginably unusual; it really makes you question how much of StarCraft’s diverse possibilities pro players have really explored. Though some of AlphaStar’s strategies may at first seem strange, I can’t help but wonder if combining all the different play styles it demonstrated could actually be the best way to play the game.”
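The league Schwimer describes can be pictured with a toy round-robin tournament. The numeric “skill” values and the deterministic match rule below are hypothetical simplifications — the real league pits full neural-network agents against each other and updates them with reinforcement learning — but the shape is the same: a pool of agents plays many matches, and the results steer which styles of play survive.

```python
import itertools

def play_match(skill_a, skill_b):
    """Deterministic toy match: the higher-skilled agent wins."""
    return "a" if skill_a >= skill_b else "b"

def run_league(skills, rounds=3):
    """Round-robin league: every agent plays every other, each round."""
    wins = [0] * len(skills)
    for _ in range(rounds):
        for i, j in itertools.combinations(range(len(skills)), 2):
            winner = i if play_match(skills[i], skills[j]) == "a" else j
            wins[winner] += 1
    return wins

wins = run_league([1.0, 2.5, 4.0])
print(wins)  # → [0, 3, 6]: the strongest agent accumulates the most wins
```

In the real system, agents that lose consistently are replaced or retrained, so the league keeps producing — and stress-testing — unusual strategies.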

It’s an impressive milestone. It’s also one that could make us wonder whether teaching AIs how to beat us at strategy war games is a good idea. But for now, at least, there’s no need to worry. AIs are very limited in their scope: they can get very good, but strictly at the task they are trained for — they have no way of applying what they’ve learned in a computer game to a real-life war scenario, for instance.

Instead, this application could help researchers learn how to design better AIs for dealing with simple real-world scenarios, like maneuvering a robotic arm or operating efficient heating for smart homes.

The research was published in Nature.
