Blizzard set to demo Google's DeepMind AI in StarCraft 2

  • 3 min read
  • 23 Jan 2019

Blizzard, an American video game development company, is all set to demonstrate the progress made by Google’s DeepMind AI at StarCraft II, a real-time strategy video game, tomorrow. “The StarCraft games have emerged as a ‘grand challenge’ for the AI community as they’re the perfect environment for benchmarking progress against problems such as planning, dealing with uncertainty and spatial reasoning”, says the Blizzard team.

Blizzard partnered with DeepMind at BlizzCon 2016, where the two companies announced that they were opening up the StarCraft II research platform so that everyone in the StarCraft II community could contribute to advancing AI research. Since then, much progress has been made on the AI research front in StarCraft II. Only two months ago, Oriol Vinyals, Research Scientist at Google DeepMind, shared details of the progress the AI had made in StarCraft II, states the Blizzard team. Vinyals described how the AI, or agent, had learned to perform basic macro-focused strategies along with defensive moves against cheesy, aggressive tactics such as “cannon rushes”.
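
For readers who want to try the research platform themselves, the open release pairs Blizzard’s SC2 API with DeepMind’s PySC2 Python library. Below is a minimal sketch of spinning up an environment and running PySC2’s bundled random agent against the built-in game AI; it assumes PySC2 2.x and a local StarCraft II installation, and the map name, races, difficulty and feature-layer sizes are arbitrary choices for illustration.

```python
# A minimal PySC2 sketch (not DeepMind's agent): run the bundled random agent
# against the built-in StarCraft II bot. Assumes PySC2 2.x and a local
# StarCraft II install; map, races, difficulty and resolutions are arbitrary.
from absl import app
from pysc2.agents import random_agent
from pysc2.env import run_loop, sc2_env
from pysc2.lib import features


def main(unused_argv):
    agent = random_agent.RandomAgent()
    with sc2_env.SC2Env(
            map_name="Simple64",
            players=[sc2_env.Agent(sc2_env.Race.terran),
                     sc2_env.Bot(sc2_env.Race.zerg,
                                 sc2_env.Difficulty.very_easy)],
            agent_interface_format=features.AgentInterfaceFormat(
                feature_dimensions=features.Dimensions(screen=84, minimap=64)),
            step_mul=8,                  # the agent acts every 8 game steps
            game_steps_per_episode=0,    # 0 means no episode time limit
            visualize=False) as env:
        # run_loop handles agent setup/reset and the observe-act cycle.
        run_loop.run_loop([agent], env, max_episodes=1)


if __name__ == "__main__":
    app.run(main)
```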

Blizzard also posted an update during BlizzCon 2018, stating that DeepMind had been working hard at training its AI (or agent) to better understand and learn StarCraft II. “Once it started to grasp the basic rules of the game, it started exhibiting amusing behaviour such as immediately worker rushing its opponent, which actually had a success rate of 50% against the 'Insane' difficulty standard StarCraft II AI”, mentioned the Blizzard team.
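
For a sense of what a hand-scripted baseline looks like on this platform, a PySC2 agent only has to implement a step() method that maps each observation to one of the currently available actions. The toy agent below simply selects its army and attack-moves across the map; it is a loose nod to the rush-style behaviour described above, not DeepMind’s learned agent, and the class name and minimap target are arbitrary assumptions (a 64x64 minimap with the opponent toward the far corner).

```python
# A toy scripted PySC2 agent (illustration only, not DeepMind's agent):
# select whatever army exists and attack-move toward the far minimap corner.
from pysc2.agents import base_agent
from pysc2.lib import actions

FUNCTIONS = actions.FUNCTIONS


class AttackMoveAgent(base_agent.BaseAgent):
    """Selects the army and issues an attack-move on the minimap."""

    def step(self, obs):
        super(AttackMoveAgent, self).step(obs)
        available = obs.observation.available_actions
        # If the current selection can attack, attack-move toward (49, 49),
        # assuming a 64x64 minimap with the opponent in the far corner.
        if FUNCTIONS.Attack_minimap.id in available:
            return FUNCTIONS.Attack_minimap("now", (49, 49))
        # Otherwise try to select all combat units.
        if FUNCTIONS.select_army.id in available:
            return FUNCTIONS.select_army("select")
        return FUNCTIONS.no_op()
```

Such an agent can be dropped into the same environment loop sketched earlier in place of the random agent.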

It has almost become a trend for DeepMind to measure the capabilities of its advanced AI against human opponents in video games. For instance, it made headlines in 2016 when its AlphaGo program defeated world champion Lee Sedol in a five-game match. AlphaGo had previously defeated the professional Go player Fan Hui, a three-time European champion at the time, in 2015. More recently, in December 2018, DeepMind researchers published a full evaluation of AlphaZero in the journal Science, confirming that it is capable of mastering Chess, Shogi, and Go from scratch.

Other examples of AI making its way into advanced game learning include OpenAI Five, a team of AI algorithms that beat a team of amateur human players at Dota 2, the popular battle arena game, back in June 2018. Later that August, it managed to beat semi-professional players at the game as well.

The demonstration of the DeepMind AI in StarCraft II is all set for tomorrow at 10 AM Pacific Time. Check out StarCraft’s Twitch channel or DeepMind’s YouTube channel to follow the demonstration and other recent developments.


Deepmind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads of AI use in healthcare

Graph Nets – DeepMind’s library for graph networks in Tensorflow and Sonnet

DeepMind open sources TRFL, a new library of reinforcement learning building blocks
