
Tech News - Game AI

12 Articles

DeepMind AI’s AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency

Vincy Davis
04 Nov 2019
5 min read
Earlier this year in January, Google's DeepMind AI AlphaStar defeated two professional players, TLO and MaNa, at StarCraft II, a real-time strategy game. Two days ago, DeepMind announced that AlphaStar has now achieved the highest possible online competitive ranking in StarCraft II, called Grandmaster level. This makes AlphaStar the first AI to reach the top league of a widely popular game without any restrictions. AlphaStar used multi-agent reinforcement learning and was rated above 99.8% of officially ranked human players. It achieved the Grandmaster level for all three StarCraft II races - Protoss, Terran, and Zerg. The DeepMind researchers have published the details of AlphaStar in the paper titled 'Grandmaster level in StarCraft II using multi-agent reinforcement learning'.

https://twitter.com/DeepMindAI/status/1189617587916689408

How did AlphaStar achieve the Grandmaster level in StarCraft II?

The DeepMind researchers were able to develop a robust and flexible agent by understanding the potential and limitations of open-ended learning, which helped them make AlphaStar cope with complex real-world domains. "Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales," states the blog post.

StarCraft II requires players to balance high-level economic decisions with individual control of hundreds of units. When playing the game, humans are under physical constraints that limit their reaction time and their rate of actions. Accordingly, AlphaStar was subjected to similar constraints, including delays due to network latency and computation time. To limit its actions per minute (APM), AlphaStar's peak statistics were kept substantially lower than those of humans. To align with standard human play, it could view only a limited portion of the map, could register only a limited number of mouse clicks, and had only 22 non-duplicated actions to play every five seconds.

AlphaStar uses a combination of general-purpose techniques: neural network architectures, imitation learning, reinforcement learning, and multi-agent learning. The agent was initially trained on games sampled from a publicly available dataset of anonymized human replays, learning to predict the actions of every player. These predictions were then used to procure a diverse set of strategies reflecting the different modes of human play.

Read More: DeepMind's Alphastar AI agent will soon anonymously play with European StarCraft II players

Dario "TLO" Wünsch, a professional StarCraft II player, says, "I've found AlphaStar's gameplay incredibly impressive – the system is very skilled at assessing its strategic position, and knows exactly when to engage or disengage with its opponent. And while AlphaStar has excellent and precise control, it doesn't feel superhuman – certainly not on a level that a human couldn't theoretically achieve. Overall, it feels very fair – like it is playing a 'real' game of StarCraft."

According to the paper, AlphaStar had about 10^26 possible actions available at each time step, and it had to make thousands of actions before learning whether it had won or lost a game. One of the key strategies behind AlphaStar's performance was learning human strategies, which was necessary to ensure that the agents kept exploring those strategies throughout self-play. The researchers say, "To do this, we used imitation learning – combined with advanced neural network architectures and techniques used for language modeling – to create an initial policy which played the game better than 84% of active players."

AlphaStar also uses a latent variable to encode the distribution of opening moves from human games. This helped AlphaStar preserve high-level strategies and enabled it to represent many strategies within a single neural network. By combining the advances in imitation learning, reinforcement learning, and the League, the researchers were able to train AlphaStar Final, the agent that reached the Grandmaster level at the full game of StarCraft II without any modifications. AlphaStar used a camera interface, which gave it exactly the information that a human player would receive, and all of its interfaces and restrictions were approved by a professional player. Finally, the results indicate that general-purpose learning techniques can scale AI systems to work in complex, dynamic environments involving multiple actors.

AlphaStar's great feat has got many people excited about the future of AI.

https://twitter.com/mickdooit/status/1189604170489315334
https://twitter.com/KaiLashArul/status/1190236180501139461
https://twitter.com/JoshuaSpanier/status/1190265236571459584

Interested readers can read the research paper to check AlphaStar's performance. Head over to DeepMind's blog for more details.

Google AI introduces Snap, a microkernel approach to 'Host Networking'
Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
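For readers who want a feel for the imitation-learning step described above, here is a minimal behavior-cloning sketch in PyTorch: a policy network trained with supervised learning to predict human actions from replay observations. All sizes and names are hypothetical placeholders; this illustrates the general idea only, not DeepMind's actual architecture.

```python
# A minimal behavior-cloning sketch: supervised training of a policy to
# predict human actions from (observation, action) pairs parsed out of
# replays. Purely illustrative; not DeepMind's actual model or data format.
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS = 128, 32        # hypothetical sizes; real SC2 inputs are far richer

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_ACTIONS),      # logits over a discrete action set
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def bc_step(observations, human_actions):
    """One supervised update: push the policy's action distribution
    toward the actions humans actually took in the replays."""
    loss = loss_fn(policy(observations), human_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# A fake batch standing in for parsed replay data.
obs = torch.randn(64, OBS_DIM)
acts = torch.randint(0, N_ACTIONS, (64,))
print(bc_step(obs, acts))
```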

Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker

Sugandha Lahoti
12 Jul 2019
5 min read
Researchers from Facebook and Carnegie Mellon University have developed an AI bot that has defeated human professionals in six-player no-limit Texas Hold'em poker. Pluribus defeated pro players in both a "five AIs + one human player" format and a "one AI + five human players" format. It was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of the AI played against one professional. This is the first time an AI bot has beaten top human players in a complex game with more than two players or two teams.

Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm of Carnegie Mellon University. It builds on Libratus, their previous poker-playing AI, which defeated professionals at Heads-Up Texas Hold 'Em, a two-player game, in 2017.

Mastering six-player poker is difficult for an AI bot for two reasons. First, since the game involves six players, it has far more variables, and the bot can't compute a perfect strategy for each game as it could for a two-player game. Second, poker involves hidden information: a player only has access to the cards that they see. The AI has to take into account how it would act with different cards so it isn't obvious when it has a good hand. Brown wrote on a Hacker News thread, "So much of early AI research was focused on beating humans at chess and later Go. But those techniques don't directly carry over to an imperfect-information game like poker. The challenge of hidden information was kind of neglected by the AI community. This line of research really has its origins in the game theory community actually (which is why the notation is completely different from reinforcement learning). Fortunately, these techniques now work really really well for poker."

What went behind Pluribus?

Initially, Pluribus engages in self-play, playing against copies of itself without any data from human or prior AI play used as input. The AI starts from scratch by playing randomly and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy. This self-play produces a strategy for the entire game offline, called the blueprint strategy. Pluribus then improves upon the blueprint strategy by searching for a better strategy in real time for the situations it finds itself in during the game.

Real-time search

The blueprint strategy in Pluribus was computed using a variant of counterfactual regret minimization (CFR). The researchers used Monte Carlo CFR (MCCFR), which samples actions in the game tree rather than traversing the entire game tree on each iteration. Pluribus plays according to this blueprint strategy only in the first betting round (of four), where the number of decision points is small enough that the blueprint can afford to forgo information abstraction and include a lot of actions in the action abstraction. After the first round, Pluribus instead conducts a real-time search to determine a better, finer-grained strategy for the current situation it is in. This online search algorithm can efficiently evaluate its options by searching just a few moves ahead rather than all the way to the end of the game.

https://youtu.be/BDF528wSKl8

What is astonishing is that Pluribus uses very little processing power and memory - less than $150 worth of cloud computing resources. The researchers trained the blueprint strategy in eight days on a 64-core server, requiring less than 512 GB of RAM. No GPUs were used.

Stassa Patsantzis, a Ph.D. research student, appreciated Pluribus's resource-friendly compute requirements. She commented on Hacker News, "That's the best part in all of this. I'm hoping that there is going to be more of this kind of result, signaling a shift away from Big Data and huge compute and towards well-designed and efficient algorithms." She also noted that this is significantly less compute than the ML systems at DeepMind and OpenAI use. "In fact, I kind of expect it. The harder it gets to do the kind of machine learning that only large groups like DeepMind and OpenAI can do, the more smaller teams will push the other way and find ways to keep making progress cheaply and efficiently", she added.

Real-life implications

AI bots such as Pluribus give a better understanding of how to build general AI that can cope with multi-agent environments, both with other AI agents and with humans. A six-player AI bot has broader implications in reality because two-player zero-sum interactions (in which one player wins and one player loses) are common in recreational games but very rare in real life. Such AI bots could be used for handling harmful content, dealing with cybersecurity challenges, managing online auctions, or navigating traffic, all of which involve multiple actors and/or hidden information.

Meanwhile, four-time World Poker Tour title holder Darren Elias, who helped test the program's skills, said Pluribus could spell the end of high-stakes online poker. "I don't think many people will play online poker for a lot of money when they know that this type of software might be out there and people could use it to play against them for money." Poker sites are actively working to detect and root out possible bots. Brown, Pluribus's developer, on the other hand, is optimistic. He says it's exciting that a bot could teach humans new strategies and ultimately improve the game. "I think those strategies are going to start penetrating the poker community and really change the way professional poker is played," he said.

For more information on Pluribus and its workings, read Facebook's blog.

DeepMind's Alphastar AI agent will soon anonymously play with European StarCraft II players
Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa
OpenAI Five bots destroyed human Dota 2 players this weekend
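To give a flavor of the counterfactual regret minimization family that Pluribus's blueprint strategy comes from, here is a toy sketch of regret matching, the core update inside CFR and MCCFR: play each action with probability proportional to its positive cumulative regret. The rock-paper-scissors demo is a stand-in for poker; this illustrates the primitive, not Pluribus's actual algorithm.

```python
# Regret matching, the core update inside CFR/MCCFR, sketched on
# rock-paper-scissors. Toy illustration only, not Pluribus's full algorithm.
class RegretMatcher:
    def __init__(self, n_actions):
        self.n = n_actions
        self.cum_regret = [0.0] * n_actions
        self.cum_strategy = [0.0] * n_actions

    def current_strategy(self):
        pos = [max(r, 0.0) for r in self.cum_regret]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1.0 / self.n] * self.n

    def observe(self, action_utils):
        """Update regrets given this iteration's per-action utilities."""
        probs = self.current_strategy()
        expected = sum(p * u for p, u in zip(probs, action_utils))
        for i in range(self.n):
            self.cum_regret[i] += action_utils[i] - expected
            self.cum_strategy[i] += probs[i]    # running sum; its normalization converges

    def average_strategy(self):
        total = sum(self.cum_strategy)
        return [s / total for s in self.cum_strategy]

# Self-play rock-paper-scissors: both players converge toward uniform (1/3 each).
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]    # row player's payoff
p1, p2 = RegretMatcher(3), RegretMatcher(3)
for _ in range(10000):
    s1, s2 = p1.current_strategy(), p2.current_strategy()
    p1.observe([sum(s2[b] * PAYOFF[a][b] for b in range(3)) for a in range(3)])
    p2.observe([sum(s1[b] * PAYOFF[a][b] for b in range(3)) for a in range(3)])
print([round(p, 3) for p in p1.average_strategy()])
```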

DeepMind's Alphastar AI agent will soon anonymously play with European StarCraft II players

Sugandha Lahoti
11 Jul 2019
4 min read
Earlier this year, DeepMind's AI AlphaStar defeated two professional players at StarCraft II, a real-time strategy video game. Now, European StarCraft II players will get a chance to face off against experimental versions of AlphaStar, as part of ongoing research into AI.

https://twitter.com/MaxBakerTV/status/1149067938131054593

AlphaStar learns by imitating the basic micro and macro-strategies used by players on the StarCraft ladder. A neural network was trained initially using supervised learning on anonymised human games released by Blizzard. Once the agents are trained from human game replays, they are then trained against other competitors in the "AlphaStar league". This is where a multi-agent reinforcement learning process starts: new competitors are added to the league (branched from existing competitors), and each of these agents then learns from games against the others. This ensures that each competitor performs well against the strongest strategies, and does not forget how to defeat earlier ones.

Anyone who wants to participate in this experiment will have to opt into the chance to play against the StarCraft II program, via an option provided in an in-game pop-up window. Users can alter their opt-in selection at any time.

To ensure anonymity, all games will be blind test matches: European players who opt in won't know if they've been matched up against AlphaStar. This helps ensure that all games are played under the same conditions, as players may react differently when they know they're playing against an AI. A win or a loss against AlphaStar will affect a player's MMR (Matchmaking Rating) like any other game played on the ladder.

"DeepMind is currently interested in assessing AlphaStar's performance in matches where players use their usual mix of strategies," Blizzard said in its blog post. "Having AlphaStar play anonymously helps ensure that it is a controlled test, so that the experimental versions of the agent experience gameplay as close to a normal 1v1 ladder match as possible. It also helps ensure all games are played under the same conditions from match to match."

Some people have appreciated the anonymous testing feature. A Hacker News user commented, "Of course the anonymous nature of the testing is interesting as well. Big contrast to OpenAI's public play test. I guess it will prevent people from learning to exploit the bot's weaknesses, as they won't know they are playing a bot at all. I hope they eventually do a public test without the anonymity so we can see how its strategies hold up under focused attack." Others find it interesting to consider what would happen if players knew they were playing against AlphaStar.

https://twitter.com/hardmaru/status/1149104231967842304

AlphaStar will play as all three of StarCraft's in-universe races (Terran, Zerg, or Protoss). Pairings on the ladder will be decided according to normal matchmaking rules, which depend on how many players are online while AlphaStar is playing. It will not be learning from the games it plays on the ladder, having been trained from human replays and self-play. AlphaStar will also use a camera interface and more restricted APM caps. Per the blog post, "AlphaStar has built-in restrictions, which cap its effective actions per minute and per second. These caps, including the agents' peak APM, are more restrictive than DeepMind's demonstration matches back in January, and have been applied in consultation with pro players."

https://twitter.com/Eric_Wallace_/status/1148999440121749504
https://twitter.com/Liquid_MaNa/status/1148992401157054464

DeepMind will benchmark the performance of a number of experimental versions of AlphaStar to gather a broad set of results during the testing period. DeepMind will use a player's replays and game data (skill level, MMR, the map played, race played, time/date played, and game duration) to assess and describe the performance of the AlphaStar system. However, DeepMind will remove identifying details from the replays, including usernames, user IDs, and chat histories. Other identifying details will be removed to the extent that they can be without compromising the research DeepMind is pursuing.

For now, AlphaStar agents will play only in Europe. The research results will be released in a peer-reviewed scientific paper along with replays of AlphaStar's matches.

Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
Deepmind's AlphaZero shows unprecedented growth in AI, masters 3 different games
Deepmind's AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare
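The league process described above can be pictured with a short, purely illustrative Python sketch: a pool of frozen past agents, a learner trained against opponents sampled from that pool, and periodic snapshots branched back into the league. Agent and train_one_game are hypothetical stand-ins, not DeepMind code.

```python
# Purely illustrative league-style self-play loop: keep a pool of frozen past
# agents, train the learner against opponents sampled from the pool, and
# periodically branch a frozen snapshot back into the league.
import copy
import random

class Agent:
    def __init__(self):
        self.skill = 0.0                        # placeholder for network weights

def train_one_game(learner, opponent):
    learner.skill += 0.01 * random.random()     # stand-in for one RL update

league = [Agent()]                              # seeded, e.g. from imitation learning
learner = copy.deepcopy(league[0])

for step in range(1, 5001):
    opponent = random.choice(league)            # face past and present rivals
    train_one_game(learner, opponent)
    if step % 1000 == 0:                        # branch a snapshot into the league
        league.append(copy.deepcopy(learner))

print(f"league size: {len(league)}, learner skill: {learner.skill:.2f}")
```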

Game rivals, Microsoft and Sony, form a surprising cloud gaming and AI partnership

Sugandha Lahoti
17 May 2019
3 min read
Microsoft and Sony have been fierce gaming rivals ever since 2001, when Microsoft's Xbox challenged the Sony PlayStation 2. However, in an unusual announcement yesterday, Microsoft and Sony signed a memorandum of understanding to jointly explore the development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services.

Sony and Microsoft will also explore collaboration in the areas of semiconductors and AI. For semiconductors, they will jointly develop new intelligent image sensor solutions. In terms of AI, the parties will incorporate Microsoft's AI platform and tools in Sony's consumer products. Microsoft said in a statement that "these efforts will also include building better development platforms for the content creator community," suggesting that both companies will likely partner on future services aimed at creators and the gaming community.

Rivals turned to Allies

Sony's decision to set aside the rivalry and partner with Microsoft makes sense for two main reasons. First, cloud streaming is considered the next big thing in gaming, and only three companies - Microsoft, Google, and Amazon - have enough cloud experience to offer viable, modern cloud infrastructure. Although Sony has the technical competence to build its own cloud streaming service, it makes more sense to deploy via Microsoft's Azure than to scale its own distribution systems, and Microsoft is happy to extend a welcoming hand to a customer as large as Sony. Moreover, neither Sony nor Microsoft is going to commit to game streaming completely, as both already have consoles in development - unlike Amazon and Google, who are going full throttle on game streaming. That Sony chose to go with Microsoft, putting real resources into these efforts and going so far as to collaborate with a rival, shows it understands that game streaming is not something it can afford to be without.

Second, this partnership is also likely a direct response to Google's Stadia game streaming service, unveiled at Game Developers Conference 2019. Stadia is a cloud-based game streaming platform that aims to bring together gamers, YouTube broadcasters, and game developers "to create a new experience". Games are streamed from any data center to any device that can connect to the internet, such as a TV, laptop, desktop, tablet, or mobile phone. Gamers can access their games anytime and on virtually any screen, and game developers will be able to use nearly unlimited resources for developing games. Since all the graphics processing happens on off-site hardware, there is little stress on local hardware.

"Sony has always been a leader in both entertainment and technology, and the collaboration we announced today builds on this history of innovation," says Microsoft CEO Satya Nadella. "Our partnership brings the power of Azure and Azure AI to Sony to deliver new gaming and entertainment experiences for customers."

Twitter was filled with funny memes on this alliance and its direct contest with Stadia.

https://twitter.com/MikieDaytona/status/1129076134950445056
https://twitter.com/shaunlabrie/status/1129144724646813696
https://twitter.com/kettleotea/status/1129142682004205569

Going forward, the two companies will share additional information when available. Read the official announcement here.

Google announces Stadia, a cloud-based game streaming service, at GDC 2019
Microsoft announces Project xCloud, a new Xbox game streaming service
Amazon is reportedly building a video game streaming service, says Information

Obstacle Tower Environment 2.0: Unity announces Round 2 of its ‘Obstacle Tower Challenge’ to test AI game players

Sugandha Lahoti
15 May 2019
2 min read
At the end of January, Unity announced the 'Obstacle Tower Challenge' to test AI game players. The Obstacle Tower Challenge examines how AI software performs in computer vision, locomotion skills, and high-level planning. The challenge began on 11th February and will run through 24th May. Round 1 ran from 11th February till 31st March, and the results are just in: Unity received 2000+ entries from 350+ teams.

Now, Unity has announced the launch of the second round of the challenge. Teams that trained an agent in round one and received an average score of five on unseen versions of the tower will advance to round 2. Agents will need to account for a variety of new challenges in Obstacle Tower Environment 2.0, including enemies to dodge, distractions to avoid, and more complicated floor layouts with circling paths.

What's new in the Obstacle Tower Environment 2.0?

Unity has expanded the tower from 25 to 100 floors with three new visual styles - Industrial, Modern, and Future. The higher floors also contain new challenges beyond those already present, such as enemies to dodge, distracting TVs to avoid, more complex floor layouts with circling paths, and larger rooms on each floor with additional platforming challenges.

Obstacle Tower Environment 2.0 has also expanded the number of parameters that can be customized when resetting the environment. These include the ability to change things like the lighting, visual theme, floor layouts, and room contents on the floors in the tower (see the sketch below).

Unity has also reworked the placement of the reset button in puzzle rooms, which, based on feedback from round 1, was unintuitive. The block, goal, and reset button positions in these rooms are now separated out, making it less likely that an agent will press the reset button by accident.

The Obstacle Tower Environment natively supports the Unity ML-Agents Toolkit. To learn more about the environment, you can go through their research paper. Unity has also released the final list of contestants selected for Round 2.

Unity has launched the 'Obstacle Tower Challenge' to test AI game players
Unity updates its TOS, developers can now use any third party service that integrate into Unity.
Improbable says Unity blocked SpatialOS; Unity responds saying it has shut down Improbable and not SpatialOS.
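As a rough illustration of the customizable reset parameters mentioned above, here is a hedged Python sketch using the environment's Gym wrapper. The class name follows the project's obstacle_tower_env package, but the config keys, and even whether reset accepts a config dict in this exact form, are assumptions; consult the project's documentation for the real interface.

```python
# Hedged sketch of customizing the tower on reset. The config keys below
# (and the exact reset signature) are assumptions, not verified API.
from obstacle_tower_env import ObstacleTowerEnv

env = ObstacleTowerEnv("./ObstacleTower/obstacletower", retro=True)

config = {
    "tower-seed": 42,       # assumed: fix the procedural-generation seed
    "starting-floor": 10,   # assumed: begin partway up the tower
    "lighting-type": 1,     # assumed: pick a lighting variation
    "visual-theme": 2,      # assumed: e.g. Industrial / Modern / Future
}
obs = env.reset(config=config)

done = False
while not done:                     # random rollout on the customized tower
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```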

Epic Games announces: Epic MegaGrants, RTX-powered Ray tracing demo, and free online services for game developers

Natasha Mathur
22 Mar 2019
4 min read
Epic Games, an American video game and software development company, made a series of announcements earlier this week. These include:

- Epic Games' CEO, Tim Sweeney, offering $100 million in grants to game developers
- A stunning RTX-powered ray-tracing demo named Troll
- The launch of Epic's free online services for game developers

Epic MegaGrants: $100 million funds to game developers

Tim Sweeney, CEO of Epic Games Inc, announced earlier this week that he will be offering $100 million in grants to game developers to boost the growth of the gaming industry. Sweeney made the announcement during a presentation on Wednesday at the Game Developers Conference (GDC), the world's largest professional game industry event, which ended yesterday in San Francisco.

Epic Games previously created a $5 million fund for grants that have been disbursed over the last three years. Now Epic Games is off to build a new fund called Epic MegaGrants. These are "no-strings-attached" grants, meaning they don't involve any contracts requiring game developers to do anything for Epic. All that game developers need to do is apply for the grants and create an innovative project; if Epic's judges find it worthy, they'll offer them the funds. "There are no commercial hooks back to Epic. You don't have to commit to any deliverables. This is our way of sharing Fortnite's unbelievable success with as many developers as we can", said Sweeney.

Troll: a ray-tracing Unreal Engine 4 demo

Another eye-grabbing moment at GDC this year was a "visually stunning" ray-tracing demo revealed by Goodbye Kansas and Deep Forest Films, called "Troll". Troll was rendered in real time using Unreal Engine 4.22 ray tracing and camera effects, powered by a single NVIDIA GeForce RTX 2080 Ti graphics card. Troll is visually inspired by Swedish painter and illustrator John Bauer, whose illustrations are famous from the Swedish folklore and fairy tale anthology 'Among Gnomes and Trolls'.

https://www.youtube.com/watch?v=Qjt_MqEOcGM (Troll)

"Ray tracing is more than just reflections — it's about all the subtle lighting interactions needed to create a natural, beautiful image. Ray tracing adds these subtle lighting effects throughout the scene, making everything look more real and natural," said Nick Penwarden, Director of Engineering for Unreal Engine at Epic Games. The NVIDIA team states in a blog post that Epic Games has been working to integrate RTX-accelerated ray tracing into its popular Unreal Engine 4. In fact, Unreal Engine 4.22 will support the new Microsoft DXR API for real-time ray tracing.

Epic's free online services launch for game developers

Epic Games also announced the launch of free tools and services, part of the Epic Online Services announced in December 2018. The SDK is available via the new developer portal for immediate download and use, and currently supports Windows, Mac, and Linux. As part of the release, the SDK provides support for two free services: game analytics and player ticketing. Game analytics helps developers understand player behavior, featuring DAU (daily active users), MAU (monthly active users), retention, new player counts, game launch counts, online user counts, and more. The ticketing system connects players directly with developers and allows them to report bugs or other problems.

These two services will continue to evolve along with the rest of Epic Online Services (EOS) to offer the infrastructure and tools developers need to launch, operate, and scale high-quality online games. Epic Games will also be offering additional free services throughout 2019, including player data storage, player reports, leaderboards & stats, player identity, player inventory, matchmaking, etc. "We are committed to developing EOS with features that can be used with any engine, any store and that can support any major platform...these services will allow developers to deliver cross-platform gameplay experiences that enable players to enjoy games no matter what platform they play on", states the Epic Games team.

Fortnite server suffered a minor outage, Epic Games was quick to address the issue
Epic games CEO calls Google "irresponsible" for disclosing the security flaw in Fortnite Android installer
Fortnite creator Epic games launch Epic games store where developers get 88% of revenue earned
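To make the analytics metrics above concrete, here is a small, purely illustrative Python sketch computing DAU and MAU from a raw event log. It shows what the numbers mean; it is not the Epic Online Services SDK, which ships as its own native API, and the event layout is hypothetical.

```python
# Illustrative computation of DAU and MAU from raw (player_id, date) launch
# events; this shows what the metrics mean, it is not the EOS SDK.
from datetime import date

events = [                      # hypothetical game-launch log
    ("alice", date(2019, 3, 20)), ("bob", date(2019, 3, 20)),
    ("alice", date(2019, 3, 21)), ("carol", date(2019, 3, 1)),
]

def dau(events, day):
    """Distinct players who launched the game on a given day."""
    return len({pid for pid, d in events if d == day})

def mau(events, year, month):
    """Distinct players who launched the game in a given month."""
    return len({pid for pid, d in events if (d.year, d.month) == (year, month)})

print(dau(events, date(2019, 3, 20)))   # 2 distinct players that day
print(mau(events, 2019, 3))             # 3 distinct players that month
```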

OpenAI introduces Neural MMO, a multiagent game environment for reinforcement learning agents

Amrata Joshi
06 Mar 2019
3 min read
On Monday, the team at OpenAI launched Neural MMO, a massively multiplayer online (MMO) game environment for reinforcement learning agents. It will be used for training AI in complex, open-world environments. The platform supports a large number of agents within a persistent and open-ended task.

The need for Neural MMO

Over the past few years, the suitability of MMOs for modeling real-life events has been explored, but two main challenges remain for multiagent reinforcement learning. First, there is a need to create open-ended tasks with a high complexity ceiling, as current environments tend to be either complex but narrow or open-ended but too simple. The other challenge, the OpenAI team specifies, is the need for more benchmark environments in order to quantify learning progress in the presence of large population scales.

Different criteria to overcome challenges

The team suggests certain criteria that an environment needs to meet to overcome these challenges:

Persistence: Agents can learn concurrently in the presence of other learning agents without the need for environment resets. Strategies should adapt to rapid changes in the behaviors of other agents and also consider long time horizons.

Scale: Neural MMO supports a large and variable number of entities. The OpenAI team's experiments consider up to 100M lifetimes of 128 concurrent agents in each of 100 concurrent servers.

Efficiency: As the computational barrier to entry is low, effective policies can be trained on a single desktop CPU.

Expansion: Neural MMO is designed so new content can be added. The core features include a food and water foraging system, procedural generation of tile-based terrain, and a strategic combat system. There are opportunities for open-source driven expansion in the future.

The Environment

Players can join any available server, each containing an automatically generated tile-based game map of configurable size. Some tiles, such as food-bearing forest tiles and grass tiles, are traversable, while others, such as water and solid stone, are not. To sustain their health, players must obtain food and water and avoid combat damage from other agents. The platform comes with a procedural environment generator and visualization tools for map tile visitation distribution, value functions, and agent-agent dependencies of learned policies.

The team trained a fully connected architecture using vanilla policy gradients, with a value function baseline and reward discounting as the only enhancements. Variable-length observations, such as the list of surrounding players, are converted into a single fixed-length vector by computing the maximum across all players.

Neural MMO resolves a couple of limitations of previous game-based environments, but many are still left unsolved. A few users are excited about this news. One user commented on Hacker News, "What I find interesting about this is that the agents naturally become pacifists." Others think the company should come up with novel ideas rather than replicate known results. Another user commented on Hacker News, "So far, they are replicating known results from evolutionary game theory (pacifism & niches) to economics (distance & diversification). I wonder when and if they will surprise some novel results."

To know more about this news, check out OpenAI's official blog post.
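The observation trick mentioned above, reducing a variable-length list of nearby players to one fixed-length vector with an element-wise maximum, can be sketched in a few lines of NumPy. The feature layout here is a made-up example, not Neural MMO's actual observation space.

```python
# Sketch of the observation trick described above: collapse a variable-length
# list of per-player feature vectors into one fixed-length vector with an
# element-wise maximum. The feature layout is a made-up example.
import numpy as np

FEAT_DIM = 4    # e.g. distance, health, food, water (hypothetical layout)

def pool_observations(per_player_features):
    """per_player_features: (n_players, FEAT_DIM) array; n_players varies."""
    if len(per_player_features) == 0:
        return np.zeros(FEAT_DIM)               # no one nearby
    return per_player_features.max(axis=0)      # element-wise max across players

nearby = np.array([
    [0.1, 0.9, 0.3, 0.5],    # three nearby players...
    [0.7, 0.2, 0.8, 0.1],
    [0.4, 0.6, 0.2, 0.9],
])
print(pool_observations(nearby))    # -> [0.7 0.9 0.8 0.9], always length FEAT_DIM
```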
AI Village shares its perspective on OpenAI's decision to release a limited version of GPT-2
OpenAI team publishes a paper arguing that long term AI safety research needs social scientists
OpenAI's new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words

Unity has launched the ‘Obstacle Tower Challenge’ to test AI game players

Sugandha Lahoti
29 Jan 2019
2 min read
Unity has announced a video game challenge, the Obstacle Tower Challenge, which will test the vision, control, planning, and generalization capabilities of AI software. The Obstacle Tower Challenge uses a game-like environment of platform-style gameplay, with puzzles and planning problems inside a tower setting spanning almost 100 floors. The challenge will examine how AI software performs in computer vision, locomotion skills, and high-level planning.

The challenge will begin on Monday, February 11 and will run through Friday, May 24. As the challenge opens, participants can review all the rules and regulations, download the Starter Kit, and begin training their agents. Round 1, which will run from February 11 to March 31, will have participants playing up to Floor 25 of the Obstacle Tower. The winners will proceed to round 2, which will span all 100 floors, after which the final winners will be announced on June 14. Participants will have the opportunity to win prizes in the form of cash, travel vouchers, and Google Cloud Platform credits, valued at over $100,000.

"Each of the Tower floors are procedurally-generated, which means an AI agent must not only be able to solve a single version of the Tower but any arbitrary version as well. In this way, we're testing the generalization ability of agents, a key capability that has not often been analyzed by benchmarks in the past," said Danny Lange, Vice President of AI and Machine Learning, Unity Technologies. The end goal of this challenge is to spur new AI research and solve new problems in reinforcement learning.

AI has been making great progress in conquering high-profile games. Recently, Google DeepMind's AI AlphaStar defeated StarCraft II pros TLO and MaNa, winning 10-1 against the gamers.

Unity updates its TOS, developers can now use any third party service that integrate into Unity.
Improbable says Unity blocked SpatialOS; Unity responds saying it has shut down Improbable and not SpatialOS.
Unity and Baidu collaborate for simulating the development of autonomous vehicles
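The generalization test Lange describes can be sketched as a simple evaluation loop: run the agent on many unseen, procedurally generated towers and average the episode score. The environment factory and agent below are hypothetical stand-ins, not Unity's actual evaluation harness.

```python
# Sketch of a generalization test: score an agent on many unseen,
# procedurally generated towers and average the result. make_env and
# agent are hypothetical stand-ins.
def evaluate(agent, make_env, seeds):
    scores = []
    for seed in seeds:
        env = make_env(seed)                    # a fresh, never-seen tower layout
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, info = env.step(agent.act(obs))
            total += reward
        scores.append(total)
    return sum(scores) / len(scores)

# Round-1 style bar: an average score of five across held-out towers advances.
# mean_score = evaluate(agent, make_env, seeds=range(100, 110))
```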

Unity ML-Agents Toolkit v0.6 gets two updates: improved usability of Brains and workflow for Imitation Learning

Sugandha Lahoti
19 Dec 2018
2 min read
The Unity ML-Agents Toolkit v0.6 is getting two major enhancements, the Unity team announced in a blog post on Monday. The first update turns Brains from MonoBehaviors into ScriptableObjects, improving their usability. The second update allows developers to record expert demonstrations and use them for offline training, providing a better user workflow for imitation learning.

Brains are now ScriptableObjects

In previous versions of the ML-Agents Toolkit, Brains were GameObjects attached as children to the Academy GameObject, which made it difficult to re-use Brains across Unity scenes within the same project. In the v0.6 release, Brains are ScriptableObjects, making them manageable as standard Unity assets. This makes it easy to use them across scenes and to create Agent Prefabs with Brains pre-attached.

The Unity team has introduced the Learning Brain ScriptableObject, which replaces the previous Internal and External Brains, along with Player and Heuristic Brain ScriptableObjects that replace the Player and Heuristic Brain Types, respectively. Developers can no longer change the type of Brain with the Brain Type drop-down; instead, they create a different Brain for Player and Learning from the Assets menu. The BroadcastHub in the Academy component keeps track of which Brains are being trained.

Record expert demonstrations for offline training

The Demonstration Recorder allows users to record the actions and observations of an Agent while playing a game. These recordings can be used to train Agents at a later time via imitation learning, or to analyze the data. In essence, the Demonstration Recorder lets training data be captured once and reused across multiple training sessions, rather than captured live every time. Users can add the Demonstration Recorder component to their Agent, check Record, and give the demonstration a name. To train an Agent with the recording, users modify the hyperparameters in the training configuration.

Check out the documentation on GitHub for more information. Read more about the new enhancements on the Unity Blog.

Getting started with ML agents in Unity [Tutorial]
Unity releases ML-Agents toolkit v0.5 with Gym interface, a new suite of learning environments
Unite Berlin 2018 Keynote: Unity partners with Google, launches Ml-Agents ToolKit 0.4, Project MARS and more
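For context, driving a Unity environment and its Brains from Python in the v0.5/v0.6 era looked roughly like the hedged sketch below. Exact module paths, method names, and the action size varied between releases, so treat the specifics as assumptions rather than a verified API.

```python
# Hedged sketch of driving a Unity build from Python in the v0.5/v0.6 era.
# Module paths and method names varied between releases; the specifics here
# are assumptions rather than a verified API.
from mlagents.envs import UnityEnvironment
import numpy as np

ACTION_DIM = 2    # assumed continuous action size for the example Brain

env = UnityEnvironment(file_name="MyBuiltGame")    # path to a built Unity player
brain_name = env.brain_names[0]                    # Brains route observations/actions
info = env.reset(train_mode=True)[brain_name]

for _ in range(100):
    actions = np.random.randn(len(info.agents), ACTION_DIM)   # random actions
    info = env.step({brain_name: actions})[brain_name]        # step the named Brain
env.close()
```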

Anthony Levandowski announces Pronto AI and makes a coast-to-coast self-driving trip

Sugandha Lahoti
19 Dec 2018
2 min read
Anthony Levandowski is back in the self-driving space with a new company, Pronto AI. This Tuesday, he announced in a blog post on Medium that he has completed a trip across the country in a self-driving car without any human intervention. He is also developing a $5,000 aftermarket driver assistance system for semi-trucks, which will handle the steering, throttle, and brakes on the highway.

https://twitter.com/meharris/status/1075036576143466497

Previously, Levandowski was at the center of a controversy between Alphabet's self-driving car company Waymo and Uber, which got into a legal battle over confidential documents he had allegedly taken with him. He was briefly barred from the autonomous driving industry during the trial; however, the companies settled the case early this year. After laying low for a while, he is back with Pronto AI and its first ADAS (advanced driver assistance system). "I know what some of you might be thinking: 'He's back?'" Levandowski wrote in his Medium post announcing Pronto's launch. "Yes, I'm back."

Levandowski told the Guardian that he traveled in a self-driving vehicle from San Francisco to New York without human intervention. He didn't touch the steering wheel or pedals for the full 3,099 miles, except during periodic rest stops. He posted a video that shows a portion of the drive, though it's hard to fact-check the full journey. The car was a modified Toyota Prius that used only video cameras, computers, and basic digital maps to make the cross-country trip.

In the Medium blog post, he also announced the development of a new camera-based ADAS. Named Copilot by Pronto, it delivers advanced features built specifically for Class 8 vehicles, with driver comfort and safety top of mind. It will offer lane keeping, cruise control, and collision avoidance for commercial semi-trucks, and will be rolled out in early 2019.

Alphabet's Waymo to launch the world's first commercial self-driving cars next month
Apex.AI announced Apex.OS and Apex.Autonomy for building failure-free autonomous vehicles
Uber manager warned the leadership team of the inadequacy of safety procedures in their prototype robo-taxis early March, reports The Information

Unity releases ML-Agents toolkit v0.5 with Gym interface, a new suite of learning environments

Sugandha Lahoti
12 Sep 2018
2 min read
In its push to become the go-to platform for artificial intelligence, Unity has released a new version of its ML-Agents Toolkit. ML-Agents Toolkit v0.5 comes with more flexible action specification, a Gym interface for researchers to more easily integrate ML-Agents environments into their training workflows, and a new suite of learning environments replicating some of the Continuous Control benchmarks used in deep reinforcement learning. Unity has also released a research paper on the platform, titled "Unity: A General Platform for Intelligent Agents".

Changes to the ML-Agents Toolkit v0.5

Highlighted changes to repository structure

- The python folder has been renamed ml-agents. It now contains a python package called mlagents.
- The unity-environment folder, containing the Unity project, has been renamed UnitySDK.
- The protobuf definitions used for communication have been added to a new protobuf-definitions folder.
- Example curricula and the trainer configuration file have been moved to a new config sub-directory.

New features

- A new package, gym-unity, provides a gym interface to wrap UnityEnvironment (see the sketch below).
- The toolkit can now run multiple concurrent training sessions with the --num-runs=<n> command line option.
- Meta-Curriculum has been added, which supports curriculum learning in multi-brain environments.
- Action Masking for Discrete Control makes it possible to mask invalid actions each step, limiting the actions an agent can take.

Fixes & performance improvements

- Replaced some activation functions with swish.
- Visual observations use PNG instead of JPEG to avoid compression losses.
- Improved python unit tests.
- Multiple training sessions are available on a single GPU.
- Curriculum lessons are now tracked correctly.
- Developers can now visualize value estimates when using models trained with PPO from Unity with GetValueEstimate().
- It is now possible to specify which camera the Monitor displays to.
- Console summaries will now be displayed even when running in inference mode from python.
- The minimum supported Unity version is now 2017.4.

You can read all about the new version of the ML-Agents Toolkit on the Unity Blog.

Unity releases ML-Agents v0.3: Imitation Learning, Memory-Enhanced Agents and more.
Unity Machine Learning Agents: Transforming Games with Artificial Intelligence.
Unite Berlin 2018 Keynote: Unity partners with Google, launches Ml-Agents ToolKit 0.4, Project MARS and more.
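Here is a hedged sketch of what the new gym interface enables: wrapping a built Unity environment so a standard gym-style loop works unchanged. The UnityEnv class and its arguments follow the v0.5-era gym-unity package and may differ in other releases.

```python
# Hedged sketch of the gym-unity wrapper: a built Unity environment driven
# through the standard gym reset/step loop. Class and argument names follow
# the v0.5-era package and may differ in other releases.
from gym_unity.envs import UnityEnv

env = UnityEnv("MyBuiltGame", worker_id=0, use_visual=False)

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())  # random policy
    total_reward += reward
print(total_reward)
env.close()
```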

Unite Berlin 2018 Keynote: Unity partners with Google, launches Ml-Agents ToolKit 0.4, Project MARS and more

Sugandha Lahoti
20 Jun 2018
5 min read
Unite Berlin 2018, Unity's annual developer conference, kicked off on June 19, 2018. This three-day extravaganza is filled with new announcements, sessions, and workshops from the creators of Unity - a place to develop, network, and participate with artists, developers, filmmakers, researchers, storytellers, and other creators. Day 1 was inaugurated with the keynote, presented by John Riccitiello, CEO of Unity Technologies. It featured previews of upcoming Unity technology, most prominently Unity's alliance with Google Cloud to help developers build connected games. Let's take a look at what was showcased.

Connected Games with Unity and Google Cloud

Unity and Google Cloud have collaborated to help developers create real-time multiplayer games. They are building a suite of managed services and tools to help developers build, test, and run connected experiences while offloading the hard work of quickly scaling game servers to Google Cloud. Games can be easily scaled to meet the needs of the players, and game developers can harness the massive power of Google Cloud without having to be cloud experts. Here's what Google Cloud with Unity has in store:

- Game-Server Hosting: Streamlined resources to develop and scale hosted multiplayer games.
- Sample FPS: A production-quality sample project of a real-time multiplayer game.
- New ECS Networking Layer: Fast, flexible networking code that delivers performant multiplayer by default.

Unity ML-Agents Toolkit v0.4

A new version of the Unity ML-Agents Toolkit was also announced at Unite Berlin. The v0.4 toolkit hosts multiple updates requested by the Unity community. Game developers now have the option to train environments directly from the Unity editor rather than as built executables: developers simply launch the learn.py script and then press the "play" button from within the editor to perform training. Unity has also launched a set of two new challenging environments, Walker, a physics-based humanoid ragdoll, and Pyramids, a complex sparse-reward environment. There are also algorithmic improvements in reinforcement learning: agents can now learn to solve tasks that previous versions learned only with great difficulty. Unity is also partnering with Udacity to launch a Deep Reinforcement Learning Nanodegree to help students and professionals gain a deeper understanding of reinforcement learning.

Augmented Reality with Project MARS

Unity also announced Project MARS, a Mixed and Augmented Reality studio that will be provided as a Unity extension. The studio will allow game developers to build AR and MR applications that intelligently interact with any real-world environment, with little-to-no custom coding.

[Video: Unite Berlin - AR Keynote Reel]

MARS will include abstraction layers for object recognition, location, and map data. It will have sample templates with simulated rooms for testing against different environments inside the editor. AR-specific gizmos will be provided to easily define spatial conditions like plane size, elevation, and proximity without requiring code or precise measurements. It will also support elements ranging from face masks to avatars to entire rooms of digital art. Project MARS will be coming to Unity as an experimental package later this year.

Unity also unveiled a Facial AR Remote component. Powered by augmented reality, this component can be used to perform and capture animated characters, allowing filmmakers and CGI developers to shoot CG content with body movement, just as they would with live action.

Kinematica - Machine Learning powered Animation system

Unity also showcased its AI research by announcing Kinematica, an all-new ML-powered animation system. Traditional animation systems generally require animators to explicitly define transitions; Kinematica has no superimposed structure, like graphs or blend trees. It generates smooth transitions and movements by applying machine learning to any data source, so game developers and animators no longer need to manually map out animation graphs.

[Video: Unite Berlin 2018 - Kinematica Demo]

Kinematica decides in real time how to combine data clips from a single library into a sequence that matches the controller input, the environment content, and the gameplay requests. As with Project MARS, Kinematica will be available later this year as an experimental package.

New Prefab workflows

The entire Prefab system has been revamped with multiple improvements, and the improved Prefab workflow is now available as a preview build. New additions include Prefab Mode, prefab variants, and nested prefabs. Prefab Mode allows faster, more efficient, and safer editing of Prefabs in an isolated mode, without adding them to the actual scene. Developers can now edit a model prefab and have the changes propagated to all prefab variants. With nested prefabs, teams can work on different parts of a prefab and then come together for the final asset.

Predictive Personalized Placements

Personalized placements aim to bring the best of both worlds to players and the commercial business. With this new feature, game developers can create tailor-made game experiences for each player. The feature runs on an engine powered by predictive analytics, which determines what to show to each player based on what will drive the highest engagement and lifetime value - an ad, an IAP promotion, a notification of a new feature, or a cross-promotion. And the algorithm will only get better with time.

These were only a select few of the announcements presented in the Unite Berlin keynote. You can watch the full video on YouTube. Details on other sessions, seminars, and activities are available on the Unite website.

GitHub for Unity 1.0 is here with Git LFS and file locking support
Unity announces a new automotive division and two-day Unity AutoTech Summit
Put your game face on! Unity 2018.1 is now available