Unity releases ML-Agents v0.3: Imitation Learning, Memory-Enhanced Agents and more

2 min read · 16 Mar 2018

The Unity team has released version 0.3 of its much-anticipated ML-Agents toolkit. The new release is jam-packed with features, including Imitation Learning, Multi-Brain Training, On-Demand Decision-Making, and Memory-Enhanced Agents.

Here’s a quick look at what each of these features brings to the table:

Behavioral cloning, an imitation learning algorithm

ML-Agents v0.3 introduces imitation learning for training agents. Imitation learning uses demonstrations of the desired behavior to provide a learning signal to the agents. For v0.3, the team chose Behavioral Cloning as the imitation learning algorithm. It works by collecting training data from a teacher agent and then using that data to directly learn a behavior.
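
At its core, Behavioral Cloning is just supervised learning on the teacher's demonstrations. Here is a minimal sketch of the idea in Python with PyTorch; the Policy class, layer sizes, and training loop are illustrative assumptions, not the ML-Agents implementation (which was built on TensorFlow at the time).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical policy network: maps an observation vector to logits over
# a discrete action space (sizes are made up for illustration).
class Policy(nn.Module):
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def behavioral_cloning(policy, teacher_obs, teacher_actions, epochs=20):
    """Fit the policy to (observation, action) pairs recorded from a teacher.

    teacher_obs: float tensor of shape (N, obs_dim)
    teacher_actions: long tensor of shape (N,)
    """
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(epochs):
        logits = policy(teacher_obs)
        loss = F.cross_entropy(logits, teacher_actions)  # imitate the teacher
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy
```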

Multi-Brain training

Using Multi-Brain Training, one can train more than one brain at a time, each with its own observation and action space. At the end of training there is a single binary (.bytes) file containing one neural network model per brain.
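
To make the idea concrete, here is a rough sketch of a multi-brain training step, reusing the hypothetical Policy class from the sketch above. The brain names, dimensions, and training_step function are invented for illustration and are not the ML-Agents API.

```python
# Each brain gets its own policy, sized to its own observation and action
# space; all of them are driven by the same environment loop.
policies = {
    "StrikerBrain": Policy(obs_dim=32, n_actions=6),
    "GoalieBrain":  Policy(obs_dim=24, n_actions=4),
}

def training_step(env_info):
    """env_info maps each brain name to that brain's batched observations."""
    actions = {}
    for brain_name, obs in env_info.items():
        logits = policies[brain_name](obs)           # per-brain forward pass
        actions[brain_name] = logits.argmax(dim=-1)  # one action per agent
    return actions  # handed back to the environment; each policy is then
                    # updated from its own agents' experience
```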

On-Demand Decision-Making

Agents can now ask for decisions on demand, rather than making a decision at every step (or every few steps) of the engine. Users can enable or disable On-Demand Decision-Making for each agent independently with the click of a button!
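
Conceptually, on-demand decision-making replaces a fixed decision cadence with an explicit request flag. The sketch below shows the pattern in plain Python with hypothetical names; in Unity itself the feature is toggled per agent in the editor rather than written by hand.

```python
class OnDemandAgent:
    """Illustrative agent that only queries its brain when asked to."""

    def __init__(self, decide):
        self.decide = decide            # any callable: observation -> action
        self.last_action = None
        self.decision_requested = True  # request a first decision immediately

    def request_decision(self):
        # Called from game logic whenever a fresh decision is actually needed,
        # e.g. when a projectile lands rather than on every physics step.
        self.decision_requested = True

    def step(self, obs):
        if self.decision_requested:
            self.last_action = self.decide(obs)
            self.decision_requested = False
        return self.last_action  # repeat the last action between requests
```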

Learning under partial observability

The Unity team has included two methods for dealing with partial observability within learning environments, through Memory-Enhanced Agents.

  • The first memory enhancement is Observation-Stacking. This allows an agent to keep track of up to the ten most recent observations within an episode and feed them all to the brain for decision-making (see the sketch after this list).
  • The second form of memory is the inclusion of an optional recurrent layer in the neural network being trained. These Recurrent Neural Networks (RNNs) can learn to keep track of important information over time in their hidden state (a sketch of this follows as well).
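
Observation-Stacking boils down to keeping a FIFO buffer of recent observations and concatenating them into the vector fed to the brain. A minimal sketch, assuming a flat vector observation; the class and its names are illustrative, not the ML-Agents source:

```python
from collections import deque

import numpy as np

class ObservationStacker:
    """Keep the last `stack_size` observations and expose them as one vector."""

    def __init__(self, obs_dim, stack_size=10):  # v0.3 allows up to ten
        self.obs_dim = obs_dim
        self.buffer = deque(maxlen=stack_size)
        self.reset()

    def reset(self):
        # Start each episode with an all-zeros history.
        for _ in range(self.buffer.maxlen):
            self.buffer.append(np.zeros(self.obs_dim, dtype=np.float32))

    def add(self, obs):
        self.buffer.append(np.asarray(obs, dtype=np.float32))
        return np.concatenate(self.buffer)  # shape: (stack_size * obs_dim,)
```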

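The recurrent option, in sketch form: an LSTM layer threads a hidden state through time, so the policy can carry forward information that the current observation alone does not contain. Again, this is an illustrative PyTorch sketch, not Unity's TensorFlow implementation.

```python
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """Illustrative policy with a memory (LSTM) layer."""

    def __init__(self, obs_dim, n_actions, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_actions)

    def forward(self, obs, state=None):
        # obs: (batch, seq_len, obs_dim); `state` carries the memory between
        # calls, starting from zeros when None.
        out, state = self.lstm(obs, state)
        return self.head(out), state
```
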
Apart from these features, the release also adds a Docker image, changes to the API semantics, and a major revamp of the documentation, all aimed at making setup and usage simpler and more intuitive. Users can check the GitHub page to download the new version and learn all the details on the release page.