
AmoebaNets: Google’s new evolutionary AutoML

  • 2 min read
  • 16 Mar 2018

Artificial neural networks that can detect objects within an image require careful design by experts over years of difficult research. Once designed, they address one specific task, such as finding what's in a photograph, calling a genetic variant, or helping diagnose a disease. Google believes one approach to generating these ANN architectures automatically is through evolutionary algorithms. So today Google introduced AmoebaNets, image classifiers produced by an evolutionary algorithm that achieve state-of-the-art results on datasets such as ImageNet and CIFAR-10.

Google offers AmoebaNets as an answer to questions such as:

  • By using computational resources to programmatically evolve image classifiers at unprecedented scale, can one achieve solutions with minimal expert participation? How good can today's artificially evolved neural networks be?

These questions were addressed in two papers:

  1. “Large-Scale Evolution of Image Classifiers,” presented at ICML 2017. In this paper, the authors set up an evolutionary process with simple building blocks and trivial initial conditions. The idea was to "sit back" and let evolution at scale do the work of constructing the architecture (a toy sketch of such a loop follows this list).
  2. “Regularized Evolution for Image Classifier Architecture Search” (2018). This paper scaled up the computation using Google's new TPUv2 chips. This combination of modern hardware, expert knowledge, and evolution produced state-of-the-art models on CIFAR-10 and ImageNet, two popular benchmarks for image classification.
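For a concrete sense of what the first paper's "sit back" approach looks like, here is a minimal Python sketch of an evolutionary architecture-search loop. Everything in it is a toy stand-in invented for illustration: an "architecture" is just a list of integer choices and train_and_eval is a fake fitness function, whereas the paper evolves real convolutional networks and trains each child on image data.

```python
import random

# Toy search space: an "architecture" is a list of ARCH_LEN choices,
# each picking one of NUM_OPS operations. The real paper evolves
# full network graphs; this encoding is purely illustrative.
ARCH_LEN, NUM_OPS = 8, 5

def random_architecture():
    return [random.randrange(NUM_OPS) for _ in range(ARCH_LEN)]

def mutate(arch):
    # One random modification per child: a simple mutation operator.
    child = list(arch)
    child[random.randrange(ARCH_LEN)] = random.randrange(NUM_OPS)
    return child

def train_and_eval(arch):
    # Fake "validation accuracy": rewards one op, plus noise.
    # In the paper, this step trains the network from scratch.
    return sum(op == 3 for op in arch) / ARCH_LEN + random.gauss(0, 0.01)

def evolve(population_size=50, cycles=500, sample_size=10):
    # Trivial initial conditions: a population of random architectures.
    population = [(train_and_eval(a), a)
                  for a in (random_architecture() for _ in range(population_size))]
    for _ in range(cycles):
        # Tournament selection: sample a few, copy and mutate the best.
        _, parent = max(random.sample(population, sample_size))
        child = mutate(parent)
        population.append((train_and_eval(child), child))
        # Plain (non-regularized) evolution: the *worst* network dies.
        population.remove(min(population))
    return max(population)

best_accuracy, best_arch = evolve()
print(best_accuracy, best_arch)
```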

One important feature of the evolutionary algorithm the team used in their second paper (the one behind AmoebaNets) is a form of regularization, which means:

  • Instead of letting the worst neural networks die, they remove the oldest ones, regardless of how good they are (see the sketch after this list). This improves robustness to changes in the task being optimized and tends to produce more accurate networks in the end.
  • Since weight inheritance is not allowed, all networks must train from scratch. Therefore, this form of regularization selects for networks that remain good when they are re-trained.
  • These models achieve state-of-the-art results for CIFAR-10 (mean test error = 2.13%), mobile-size ImageNet (top-1 accuracy = 75.1% with 5.1M parameters) and ImageNet (top-1 accuracy = 83.1%).
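Relative to the plain loop sketched above, the only structural change in this regularized (or "aging") evolution is which network is discarded each cycle. The following hypothetical sketch reuses the toy helpers from the earlier snippet; it illustrates the aging mechanism only and is not Google's implementation.

```python
from collections import deque
import random

def regularized_evolve(population_size=50, cycles=500, sample_size=10):
    # A deque makes "discard the oldest" a constant-time operation.
    population = deque(
        (train_and_eval(a), a)
        for a in (random_architecture() for _ in range(population_size))
    )
    history = list(population)  # every network ever evaluated
    for _ in range(cycles):
        # Tournament selection, exactly as before.
        _, parent = max(random.sample(list(population), sample_size))
        child = mutate(parent)
        entry = (train_and_eval(child), child)
        population.append(entry)
        history.append(entry)
        # Regularization ("aging"): evict the *oldest* network,
        # regardless of its accuracy. Since there is no weight
        # inheritance, a lineage only survives if its retrained
        # descendants keep scoring well.
        population.popleft()
    # Report the best network seen over the whole run.
    return max(history)

print(regularized_evolve())
```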

Read more about AmoebaNets on the Google Research Blog.