Google strides forward in deep learning: open sources Google Lucid to answer how neural networks make decisions

  • 2 min read
  • 07 Mar 2018

In an effort to deepen neural network interpretability, Google has released Google Lucid, a neural network visualization library, alongside a new article, “The Building Blocks of Interpretability,” which addresses one of the most popular questions in deep learning: how do neural networks make decisions?

Google Lucid is a neural network visualization library built on Google’s earlier work on DeepDream, the project that visualized how neural networks understand images and became famous for its psychedelic results. Lucid extends that work with feature visualization techniques that can also produce more artistic DeepDream-style images. At its core, it is a collection of infrastructure and tools for research in neural network interpretability. In particular, it provides state-of-the-art implementations of feature visualization techniques and flexible abstractions that make it easy to explore new research directions.

To make Lucid easier to work with, Google is also releasing Colab notebooks. These notebooks make it straightforward to reproduce the visualizations with Lucid: just open a notebook and click a button to run the code, without worrying about setup requirements.
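As a rough illustration of the kind of code those notebooks run, here is a minimal sketch following the pattern of Lucid’s public quickstart. It is not taken from the article, and the model choice, layer name, and channel index are arbitrary examples.

# Minimal sketch of a feature visualization with Lucid.
# Assumes TensorFlow 1.x and the lucid package are installed;
# the layer name and channel index are illustrative choices.
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

model = models.InceptionV1()   # a pretrained GoogLeNet from Lucid's model zoo
model.load_graphdef()

# Optimize an input image that maximally activates channel 476 of layer mixed4a.
images = render.render_vis(model, "mixed4a_pre_relu:476")

Running this in a notebook displays the optimized image inline, which is essentially what the one-click Colab demos do.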

Adding to the excitement, Google’s new Distill article, “The Building Blocks of Interpretability,” shows how feature visualization, combined with other interpretability techniques, gives a clearer view inside a neural network. This makes it possible to see which decisions the network makes at individual points and how those decisions influence the final output. For example, Google says, “we can see things like how a network detects a floppy ear, and then that increases the probability it gives to the image being a ‘Labrador retriever’ or ‘beagle’.”

The article explores techniques for understanding which neurons fire in the network by attaching a visualization to each neuron, almost a kind of MRI for neural networks. The same approach can also zoom out and show how the entire image is perceived at different layers, revealing a progression from simple combinations of edges, to rich textures and 3D structure, to high-level object parts.
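One way to approximate that layer-by-layer view yourself is to render channel visualizations at successively deeper layers of the same model using Lucid’s objectives module. The sketch below is not from the article; the layer names and channel index are illustrative assumptions.

# Sketch: visualize channels at increasingly deep layers to see the
# progression from edges, to textures, to high-level structure.
# Layer names and channel index 0 are illustrative choices.
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.render as render

model = models.InceptionV1()
model.load_graphdef()

for layer in ["mixed3b", "mixed4b", "mixed4e", "mixed5b"]:
    # Build an objective that maximizes one channel of this layer.
    obj = objectives.channel(layer + "_pre_relu", 0)
    images = render.render_vis(model, obj)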

The purpose of this research, Google says, is to “address one of the most exciting questions in Deep Learning: how do neural networks do what they do?” However, it adds, “This work only scratches the surface of the kind of interfaces that we think it’s possible to build for understanding neural networks. We’re excited to see what the community will do.”

You can read the entire article on Distill.