Hands-On Meta Learning with Python

You're reading from Hands-On Meta Learning with Python: Meta learning using one-shot learning, MAML, Reptile, and Meta-SGD with TensorFlow

Product type: Paperback
Published: Dec 2018
Publisher: Packt
ISBN-13: 9781789534207
Length: 226 pages
Edition: 1st
Author: Sudharsan Ravichandiran
Table of Contents (12 chapters)

Preface
1. Introduction to Meta Learning
2. Face and Audio Recognition Using Siamese Networks
3. Prototypical Networks and Their Variants
4. Relation and Matching Networks Using TensorFlow
5. Memory-Augmented Neural Networks
6. MAML and Its Variants
7. Meta-SGD and Reptile
8. Gradient Agreement as an Optimization Objective
9. Recent Advancements and Next Steps
10. Assessments
11. Other Books You May Enjoy

What this book covers

Chapter 1, Introduction to Meta Learning, helps us to understand what meta learning is and covers the different types of meta learning. We will also learn how meta learning tackles few-shot learning, that is, learning from only a few data points. We will then become familiar with gradient descent. Later in the chapter, we will look at optimization as a model for the few-shot learning setting.
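
As a taste of the optimization view mentioned above, here is a minimal Python sketch of vanilla gradient descent on a one-dimensional quadratic loss; the loss function and learning rate are illustrative, not from the book:

# Minimal sketch (illustrative): gradient descent on loss(theta) = (theta - 3)^2.
theta = 0.0
lr = 0.1                      # learning rate (illustrative value)
for step in range(50):
    grad = 2 * (theta - 3)    # derivative of (theta - 3)^2
    theta -= lr * grad        # the standard gradient descent update
print(theta)                  # converges toward the minimum at 3.0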

Chapter 2, Face and Audio Recognition Using Siamese Networks, starts by explaining what siamese networks are and how they are used in the one-shot learning setting. We will look at the architecture of a siamese network and some of its applications. Then, we will see how to use siamese networks to build face and audio recognition models.
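
The core idea is that twin networks with shared weights embed two inputs, and a distance between the embeddings acts as a similarity score. Below is a minimal NumPy sketch of that energy function with a contrastive loss; the embedding, weights, and margin are all hypothetical, not the book's code:

import numpy as np

def embed(x, W):
    # Both inputs pass through the *same* weights: that is what makes it siamese.
    return np.tanh(x @ W)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))               # shared embedding weights (illustrative)
x1, x2 = rng.normal(size=4), rng.normal(size=4)

e1, e2 = embed(x1, W), embed(x2, W)
energy = np.linalg.norm(e1 - e2)          # small => similar pair, large => dissimilar
y, margin = 1.0, 1.0                      # y = 1 means "same class" (hypothetical label)
loss = y * energy**2 + (1 - y) * max(0.0, margin - energy)**2   # contrastive loss
print(energy, loss)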

Chapter 3, Prototypical Networks and Their Variants, explains what prototypical networks are and how they are used in the few-shot learning scenario. We will see how to build a prototypical network to perform classification on the Omniglot character set. Later in the chapter, we will look at different variants of prototypical networks, such as Gaussian prototypical networks and semi-prototypical networks.
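
The central computation is simple: each class prototype is the mean of that class's support embeddings, and a query is classified by its distance to the prototypes. A minimal NumPy sketch with hypothetical, randomly generated embeddings:

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 5-way, 3-shot episode with 8-dimensional embeddings.
support = rng.normal(size=(5, 3, 8))       # [class, shot, embedding_dim]
query = rng.normal(size=8)

prototypes = support.mean(axis=1)          # prototype = mean of a class's support embeddings
dists = np.linalg.norm(prototypes - query, axis=1)   # Euclidean distance to each prototype
probs = np.exp(-dists) / np.exp(-dists).sum()        # softmax over negative distances
print(probs.argmax())                      # predicted class = nearest prototype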

Chapter 4, Relation and Matching Networks Using TensorFlow, helps us to understand the relation network architecture and how relation networks are used in one-shot, few-shot, and zero-shot learning settings. We will then see how to build a relation network using TensorFlow. Next, we will learn about the matching network and its architecture. We will also explore full contextual embeddings and how to build a matching network using TensorFlow.
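
A matching network, at its simplest, predicts a query's label as an attention-weighted combination of the support labels, with attention given by a softmax over cosine similarities. A minimal NumPy sketch with hypothetical embeddings (the book builds the full model in TensorFlow):

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
support = rng.normal(size=(4, 8))          # embeddings of 4 support examples
labels = np.eye(4)                         # one-hot labels (one example per class here)
query = rng.normal(size=8)

sims = np.array([cosine(query, s) for s in support])
attn = np.exp(sims) / np.exp(sims).sum()   # attention over the support set
y_hat = attn @ labels                      # prediction = attention-weighted label mix
print(y_hat.round(3))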

Chapter 5, Memory-Augmented Neural Networks, covers what neural Turing machines (NTMs) are and how they make use of external memory for storing and retrieving information. We will look at the different addressing mechanisms used in NTMs, and then we will learn about memory-augmented neural networks (MANNs) and how they differ from the NTM architecture.
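
Content-based addressing, which both architectures rely on, compares a controller-emitted key against every memory row and reads back a softmax-weighted mixture. A minimal NumPy sketch; the memory size and key strength are illustrative:

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

rng = np.random.default_rng(0)
memory = rng.normal(size=(6, 8))    # 6 memory slots of width 8 (illustrative sizes)
key = rng.normal(size=8)            # key emitted by the controller
beta = 2.0                          # key strength: sharpens the attention

scores = beta * np.array([cosine(key, row) for row in memory])
w = np.exp(scores) / np.exp(scores).sum()   # content-based read weights
read_vector = w @ memory                    # read = weighted sum over memory rows
print(w.round(3))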

Chapter 6, MAML and Its Variants, deals with one of the most popular meta learning algorithms, model-agnostic meta learning (MAML). We will explore what MAML is and how it is used in supervised and reinforcement learning settings. We will also see how to build MAML from scratch. Then, we will learn about adversarial meta learning and CAML, which is used for fast context adaptation in meta learning.
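
MAML has a two-level structure: adapt to each task with a gradient step, then update the shared initialization by differentiating through that adaptation. The toy sketch below uses one-dimensional quadratic task losses so the meta-gradient can be written by hand; every number in it is illustrative, not the book's code:

import numpy as np

# Toy MAML: task i has loss (theta - t_i)^2, so one inner step gives
# phi = theta - alpha * 2 * (theta - t), and d(phi)/d(theta) = 1 - 2*alpha.
rng = np.random.default_rng(0)
theta, alpha, beta = 0.0, 0.1, 0.05        # init, inner lr, outer lr (illustrative)

for it in range(200):
    tasks = rng.uniform(-2, 2, size=5)     # sample a batch of task targets
    meta_grad = 0.0
    for t in tasks:
        phi = theta - alpha * 2 * (theta - t)          # inner adaptation step
        meta_grad += 2 * (phi - t) * (1 - 2 * alpha)   # gradient through the inner step
    theta -= beta * meta_grad / len(tasks)             # meta-update of the initialization
print(theta)   # drifts toward the mean task target (0 here)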

Chapter 7, Meta-SGD and Reptile, explains how Meta-SGD is used to learn all the ingredients of gradient descent, such as the initial weights, the learning rates, and the update direction. We will see how to build Meta-SGD from scratch. Later in the chapter, we will learn about the Reptile algorithm and see how it serves as an improvement over MAML. We will also see how to use the Reptile algorithm for sine wave regression.
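
Reptile avoids MAML's differentiation through the inner loop: run a few plain SGD steps on a sampled task, then nudge the initialization toward the adapted weights. A minimal sketch on the same toy quadratic tasks as above; all hyperparameters are illustrative:

import numpy as np

rng = np.random.default_rng(0)
theta, inner_lr, eps, k = 0.0, 0.1, 0.1, 5   # init, inner lr, Reptile step, inner steps

for it in range(300):
    t = rng.uniform(-2, 2)             # sample a task with loss (phi - t)^2
    phi = theta
    for _ in range(k):                 # k steps of plain SGD on this task
        phi -= inner_lr * 2 * (phi - t)
    theta += eps * (phi - theta)       # Reptile update: move toward the adapted weights
print(theta)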

Chapter 8, Gradient Agreement as an Optimization Objective, covers how we can use gradient agreement as an optimization objective in the meta learning setting. We will see what gradient agreement is and how it can enhance meta learning algorithms. Later in the chapter, we will learn how to build a gradient agreement algorithm from scratch.
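
The idea is to weight each task's gradient by how well it agrees with the other tasks' gradients, so tasks pulling in the consensus direction dominate the meta-update. A minimal NumPy sketch of one such weighting (inner product with the average gradient, normalized); this exact normalization is a simplification for illustration, not necessarily the book's formula:

import numpy as np

rng = np.random.default_rng(0)
grads = rng.normal(size=(4, 3))        # hypothetical per-task gradients (4 tasks, 3 params)

g_mean = grads.mean(axis=0)
agreement = grads @ g_mean             # inner product with the average gradient
w = agreement / (np.abs(agreement).sum() + 1e-8)   # normalized agreement weights
update = w @ grads                     # tasks that agree contribute more to the update
print(w.round(3), update.round(3))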

Chapter 9, Recent Advancements and Next Steps, starts by explaining task-agnostic meta learning, and then we will see how meta learning is used in an imitation learning setting. Next, we will learn how to apply MAML in an unsupervised learning setting using the CACTUs algorithm. Finally, we will explore a deep meta learning algorithm called learning to learn in the concept space.
