Meta networks, as the name suggests, are a form of the model-based meta-learning approach. In standard deep learning, the weights of a neural network are updated by stochastic gradient descent (SGD), which takes a long time to train. With SGD, every training data point (or mini-batch) triggers a weight update, so with a batch size of 1 the model performs one update per example, leading to very slow optimization. Weights learned through this gradual process are known as slow weights.
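To make the slow-weights problem concrete, here is a minimal, hypothetical NumPy sketch of plain SGD with a batch size of 1 on a toy regression task (the data and learning rate are made up for illustration). Note how the weights change only a little at a time, once per training point:

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(1000, 5)       # 1,000 toy training points, 5 features
true_w = np.random.randn(5)
y = X @ true_w                     # toy regression targets

w = np.zeros(5)                    # the "slow" weights
lr = 0.01
for epoch in range(3):
    for x_i, y_i in zip(X, y):     # batch size of 1: one update per example
        grad = 2 * (x_i @ w - y_i) * x_i
        w -= lr * grad             # small weight update after every point
```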
Meta networks address the problem of slow weights by training a second neural network, in parallel with the original one, to predict the parameters of the target task network. The weights generated this way are called fast weights. If you recall, LSTM meta-learners (see Chapter 4, Optimization-Based Methods) are built on similar grounds: they learn to predict the parameter updates of a base learner network.
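As a rough illustration of this idea (not the full MetaNet algorithm, which combines slow and fast weights through layer augmentation), the PyTorch sketch below has a small meta network map the loss gradients of a base learner to fast weights of the same shape in a single forward pass, rather than through many SGD steps. The class `FastWeightGenerator`, the single-layer base learner, and the simple additive combination of slow and fast weights are all hypothetical simplifications:

```python
import torch
import torch.nn as nn

# Base learner: a single linear layer whose "slow" weights would
# normally be trained by many SGD updates.
base = nn.Linear(4, 1)

# Meta network: maps the base learner's loss gradients to "fast"
# weights of the same total size (a hypothetical simplification).
class FastWeightGenerator(nn.Module):
    def __init__(self, n_params):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_params, 32), nn.ReLU(),
                                 nn.Linear(32, n_params))

    def forward(self, grad_vec):
        return self.net(grad_vec)

n_params = sum(p.numel() for p in base.parameters())
meta = FastWeightGenerator(n_params)

# Compute the base learner's loss and its gradients on one batch...
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = nn.functional.mse_loss(base(x), y)
grads = torch.autograd.grad(loss, list(base.parameters()))
grad_vec = torch.cat([g.flatten() for g in grads])

# ...then let the meta network predict fast weights in one shot.
fast = meta(grad_vec.detach())
fast_w = fast[: base.weight.numel()].view_as(base.weight)
fast_b = fast[base.weight.numel():].view_as(base.bias)

# Predict using slow weights augmented by the generated fast weights.
y_fast = nn.functional.linear(x, base.weight + fast_w, base.bias + fast_b)
```

The key point of the sketch is that the task-specific parameters come from one forward pass of the meta network instead of a long sequence of gradient updates; this is what makes the generated weights "fast".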