Just as there are many ways to solve an equation, there are many optimization methods for learning the parameters of a neural network, and researchers have open sourced a large number of them. Gradient descent remains the most widely applicable choice, but when we target a specific type of neural network problem, it is worth exploring optimization techniques better suited to that task.
In this chapter, we looked at two well-known optimization-based approaches to one-shot learning: MAML and the LSTM meta-learner. We learned how MAML approaches the one-shot learning problem by optimizing the initial parameters so that one or a few steps of gradient descent on a few data points lead to better generalization. We also explored the insights given by the LSTM meta-learner...
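To make the MAML idea concrete, the following is a minimal sketch of the inner/outer loop on toy sine-regression tasks, assuming PyTorch 2.x. The task setup, network size, learning rates, and helper names (such as sample_task) are illustrative assumptions, not the exact configuration used earlier in the chapter.

```python
import math
import torch
from torch.func import functional_call

def sample_task():
    """Return a sampler for one task: sine waves differing only in phase (toy assumption)."""
    phase = torch.rand(1).item() * math.pi
    def sample(n=5):
        x = torch.rand(n, 1) * 10 - 5
        return x, torch.sin(x + phase)
    return sample

# Shared initialization that MAML meta-learns.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 40), torch.nn.ReLU(), torch.nn.Linear(40, 1))
param_names = [name for name, _ in model.named_parameters()]
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()
inner_lr, tasks_per_batch = 0.01, 4

for meta_step in range(1000):
    meta_opt.zero_grad()
    for _ in range(tasks_per_batch):
        sample = sample_task()
        x_support, y_support = sample(5)   # a few data points for adaptation
        x_query, y_query = sample(15)      # held-out points from the same task

        # Inner loop: one gradient step away from the shared initialization.
        params = list(model.parameters())
        support_loss = loss_fn(model(x_support), y_support)
        grads = torch.autograd.grad(support_loss, params, create_graph=True)
        adapted = {name: p - inner_lr * g
                   for name, p, g in zip(param_names, params, grads)}

        # Outer loop: evaluate the adapted parameters on the query set and
        # backpropagate through the inner update into the initialization.
        query_loss = loss_fn(functional_call(model, adapted, (x_query,)), y_query)
        (query_loss / tasks_per_batch).backward()
    meta_opt.step()
```

Because the inner gradient is taken with create_graph=True, the outer update flows through the adaptation step itself, which is what pushes the initialization toward parameters from which one step of gradient descent generalizes well.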