The future of artificial intelligence
Chapter 2, Classifying Handwritten Digits with a Feedforward Network, presented diverse optimization techniques (Adam, RMSProp, and so on) and mentioned second-order optimization techniques. A generalization would be to also learn the update rule itself:

θ_{t+1} = θ_t + g_t( ∇_θ f(θ_t), φ )
Here, φ is the parameter of the optimizer, learned from different problem instances f: a sort of generalization, or transfer learning, of the optimizer from past problems so that it learns better on new problems. The objective to minimize under this learning-to-learn, or meta-learning, framework has to account for the time taken to learn correctly and is consequently defined over multiple timesteps:

L(φ) = E_f [ Σ_{t=1}^{T} w_t f(θ_t) ]
where the iterates and updates are produced recursively:

θ_{t+1} = θ_t + g_t
[ g_t, h_{t+1} ] = m( ∇_θ f(θ_t), h_t, φ )

and the weights w_t ≥ 0 set the importance of each timestep.
A recurrent neural network can be used as the optimizer model m, with h_t as its hidden state. Such a technique, which solves a multi-timestep optimization problem, generalizes hand-designed update rules and can improve the learning speed of neural networks overall.
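As a minimal illustration of the unrolled meta-objective, the sketch below collapses the recurrent optimizer model to a single learned step size φ (an illustrative assumption, not the method described above) and meta-trains it on a family of 1-D quadratic problems; the function and variable names are hypothetical.

```python
import numpy as np

# Toy learning-to-learn setup: the update rule is reduced to a single
# learned step size phi (a drastic simplification of the RNN optimizer),
# and each problem instance f is a quadratic f(theta) = 0.5 * a * theta^2
# with random curvature a.

def meta_objective(phi, curvatures, theta0=1.0, T=10):
    """Unrolled objective L(phi) = E_f[ sum_t f(theta_t) ], with w_t = 1."""
    total = 0.0
    for a in curvatures:
        theta = theta0
        for _ in range(T):
            grad = a * theta               # gradient of 0.5 * a * theta^2
            theta = theta - phi * grad     # the learned update rule
            total += 0.5 * a * theta ** 2  # accumulate f(theta_t)
    return total / len(curvatures)

rng = np.random.default_rng(0)
coeffs = rng.uniform(0.5, 2.0, size=20)    # sampled problem instances f

# Meta-training: gradient descent on L(phi), with the meta-gradient
# estimated by central finite differences for simplicity.
phi, meta_lr, eps = 0.1, 0.01, 1e-5
for _ in range(500):
    g = (meta_objective(phi + eps, coeffs)
         - meta_objective(phi - eps, coeffs)) / (2 * eps)
    phi -= meta_lr * g

print(f"learned step size phi = {phi:.3f}")
print(f"L(phi) after meta-training: {meta_objective(phi, coeffs):.4f}")
print(f"L(phi) at the initial phi = 0.1: {meta_objective(0.1, coeffs):.4f}")
```

Because the objective sums the loss over all T timesteps rather than only the final one, the learned φ is pushed toward fast convergence across the whole problem family, which is exactly the role of the multi-timestep weighting w_t in the formulation above.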
Researchers have been looking one step further, searching for artificial general intelligence, which aims for human-level skill...