Summary
We started this chapter by understanding what meta learning is. We learned that in meta learning, we train our model on a variety of related tasks, each with only a few data points, so that for a new related task, our model can make use of the learning obtained from the previous tasks.
Next, we learned about a popular meta-learning algorithm called MAML. In MAML, we sample a batch of tasks and, for each task Ti in the batch, we minimize the loss using gradient descent and get the task-specific optimal parameters θ'i. Then, we update our randomly initialized model parameter θ by calculating the gradients of the loss for each of the tasks Ti with the model parameterized by θ'i.
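The following is a minimal sketch of this two-level update, assuming toy linear regression tasks and a first-order approximation of the meta-gradient; the task generator, model, and step sizes below are illustrative placeholders, not the chapter's implementation:

import numpy as np

def sample_task(num_points=10):
    # Hypothetical task generator: linear regression tasks y = a*x + b
    a, b = np.random.uniform(-1, 1, 2)
    x = np.random.uniform(-5, 5, (num_points, 1))
    return x, a * x + b

def loss_grad(theta, x, y):
    # Gradient of the mean squared error of a linear model y_hat = w*x + c
    w, c = theta
    err = (x * w + c) - y
    return np.array([2 * np.mean(err * x), 2 * np.mean(err)])

alpha, beta = 0.01, 0.001            # inner- and outer-loop step sizes
theta = np.random.randn(2)           # randomly initialized model parameter

for step in range(1000):
    meta_grad = np.zeros_like(theta)
    for _ in range(5):               # batch of sampled tasks Ti
        x, y = sample_task()
        # Inner loop: one gradient step on task Ti gives theta_i'
        theta_i = theta - alpha * loss_grad(theta, x, y)
        # Outer loop contribution: gradient of the loss with the model
        # parameterized by theta_i' (first-order approximation; exact MAML
        # also backpropagates through the inner update step)
        meta_grad += loss_grad(theta_i, x, y)
    theta = theta - beta * meta_grad / 5

Note that evaluating the meta-gradient at θ'i, as done here, is the first-order simplification; full MAML also differentiates through the inner gradient step itself.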
Moving on, we learned about HRL, where we decompose large problems into small subproblems in a hierarchy. We also looked into the different methods used in HRL, such as state-space decomposition, state abstraction, and temporal abstraction. Next, we got an overview of MAXQ value function decomposition, where we decompose the value function of a given task into the value functions of its subtasks.
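As a rough illustration of how that decomposition is expressed, here is a small sketch in which the task hierarchy, completion function C, and primitive rewards R are hypothetical placeholders rather than anything from the chapter. The value of a parent task p choosing subtask a in state s is the subtask's own value plus a completion value: Q(p, s, a) = V(a, s) + C(p, s, a).

children = {"root": ["get", "put"], "get": ["pickup", "navigate"]}  # toy hierarchy
C = {}   # completion function C[(parent, state, subtask)], learned in practice
R = {}   # expected reward R[(primitive, state)] of primitive actions

def V(task, state):
    # V(i, s): expected reward for a primitive action, otherwise
    # the best Q value over the task's child subtasks
    if task not in children:
        return R.get((task, state), 0.0)
    return max(Q(task, state, a) for a in children[task])

def Q(task, state, subtask):
    # MAXQ decomposition: Q(p, s, a) = V(a, s) + C(p, s, a)
    return V(subtask, state) + C.get((task, state, subtask), 0.0)

# Example: value of the root task choosing the "get" subtask in state 0
print(Q("root", 0, "get"))

The recursion bottoms out at primitive actions, so the value of the root task is assembled from the values of progressively smaller subtasks.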