Gradient-based meta-reinforcement learning
Gradient-based meta-RL methods improve the policy by continuing training at test time, so that the policy adapts to the environment it is deployed in. The key is that the policy parameters right before the adaptation, θ₀, are set in such a way that the adaptation takes place in just a few shots.
Tip
Gradient-based meta-RL is based on the idea that some initializations of policy parameters enable learning from very little data during adaptation. The meta-training procedure aims to find such an initialization.
A specific approach in this branch is called model-agnostic meta-learning (MAML), a general meta-learning method that can also be applied to RL. MAML trains the agent on a variety of tasks to figure out a good θ₀ that facilitates adaptation and learning from only a few shots, as sketched below.
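To make the inner/outer loop structure of MAML concrete, here is a minimal, self-contained sketch on a toy family of quadratic losses rather than an RL problem; the task distribution, learning rates, and single inner step are illustrative assumptions, not part of MAML's RL formulation or RLlib's implementation.

```python
# Toy sketch of the MAML idea: tasks are 1-D losses L_t(theta) = (theta - c_t)^2,
# and we meta-learn an initialization theta0 from which a single inner gradient
# step adapts well to any sampled task. All quantities here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
inner_lr, meta_lr = 0.1, 0.01
theta0 = 0.0  # meta-learned initialization

for meta_step in range(1000):
    meta_grad = 0.0
    tasks = rng.uniform(-2.0, 2.0, size=8)  # sample task parameters c_t
    for c in tasks:
        # Inner (adaptation) step: one gradient step on the task loss.
        adapted = theta0 - inner_lr * 2.0 * (theta0 - c)
        # Outer gradient: derivative of the post-adaptation loss
        # (adapted - c)^2 with respect to theta0, computed analytically.
        meta_grad += 2.0 * (adapted - c) * (1.0 - 2.0 * inner_lr)
    theta0 -= meta_lr * meta_grad / len(tasks)

print(f"Meta-learned initialization: {theta0:.3f}")
```

The outer update differentiates through the inner gradient step, which is exactly what makes the learned θ₀ a good starting point for few-shot adaptation; in the RL setting the task losses become policy-gradient objectives on trajectories sampled from each task.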
Let's see how you can use RLlib for this.
RLlib implementation
MAML is one of the agents implemented in RLlib and can be easily used...
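As a rough sketch of what this looks like, assuming an RLlib release that still ships the MAML algorithm (it has been removed from more recent versions) and an environment that implements RLlib's task-settable meta-RL interface, training can be launched through Tune. The environment name and the config values below are placeholders, not prescriptions.

```python
# Minimal sketch of launching RLlib's MAML agent via Tune, assuming an RLlib
# version that still includes MAML and a meta-RL environment registered under
# a task-settable interface. Config values are illustrative only.
import ray
from ray import tune

ray.init()
tune.run(
    "MAML",
    stop={"training_iteration": 500},
    config={
        "env": "my_meta_env",         # placeholder: a registered task-settable env
        "framework": "torch",
        "num_workers": 4,
        "inner_adaptation_steps": 1,  # gradient steps taken at adaptation time
        "inner_lr": 0.1,              # learning rate for the inner updates
    },
)
```

The important point is that the environment must expose a distribution of tasks (for example by sampling a new goal or dynamics per episode), since MAML's meta-update averages the post-adaptation performance over those tasks.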