Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning

Abstract

To rapidly learn a new task, it is often essential for agents to explore efficiently, especially when performance matters from the first timestep. One way to learn such behaviour is via meta-learning. Many existing methods, however, rely on dense rewards for meta-training, and can fail catastrophically if the rewards are sparse. Without a suitable reward signal, the need for exploration during meta-training is exacerbated. To address this, we propose HyperX, which uses novel reward bonuses for meta-training to explore in approximate hyper-state space (where hyper-states represent the environment state and the agent's task belief). We show empirically that HyperX meta-learns better task exploration and adapts more successfully to new tasks than existing methods.
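To make the idea of a hyper-state bonus concrete, below is a minimal sketch of one plausible way to reward visits to novel (state, belief) pairs: a random-network-distillation-style novelty signal computed over concatenated hyper-states. The architecture, layer sizes, class names, and the bonus coefficient are illustrative assumptions for this sketch, not the paper's exact design.

```python
import torch
import torch.nn as nn

class HyperStateNoveltyBonus(nn.Module):
    """RND-style novelty bonus over hyper-states (state + task belief).

    A hyper-state concatenates the environment state with the agent's
    approximate task belief. All names and sizes here are assumptions.
    """

    def __init__(self, state_dim: int, belief_dim: int, feature_dim: int = 64):
        super().__init__()
        hyper_dim = state_dim + belief_dim
        # Fixed, randomly initialised target network (never trained).
        self.target = nn.Sequential(
            nn.Linear(hyper_dim, 128), nn.ReLU(), nn.Linear(128, feature_dim)
        )
        for p in self.target.parameters():
            p.requires_grad_(False)
        # Predictor trained to match the target: its error is large on
        # rarely visited hyper-states and shrinks with repeated visits.
        self.predictor = nn.Sequential(
            nn.Linear(hyper_dim, 128), nn.ReLU(), nn.Linear(128, feature_dim)
        )

    def forward(self, state: torch.Tensor, belief: torch.Tensor) -> torch.Tensor:
        hyper_state = torch.cat([state, belief], dim=-1)
        pred_error = (self.predictor(hyper_state) - self.target(hyper_state)).pow(2)
        return pred_error.mean(dim=-1)  # per-sample novelty bonus


if __name__ == "__main__":
    # Usage sketch: add the (detached) bonus to the environment reward
    # during meta-training, and train the predictor on the same batch.
    bonus_fn = HyperStateNoveltyBonus(state_dim=8, belief_dim=16)
    opt = torch.optim.Adam(bonus_fn.predictor.parameters(), lr=1e-4)

    states, beliefs = torch.randn(32, 8), torch.randn(32, 16)
    env_reward = torch.zeros(32)  # e.g. a sparse-reward task
    bonus = bonus_fn(states, beliefs)
    shaped_reward = env_reward + 0.1 * bonus.detach()  # 0.1 is an assumed coefficient

    loss = bonus.mean()  # minimising prediction error decays the bonus over time
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design intuition under these assumptions: because the belief is part of the input, the bonus rewards not only novel environment states but also states reached under novel task beliefs, which encourages the task-exploration behaviour the abstract describes.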

Publication
In Proceedings of the 38th International Conference on Machine Learning (ICML), 2021
Cong Lu
DPhil Student

My research interests span deep reinforcement learning, meta-learning, and Bayesian optimisation.