I am a postdoctoral research fellow at the University of British Columbia and the Vector Institute, supervised by Prof. Jeff Clune. I am interested in developing autonomous agents that are safe, curious, and able to learn in an open-ended manner, particularly by combining recent advances in large language and multimodal models with deep reinforcement learning.

I previously received my PhD from the University of Oxford, supervised by Prof. Michael A. Osborne and Prof. Yee Whye Teh. During my PhD, I focused on offline reinforcement learning, with work on generalization to unseen tasks, uncertainty quantification for offline world models, and learning from pixels. Our most recent work concerned efficiently training RL agents using synthetic data. Please feel free to reach out!

You can find my PhD thesis here.

Recent News