I am a final-year DPhil student supervised by Prof. Michael A. Osborne and Prof. Yee Whye Teh. I am a member of the MLRG and OxCSML groups. My research interests span deep reinforcement learning, generative modeling, and Bayesian optimisation.
I am particularly interested in offline reinforcement learning, with work on generalization to unseen tasks, uncertainty quantification for offline world models, and learning from pixels. I am also extremely excited by recent work training RL agents on synthetic data! Please do feel free to reach out!
- [11/2023] Two papers accepted at NeurIPS workshops!
- [10/2023] Excited to rejoin Waymo Research on the Sim Agents team!
- [9/2023] New work: Synthetic Experience Replay, which arbitrarily upsamples an agent's experiences, accepted at NeurIPS. See you in New Orleans!
- [8/2023] Our work on challenges and opportunities in offline reinforcement learning from visual observations accepted at TMLR.
- [12/2022] My internship project at MSR Cambridge on automated game testing using Go-Explore accepted at IEEE ToG.
- [8/2022] I am interning at Waymo Research supervised by Max Igl.
- [6/2022] Our work on offline RL from pixels won an Outstanding Paper Award at L-DOD.
- [5/2022] Our work on PBT across architectures and hyperparameters was accepted at AutoML.
- [3/2022] I am interning at MSR Cambridge in the Deep RL for Games group.
- [2/2022] Our work on revisiting design choices in offline model-based reinforcement learning received a Spotlight at ICLR!