I am a Cambridge ERA:AI research fellow working on human–AI interaction and decision-making. Previously, I was a PhD candidate with the CDT in Safe and Trusted AI at King’s College London and Imperial College London, and a student researcher at Google DeepMind, where I was part of the Gemini Robotics team.
Decision-Making. My current research focuses on how AI systems can help people make better decisions, especially in settings where other AIs may try to manipulate users into harmful choices. In particular, I study how an advisory AI can detect and counter manipulative or adversarial influence.
Explainability. During my PhD, I investigated how robots can learn to proactively explain their decisions and assist people in collaborative scenarios. I trained machine learning models to predict when explanations are needed and what they should be about by interpreting the user’s non-verbal behaviour and contextual task cues.
Cognitive Linguistics. I love interdisciplinary work, learning about different perspectives, and combining them to get a rich understanding of a phenomenon. Beyond my work in human–AI interaction, I study conceptual metaphors and image schemas in text corpora using tools from natural language processing and theories from cognitive linguistics.
Download my CV.
Postdoctoral Researcher, Incoming
University of Oxford
Technical AI Safety Fellow, 2026
Cambridge ERA:AI
Student Researcher, 2024-2025
Google DeepMind
PhD in Safe and Trusted AI, 2021-2026
King's College London, Imperial College London
MSc in Cognitive Science, 2018-2021
Joint Degree with University of Vienna, University of Ljubljana, Comenius University Bratislava, Eötvös Loránd University Budapest
BSc in Computer Science, 2014-2017
University of Paderborn
The research topics I have investigated. Click on one for further information and the corresponding publications.