Lennart Wachowiak

Technical AI Safety Fellow at Cambridge ERA:AI

PhD, Centre for Doctoral Training in Safe and Trusted Artificial Intelligence

King's College London and Imperial College London

Biography

I am a Cambridge ERA:AI research fellow working on human–AI interaction and decision-making. Previously, I was a PhD candidate with the CDT in Safe and Trusted AI at King’s College London and Imperial College London, and a student researcher at Google DeepMind, where I was part of the Gemini Robotics team.

Decision-Making. My current research focuses on how AI systems can help people make better decisions, especially in settings where other AIs may try to manipulate users into harmful choices. In particular, I study how an advisory AI can detect and counter manipulative or adversarial influence.

Explainability. During my PhD, I investigated how robots can learn to proactively explain their decisions and assist people in collaborative scenarios. I trained machine learning models to predict when explanations are needed and what they should be about by interpreting the user’s non-verbal behaviour and contextual task cues.

Cognitive Linguistics. I love interdisciplinary work, learning about different perspectives, and combining them to get a rich understanding of a phenomenon. Beyond my work in human–AI interaction, I study conceptual metaphors and image schemas in text corpora using tools from natural language processing and theories from cognitive linguistics.

Download my CV.

Work
  • Postdoctoral Researcher, Incoming

    University of Oxford

  • Technical AI Safety Fellow, 2026

    Cambridge ERA:AI

  • Student Researcher, 2024-2025

    Google DeepMind

Education
  • PhD in Safe and Trusted AI, 2021-2026

    King's College London, Imperial College London

  • MSc in Cognitive Science, 2018-2021

Joint degree: University of Vienna, University of Ljubljana, Comenius University Bratislava, Eötvös Loránd University Budapest

  • BSc in Computer Science, 2014-2017

    University of Paderborn

Publications

(2025). Predicting When and What to Explain From Multimodal Eye Tracking and Task Signals. IEEE Transactions on Affective Computing.


(2024). A Time Series Classification Pipeline for Detecting Interaction Ruptures in HRI Based on User Reactions. ICMI.


(2024). A Taxonomy of Explanation Types and Need Indicators in Human–Agent Collaborations. International Journal of Social Robotics.


(2024). When Do People Want an Explanation from a Robot?. HRI.


(2023). The Image Schema VERTICALITY: Definitions and Annotation Guidelines. Image Schema Day.


(2023). A Survey of Evaluation Methods and Metrics for Explanations in Human–Robot Interaction (HRI). ICRA Explainable Robotics Workshop.


(2022). Analysing Eye Gaze Patterns during Confusion and Errors in Human–Agent Collaborations. RO-MAN.


(2022). Towards Autonomous Collaborative Robots that Adapt and Explain. ICRA Workshop on Prediction and Anticipation Reasoning in HRI.


(2022). Extracting Terminological Concept Systems from Natural Language Text. Cognitive Technologies.


(2021). CogALex 2.0: Impact of Data Quality on Lexical-Semantic Relation Prediction. NeurIPS DCAI Workshop.


(2021). Multilingual Extraction of Terminological Concept Systems. Workshop on Deep Learning and Neural Approaches for Linguistic Data.


(2021). Towards Learning Terminological Concept Systems from Multilingual Natural Language Text. Conference on Language, Data and Knowledge (awarded best student paper).


(2020). CogALex-VI Shared Task: Transrelation - A Robust Multilingual Language Model for Multilingual Relation Identification. Workshop on the Cognitive Aspects of the Lexicon (1st place in the CogALex shared task).


Contact