When, What, and How to Explain?

Many mechanisms to explain robot behavior are developed in a vacuum, without considering the context of a human–robot interaction. When collaborating with a person, however, additional questions arise concerning explainability: When does an explanation become necessary, and which robot modules should it concern?

To answer these questions, I conduct studies establishing when users want explanations from a robot, resulting in a set of scenarios in which explanations are desired, such as robot errors, inabilities, and decision uncertainty. Building on these insights, I develop neurosymbolic agent architectures that leverage foundation models and symbolic planners to provide the right explanation at the right time. Additionally, I explore the use of non-verbal user cues, such as eye gaze, to detect confusion and its source, thereby enabling robots to generate user-centered explanations proactively.
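As a rough illustration of the kind of decision logic this involves, the sketch below maps an interaction state to an explanation trigger: execution errors, inabilities, low decision confidence, or gaze-based signs of user confusion. All class names, thresholds, and the gaze-as-confusion heuristic are illustrative assumptions for this sketch, not the actual architecture described above.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ExplanationTrigger(Enum):
    """Situations in which users tend to want an explanation."""
    ERROR = auto()           # the robot failed an action
    INABILITY = auto()       # the robot cannot perform a requested action
    UNCERTAINTY = auto()     # the robot is unsure which decision to take
    USER_CONFUSION = auto()  # non-verbal cues (e.g., gaze) suggest confusion


@dataclass
class InteractionState:
    """Minimal snapshot of the interaction context (all fields are illustrative)."""
    action_failed: bool
    action_feasible: bool
    decision_confidence: float     # 0..1, e.g., from a planner or foundation model
    gaze_on_robot_duration: float  # seconds of sustained gaze at the robot


def detect_trigger(state: InteractionState,
                   confidence_threshold: float = 0.5,
                   gaze_threshold: float = 2.0) -> Optional[ExplanationTrigger]:
    """Return the most salient reason to explain, or None if no explanation is needed."""
    if state.action_failed:
        return ExplanationTrigger.ERROR
    if not state.action_feasible:
        return ExplanationTrigger.INABILITY
    if state.decision_confidence < confidence_threshold:
        return ExplanationTrigger.UNCERTAINTY
    if state.gaze_on_robot_duration > gaze_threshold:
        # Sustained gaze at the robot serves here as a crude proxy for confusion.
        return ExplanationTrigger.USER_CONFUSION
    return None


def explain(trigger: ExplanationTrigger, state: InteractionState) -> str:
    """Produce a short, trigger-specific explanation (placeholder templates)."""
    templates = {
        ExplanationTrigger.ERROR: "I failed to complete the last action; here is what went wrong.",
        ExplanationTrigger.INABILITY: "I cannot perform that action; here is why.",
        ExplanationTrigger.UNCERTAINTY: (
            f"I am only {state.decision_confidence:.0%} confident in my next step; "
            "here are the options I am weighing."
        ),
        ExplanationTrigger.USER_CONFUSION: "You seem unsure about what I am doing; let me explain my plan.",
    }
    return templates[trigger]


# Example: the robot is uncertain about its next decision, so it explains proactively.
state = InteractionState(action_failed=False, action_feasible=True,
                         decision_confidence=0.3, gaze_on_robot_duration=0.8)
trigger = detect_trigger(state)
if trigger is not None:
    print(explain(trigger, state))
```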

This approach aims to make robot explanations both relevant and timely, supporting trust calibration and team performance.

Lennart Wachowiak
PhD Student at the Centre for Doctoral Training in Safe and Trusted Artificial Intelligence

I am currently researching when and what a robot should explain by interpreting the interaction context in combination with the collaborator's social cues.