When, What, and How to Explain?

Although more and more mechanisms are being developed to explain robotic behavior and the outputs of different submodules, these mechanisms are usually developed in a vacuum, without considering the context of a human–agent interaction. When collaborating with a person, however, additional questions about explainability arise: When do explanations become necessary, and what should they be about?

To answer these questions, we conduct studies establishing when users want a robot to give an explanation, resulting in a set of scenarios such as the robot making errors, being unable to fulfill a request, or being uncertain about the correct action plan. We use these findings to develop agent architectures suited for selecting the right explanation at the right time.
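
The following minimal sketch illustrates what such a selection step could look like, assuming the trigger scenarios named above (errors, inability to fulfill a request, uncertainty about the action plan). All names, thresholds, and the priority ordering are illustrative assumptions, not the architecture developed in our studies.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ExplanationTrigger(Enum):
    """Trigger scenarios in which users were found to want an explanation."""
    ERROR = auto()        # the robot made an error
    INABILITY = auto()    # the robot cannot fulfill a request
    UNCERTAINTY = auto()  # the robot is uncertain about its action plan
    NONE = auto()         # no explanation needed


@dataclass
class InteractionState:
    """Minimal summary of the current interaction (fields are hypothetical)."""
    action_failed: bool
    request_feasible: bool
    plan_confidence: float  # in [0, 1]


def select_trigger(state: InteractionState,
                   confidence_threshold: float = 0.6) -> ExplanationTrigger:
    """Map the interaction state to the applicable explanation trigger,
    ordered by how disruptive the situation is for the collaborator."""
    if state.action_failed:
        return ExplanationTrigger.ERROR
    if not state.request_feasible:
        return ExplanationTrigger.INABILITY
    if state.plan_confidence < confidence_threshold:
        return ExplanationTrigger.UNCERTAINTY
    return ExplanationTrigger.NONE


if __name__ == "__main__":
    state = InteractionState(action_failed=False,
                             request_feasible=True,
                             plan_confidence=0.4)
    print(select_trigger(state))  # ExplanationTrigger.UNCERTAINTY
```

In a full architecture, the selected trigger would then be passed to a generation component that chooses the content of the explanation for that scenario.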

Furthermore, we want to explore how we can leverage the user’s non-verbal cues during collaboration to generate timely, user-centered explanations. Gaze signals, for instance, can hint at user confusion and its source. Combined with task-related data, such affective cues can help identify when the world models of user and agent diverge and communication becomes imperative.
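
As a rough illustration of this idea, the sketch below combines a heuristic gaze-based confusion estimate with task-related data to flag a likely divergence between the user's and the agent's world models. The gaze features, weights, and threshold are assumptions made up for this example; they do not reflect a specific model from our work.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GazeCues:
    """Simplified gaze features (illustrative; real features would come
    from an eye tracker or head-pose estimation)."""
    fixation_on_robot_s: float     # time spent fixating the robot
    gaze_switches_per_s: float     # rapid switching can indicate confusion
    fixated_object: Optional[str]  # object the user currently looks at


@dataclass
class TaskState:
    """Task-related data the agent already tracks."""
    expected_focus_object: Optional[str]  # what the agent believes is relevant
    last_action: str


def confusion_score(gaze: GazeCues) -> float:
    """Heuristic confusion estimate in [0, 1] from gaze dynamics alone."""
    return min(1.0, 0.2 * gaze.gaze_switches_per_s
               + 0.1 * gaze.fixation_on_robot_s)


def explanation_needed(gaze: GazeCues, task: TaskState,
                       threshold: float = 0.5) -> bool:
    """Flag a likely world-model divergence: the user seems confused and
    attends to something other than what the agent expects to be relevant."""
    diverging_focus = (gaze.fixated_object is not None
                       and gaze.fixated_object != task.expected_focus_object)
    return confusion_score(gaze) >= threshold and diverging_focus


if __name__ == "__main__":
    gaze = GazeCues(fixation_on_robot_s=2.0, gaze_switches_per_s=3.0,
                    fixated_object="red_cup")
    task = TaskState(expected_focus_object="blue_box",
                     last_action="pick blue_box")
    print(explanation_needed(gaze, task))  # True
```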

Lennart Wachowiak
PhD Student at the Centre for Doctoral Training in Safe and Trusted Artificial Intelligence

I am currently researching when and what a robot should explain by interpreting the interaction context in combination with the collaborator’s social cues.