Publications

Conversational Error Analysis in Human-Agent Interaction

ACM International Conference on Intelligent Virtual Agents (IVA)

Publication date: October 20, 2020

Deepali Aneja, Daniel McDuff, Mary Czerwinski


Conversational Agents (CAs) present many opportunities for changing how we interact with information and computer systems in a more natural, accessible way. Building on research in machine learning and HCI, it is now possible to design and test multi-turn CAs that are capable of extended interactions. However, there are many ways in which these CAs can "fail" and fall short of human expectations. We systematically analyzed how five different types of conversational errors impacted perceptions of an embodied CA. Not all errors negatively impacted perceptions of the agent. Repetitions by the agent and clarifications by the human significantly decreased the perceived intelligence and anthropomorphism of the agent. Turn-taking errors significantly decreased the likability of the agent. However, coherence errors significantly increased likability, and these errors were also associated with positive valence in facial expressions, suggesting that users found them amusing. We believe this work is the first to identify that turn-taking, repetition, clarification, and coherence errors directly affect users' acceptance of an embodied CA, and that designers of such systems should take note of them during dialog configuration. We release the Agent Conversational Error (ACE) dataset, a set of transcripts and error annotations of human-agent conversations. The dataset is available on GitHub: https://github.com/deepalianeja/ACE-dataset
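As an illustrative sketch only, the snippet below shows one way the released transcripts and error annotations might be loaded and tallied by error type. The file names (transcripts.csv, error_annotations.csv), column names, and join keys are assumptions for illustration, not documented parts of the ACE dataset; consult the repository README for the actual layout.

```python
# Minimal sketch for exploring the ACE dataset.
# NOTE: all file names and column names below are hypothetical;
# they are placeholders pending the repository's documented schema.
import pandas as pd

# Assumed layout: one row per utterance, one row per annotated error.
transcripts = pd.read_csv("ACE-dataset/transcripts.csv")        # assumed path
annotations = pd.read_csv("ACE-dataset/error_annotations.csv")  # assumed path

# Tally the annotated error types studied in the paper
# (e.g., repetition, clarification, turn-taking, coherence).
print(annotations["error_type"].value_counts())

# Attach each annotation to its utterance via an assumed shared key.
merged = annotations.merge(transcripts, on=["conversation_id", "turn_id"])
print(merged.head())
```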
