Artificial conversationalists learn to handle human interruptions
Johns Hopkins University researchers have created a groundbreaking system that could make social robots more effective at handling user interruptions in real time. This innovation, presented at the Robotics: Science and Systems conference held in Los Angeles from June 21 to 25, aims to improve interactions between humans and robots, particularly in settings like healthcare and education where fluid conversation flow is crucial [1].
The system, which uses large language models (LLMs), is designed to predict a human interrupter's intent and adapt the robot's conversational strategy accordingly. By analyzing the nature and context of an interruption as it occurs, the robot can classify the type of interruption and select an appropriate conversational behavior to manage it seamlessly [1][3].
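The coverage does not reproduce the paper's prompts or implementation, but the classification step can be sketched roughly as follows. In this Python sketch, the label set, prompt wording, OpenAI-style client, and model name are all illustrative assumptions rather than the authors' actual code.

```python
# Minimal sketch of LLM-based classification of interruption intent.
# Label set, prompt wording, client, and model name are assumptions for illustration.
from openai import OpenAI

INTENT_LABELS = ["agreeing", "assisting", "clarification", "disruptive"]

client = OpenAI()  # assumes an OpenAI-style chat API; any LLM backend could stand in


def classify_interruption(robot_utterance: str, user_interruption: str) -> str:
    """Ask the LLM to label the intent behind a user's interruption."""
    prompt = (
        "A social robot was saying:\n"
        f'  "{robot_utterance}"\n'
        "The user interrupted with:\n"
        f'  "{user_interruption}"\n'
        f"Classify the interruption's intent as one of: {', '.join(INTENT_LABELS)}. "
        "Reply with the label only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; the study's model is not named here
        messages=[{"role": "user", "content": prompt}],
    )
    label = response.choices[0].message.content.strip().lower()
    # Fall back to the most cautious category if the model strays from the label set.
    return label if label in INTENT_LABELS else "disruptive"
```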
For instance, when an interruption signals agreement or offers assistance, the robot acknowledges it with a nod and resumes speaking. For clarification interruptions, it supplies the requested clarification before continuing [1]. For more disruptive interruptions, it can either hold the floor, summarizing its remaining points before yielding to the human user, or stop talking immediately [1]. According to the researchers, the system correctly identified the underlying intent behind 88.78% of interruptions and handled 93.69% of them successfully [1].
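Once the intent is labeled, choosing a response can be as simple as a lookup in a policy table mirroring the strategies described above. Again, this Python sketch is illustrative; the behavior names and the fallback rule are assumptions, not the authors' implementation.

```python
# Sketch of mapping a classified interruption intent to a conversational behavior.
# Behavior names and the policy table are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Behavior:
    action: str       # internal action identifier
    description: str  # what the robot does conversationally


BEHAVIOR_POLICY = {
    "agreeing":      Behavior("acknowledge_and_resume", "Nod, acknowledge, and keep speaking."),
    "assisting":     Behavior("acknowledge_and_resume", "Accept the help, then continue."),
    "clarification": Behavior("clarify_then_resume", "Supply the clarification, then pick up where it left off."),
    "disruptive":    Behavior("hold_or_yield", "Summarize the remaining points before yielding, or stop immediately."),
}


def select_behavior(intent: str) -> Behavior:
    """Choose a conversational behavior for the classified interruption intent."""
    # Unrecognized labels fall back to the most conservative handling.
    return BEHAVIOR_POLICY.get(intent, BEHAVIOR_POLICY["disruptive"])
```

In a full pipeline, the output of classify_interruption from the previous sketch would feed directly into select_behavior.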
In a user study, participants interacted with a social robot equipped with the interruption-handling system. They did not always appreciate the robot holding the floor to manage interruptions, perceiving its role as assistive rather than collaborative [1].
The team also recommends exploring non-verbal interruptions, investigating interruption handling in longer or multi-session interactions with multiple users, and aligning a robot's role and task context with its interruption-handling behavior [1].
The key innovation is the use of LLMs to predict an interrupter's intent, which guides the robot's strategy for managing the interruption and yields more human-like, context-appropriate responses for smoother interactions between humans and robots [1][3]. This research was supported by the National Science Foundation.
In summary, the new system developed by Johns Hopkins University researchers could significantly improve the way social robots handle interruptions, leading to more natural and effective interactions.