Robots aren’t perfect, and on occasion they are faced with a task they cannot complete. To help robots communicate their inability, researchers at Cornell University and the University of California at Berkeley developed a method that automatically generates clear, expressive robotic motions.
“When interacting with robots, it is important for humans to have accurate expectations of robot capabilities,” says Minae Kwon, one of the researchers on the study, via TechXplore. “One way to set accurate expectations is to understand what robots are incapable of doing and why.”
According to the researchers, many robot failures are uninformative: the machine may stop abruptly, for example, or refuse to start the task at all.
“We wanted to find a way in which robots could more intelligently communicate their incapabilities (i.e., what they are trying to do and why it will fail) even before a failure happens,” says Kwon via TechXplore. “Specifically, we focused on incapabilities related to motion planning tasks (e.g. lifting a cup, pushing a door), as we wanted to solve this problem using expressive motion.”
Kwon, along with colleague Sandy Huang and their advisor Anca Dragan, developed an approach that generates an attempt motion. This motion conveys what the robot is trying to accomplish, as well as why the attempt will fail.
As seen in the image below, the robot is trying to lift the cup from position Xf to position Xd. In this example, the robot communicates the attempt by moving its elbow; however, it fails to execute the full motion because the cup is too heavy.
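To picture what an attempt motion might look like, here is a toy sketch in Python. It is not the researchers' actual method (which relies on motion planning for a real arm); it simply generates a one-dimensional lift trajectory that rises toward the goal, stalls at a hypothetical lifting limit, and eases back down, signaling both the intended goal and the point of failure. The `limit` and `retreat` parameters are assumptions for illustration only.

```python
import numpy as np

def attempt_motion(start, goal, limit, steps=20, retreat=0.3):
    """Toy 1-D 'attempt' trajectory: rise toward the goal, stall at a
    hypothetical lifting limit, then sag back to signal failure."""
    peak = min(goal, limit)                  # how far the robot can actually get
    up = np.linspace(start, peak, steps)     # approach phase: head toward the goal
    sag = np.linspace(peak, peak - retreat * (peak - start), steps // 2)
    return np.concatenate([up, sag])         # stall and ease back down

# The robot aims for height 1.0 but can only manage 0.4 before the
# (assumed) payload limit stops it.
traj = attempt_motion(start=0.0, goal=1.0, limit=0.4)
```

An observer watching this trajectory sees the robot repeatedly reach partway toward the cup's target height and give up, which hints at both what it was trying to do and why it stopped.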
“We think it’s important that people were not only able to recognize the robot’s intended goal and the cause of incapability more clearly compared to other approaches, but that our motions also created a positive image of the robot,” says Kwon via TechXplore. “For instance, people were more willing to help the robot and collaborate with it. We hope that these positive implications for human-robot collaboration will help to improve the way we interact with robots.”
Next, the team hopes to continue advancing human-robot collaboration by generating motions for a broader range of task failures, such as perception failures.