Researchers want to teach robots to recognize human intentions so that they can interact with people more naturally. This ability is particularly important for robots that are meant to assist older people in everyday life in the future.
A woman passes a note or another item to a man, who then takes it. This everyday action may seem very simple to humans – but not to robots. Numerous mechanisms run in parallel here: Where is the other person currently looking? Is the person receptive at the moment? Should the person take the item, or are the two merely making eye contact? This and much other information is processed by a human almost instantly and nearly automatically. A person perceives the signals of their counterpart and behaves accordingly.
Sebastian Robert and his colleagues at the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation are currently working on teaching robots this interpersonal sensitivity, so that they can interact better with people in everyday situations.
A robot needs basic human interaction skills if it is to assist people usefully in the home or take on a variety of tasks in hospitals. For example, it must be able to safely receive or hand over items. “It’s not enough for a robot to simply perceive an object through a camera,” explains Robert. “In order to behave in accordance with expectations, i.e. in an interpersonally compatible way, a robot must also recognize what its human counterpart is currently paying attention to and understand what intentions are being pursued.”
The project, called ASARob, deals with exactly this topic. The researchers are trying to extend the software of mobile robots so that they can capture a person’s state of attention and, if necessary, react appropriately. For this purpose, they use the mobile robot Care-O-bot 4, developed by the Stuttgart-based Fraunhofer Institute for Manufacturing Engineering and Automation IPA and Unity Robotics GmbH. This robot has been specially designed to interact with and support people in everyday situations and, thanks to its modularity, can be quickly adapted to different tasks.
Robert goes on to explain: “By ‘attention’ we mean an allocation of cognitive resources to certain environmental perceptions – that is, a mental state that reveals itself through visual cues such as a person’s gaze direction, head rotation and posture.” Supplementary spoken utterances can then provide additional contextual information. Based on all this information, ASARob should in the future be able to assess a person’s state of attention.
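To make the idea concrete, the fusion of such cues can be pictured as a weighted score: gaze direction, head rotation and body posture each contribute to an estimate of how attentive a person currently is toward the robot, and a recent spoken utterance raises it further. The following is a minimal illustrative sketch, not the project’s actual software; the cue names, weights and thresholds are assumptions chosen for demonstration only.

```python
def estimate_attention(gaze_deg, head_deg, body_facing_robot, spoke_recently):
    """Illustrative fusion of attention cues into a score in [0, 1].

    gaze_deg / head_deg: angular deviation (degrees) between the person's
    gaze / head orientation and the direction toward the robot (0 = directly
    at the robot). body_facing_robot and spoke_recently are booleans.
    All weights below are hypothetical, not taken from ASARob.
    """
    # Each visual cue decays linearly from 1 (aimed at robot) to 0 (90 degrees off).
    gaze_cue = max(0.0, 1.0 - abs(gaze_deg) / 90.0)
    head_cue = max(0.0, 1.0 - abs(head_deg) / 90.0)
    posture_cue = 1.0 if body_facing_robot else 0.3

    # Weighted combination: gaze is the strongest indicator, posture the weakest.
    score = 0.5 * gaze_cue + 0.3 * head_cue + 0.2 * posture_cue

    # A recent utterance directed at the robot adds contextual evidence.
    if spoke_recently:
        score = min(1.0, score + 0.2)
    return score


def should_react(score, threshold=0.6):
    """The robot initiates interaction only above an attention threshold."""
    return score >= threshold
```

For example, a person looking straight at the robot with their body turned toward it would score near 1.0 and trigger a reaction, while someone walking past with averted gaze would stay well below the threshold.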
The next step, also part of the project, is the robot’s behavior resulting from this assessment. In the end, the robot should be able to interact intuitively with people and support them in everyday life. Besides gestures, this also includes spoken communication in the form of complex dialogues.