Researchers Develop Framework to Give Robots Social Skills

Researchers at the Massachusetts Institute of Technology (MIT) have developed a control framework to give robots social skills. The framework enables machines to understand what it means to help or hinder each other, as well as to learn to perform social behaviors on their own. 

In the simulated environment, a robot watches its companion, guesses what task the companion wants to accomplish, and then helps or hinders the other robot based on its own goals.

The researchers also demonstrated that their model creates realistic and predictable social interactions. When human viewers were shown videos of the simulated robots interacting with one another, they agreed with the model about which social behavior was occurring.

By enabling robots to exhibit social skills, the researchers aim to achieve smoother, more positive human-robot interactions. The new model could also enable scientists to measure social interactions quantitatively.

Boris Katz is the principal research scientist and head of the InfoLab Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL), as well as a member of the Center for Brains, Minds, and Machines (CBMM). 

“Robots will live in our world soon enough and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening. This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt at understanding what it means for humans and machines to interact socially,” says Katz.

The research team also included co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL PhD student; Tianmin Shu, a postdoc in the Department of Brain and Cognitive Sciences; and senior author Andrei Barbu, a research scientist at CSAIL.

Studying Social Interactions

To study social interaction, the researchers created a simulated environment in which robots pursue physical and social goals as they navigate a two-dimensional grid.
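To make the setup concrete, here is a rough Python sketch of the kind of two-dimensional grid world described above; the class name, grid size, and move set are illustrative assumptions, not the researchers' actual code.

    class GridWorld:
        """Toy 2D grid in which each named agent has a position and a physical goal cell."""

        MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0), "stay": (0, 0)}

        def __init__(self, size=10):
            self.size = size
            self.positions = {}   # agent name -> (x, y)
            self.goals = {}       # agent name -> (x, y) physical goal cell

        def add_agent(self, name, pos, goal):
            self.positions[name] = pos
            self.goals[name] = goal

        def step(self, name, move):
            dx, dy = self.MOVES[move]
            x, y = self.positions[name]
            # Clamp to the grid so an agent cannot step off the world.
            self.positions[name] = (min(max(x + dx, 0), self.size - 1),
                                    min(max(y + dy, 0), self.size - 1))

        def distance_to_goal(self, name):
            # Manhattan distance from the agent to its own physical goal.
            (x, y), (gx, gy) = self.positions[name], self.goals[name]
            return abs(x - gx) + abs(y - gy)

    # Example usage: one agent heading toward a goal cell.
    world = GridWorld(size=8)
    world.add_agent("helper", pos=(0, 0), goal=(7, 7))
    world.step("helper", "right")

A physical reward can then be defined from quantities such as the distance to an agent's goal, which is the idea the following paragraphs build on.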

The robots were given both physical and social goals. A physical goal relates to the environment itself, while a social goal involves a robot guessing what another is trying to do and then basing its own actions on that prediction.

The model is used to specify what a robot’s physical goals are, what its social goals are, and how much emphasis should be placed on one over the other. The robot is rewarded for actions that bring it closer to accomplishing its goals. If the robot is trying to assist its companion, it adjusts its reward to match the other’s; if it is trying to hinder the other, it adjusts its reward to be the opposite. A planning algorithm decides which actions the robot should take, using this reward to guide it toward carrying out its physical and social goals.
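A minimal sketch of that reward shaping, under the assumption that the social term is simply a weighted copy (or negation) of the companion's reward, might look like this; the function name and the social_weight parameter are illustrative, not the paper's formulation.

    def combined_reward(own_reward, other_reward, social_stance, social_weight=1.0):
        """Blend a robot's own (physical) reward with a social term.

        social_stance: +1 to help the companion, -1 to hinder it, 0 to ignore it.
        """
        # Helping adopts the companion's reward; hindering opposes it.
        return own_reward + social_weight * social_stance * other_reward

    # Example: a helper gains when its companion does well, a hinderer loses.
    print(combined_reward(own_reward=0.2, other_reward=0.5, social_stance=+1))  # 0.7
    print(combined_reward(own_reward=0.2, other_reward=0.5, social_stance=-1))  # -0.3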

“We have opened a new mathematical framework for how you model social interaction between two agents. If you are a robot, and you want to go to location X, and I am another robot and I see that you are trying to go to location X, I can cooperate by helping you get to location X faster. That might mean moving X closer to you, finding another better X, or taking whatever action you had to take at X. Our formulation allows the plan to discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically,” says Tejwani.

The researchers use the mathematical framework to define three types of robots. A level 0 robot has only physical goals, while a level 1 robot has both physical and social goals but assumes all other robots have only physical goals; this means a level 1 robot bases its actions, such as helping or hindering, on the physical goals of others. A level 2 robot assumes others have social as well as physical goals, so it can take more sophisticated actions.
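The hierarchy can be pictured with a small data structure like the one below; the class and field names are illustrative assumptions rather than the paper's formal definitions.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Agent:
        physical_goal: tuple                      # e.g. a target cell on the grid
        level: int = 0                            # 0, 1, or 2
        stance: int = 0                           # +1 help, -1 hinder, 0 neutral
        model_of_other: Optional["Agent"] = None  # how this agent models its companion

    # Level 0: only a physical goal, no model of the other robot.
    level0 = Agent(physical_goal=(3, 4))

    # Level 1: has a social stance, but assumes its companion is a level-0 agent
    # that cares only about its own physical goal.
    level1 = Agent(physical_goal=(7, 1), level=1, stance=+1, model_of_other=level0)

    # Level 2: assumes the companion may itself have social goals (level 1),
    # which is what allows more sophisticated helping and hindering behavior.
    level2 = Agent(physical_goal=(0, 0), level=2, stance=-1, model_of_other=level1)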

Testing the Model

To test the model, the researchers compared its output with human judgments and found that the model agreed with what viewers thought about the social interactions occurring in each frame.

“We have this long-term interest, both to build computational models for robots, but also to dig deeper into the human aspects of this. We want to find out what features from these videos humans are using to understand social interactions. Can we make an objective test for your ability to recognize social interactions? Maybe there is a way to teach people to recognize these social interactions and improve their abilities. We are a long way from this, but even just being able to measure social interactions effectively is a big step forward,” Barbu says.

The team is now working on developing a system with 3D agents in an environment that allows more types of interactions. They also want to modify the model to include environments where actions can fail, and they plan to incorporate a neural network-based robot planner into the model. Lastly, they plan to run an experiment to collect data on the features humans use to determine whether two robots are engaging in a social interaction.
