MIT taught robots to help and hinder each other

New machine learning system helps robots understand and perform social actions

In a simulated environment, a robot observes a companion, infers its goal, and then helps or hinders the other robot based on that goal. The researchers showed that the model produced realistic behavior: human observers generally agreed with the model about which type of behavior was being demonstrated. Details of the work are below.


MIT researchers have built social interactions into a robotics framework, allowing simulated machines to understand what it means to help or hinder one another and to learn to perform social actions on their own.

Incorporating social skills into robots could lead to smoother, more positive interactions. For example, a robot could use these capabilities to help create a better living environment for elderly people. The new model could also allow scientists to measure social interactions quantitatively, which may help psychologists study autism or analyze the effects of antidepressants.

“Robots will be living in our world soon, and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening.

This is very early work, but it seems to me that this is the first very serious attempt to understand what social interaction between people and machines means,” says Boris Katz, head of the InfoLab Group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds and Machines (CBMM). The results will be presented at the Conference on Robot Learning in November.

Social simulation

To study social interactions, the researchers created a simulated environment in which robots pursue physical and social goals as they move around a two-dimensional grid. A physical goal relates to the environment itself.

For example, a robot's physical goal might be to navigate to a tree at a certain point on the grid. A social goal involves guessing what another robot is trying to do and then acting on that estimate; for example, one robot can help another water the tree.

The researchers use their model to specify a robot's physical goals, its social goals, and how much weight to place on each.

The robot is rewarded for actions that bring it closer to achieving its goals.

  • If a robot is trying to help its companion, it adjusts its own reward to match the other robot's reward.

  • If it is trying to hinder, it adjusts its reward in the opposite direction.

The planner, that is, the algorithm that decides which actions the robot should take, uses this continually updated reward to guide the robot toward a combination of physical and social goals.
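The reward adjustment described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual formulation: the function name `social_reward`, the weight `w_social`, and the simple additive combination are assumptions made for clarity.

```python
def social_reward(own_reward, other_reward, w_social, helping):
    """Combine a robot's physical reward with a social term.

    helping=True  -> the other robot's reward is added (helping)
    helping=False -> the other robot's reward is subtracted (hindering)
    `w_social` controls how much weight the social goal gets
    relative to the physical one.
    """
    sign = 1.0 if helping else -1.0
    return own_reward + w_social * sign * other_reward

# A helping robot values outcomes where its companion also does well:
assert social_reward(1.0, 2.0, 0.5, helping=True) == 2.0
# A hindering robot gains when the companion's reward is negated:
assert social_reward(1.0, 2.0, 0.5, helping=False) == 0.0
```

A planner that maximizes this combined quantity will naturally trade off its own physical goal against helping or hindering its companion, depending on the sign and the weight.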

“We have opened up a new mathematical framework for modeling social interaction between two agents. If you want to get to location X, and I am another robot and I see that you are trying to get there, I can help you: I can bring X closer to you, find a better location for X, or do whatever needs to be done at X for you.

Our formulation lets the planner discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically,” Tejwani says.


The combination of physical and social goals is important for creating realistic interactions, says Barbu: people who help one another have limits on how far they will go. For example, a rational person is unlikely to simply hand their wallet to a complete stranger.

The researchers' mathematical model distinguishes three types of robots:

  1. A level 0 robot has only physical goals and cannot reason socially.

  2. A level 1 robot has physical and social goals but assumes that all other robots have only physical goals. It can act on the physical goals of other robots, for example helping or hindering them.

  3. A level 2 robot assumes that other robots also have social and physical goals; these robots can perform more complex actions, such as teaming up to help together.
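The level hierarchy above can be sketched as nested agent models: a level 1 robot keeps an internal model of its companion as a level 0 robot and folds that model's reward into its own. The class names, the one-dimensional state, and the `attitude` sign convention below are illustrative assumptions, not the paper's code.

```python
class Level0Robot:
    """Pursues only its own physical goal; no social reasoning."""

    def __init__(self, goal):
        self.goal = goal  # target position on a 1-D line, for simplicity

    def reward(self, state):
        # Reward grows (toward zero) as the robot nears its physical goal.
        return -abs(state - self.goal)


class Level1Robot(Level0Robot):
    """Has a social goal: models the other robot as level 0 and adds
    (attitude=+1, helping) or subtracts (attitude=-1, hindering)
    that robot's reward from its own."""

    def __init__(self, goal, other, attitude):
        super().__init__(goal)
        self.other = other        # internal level 0 model of the companion
        self.attitude = attitude  # +1.0 = help, -1.0 = hinder

    def reward(self, state):
        return super().reward(state) + self.attitude * self.other.reward(state)


# Both robots want to reach position 5; the level 1 robot is helping:
companion = Level0Robot(goal=5)
helper = Level1Robot(goal=5, other=companion, attitude=+1.0)
assert helper.reward(5) == 0.0   # both at the goal: no penalty
assert helper.reward(3) == -4.0  # own penalty -2 plus companion's -2
```

A level 2 robot would extend this one step further, modeling its companions as level 1 robots that themselves carry social goals, which is what enables joint behaviors like teaming up.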

Model efficiency

To test how their model compares to human concepts of social behavior, the scientists created 98 different scenarios with robots at levels 0, 1, and 2.

Twelve people watched 196 videos of the robots interacting and were then asked to estimate the physical and social goals of those robots.

In most cases, the model agreed with what the viewers thought about the social interactions taking place in each frame.

“We have a long-standing interest, both in building computational models for robots and in digging deeper into the human aspects [of social behavior]. We want to find out what features of these videos people use to understand social interactions.

Can we design an objective test of the ability to recognize social interactions? Perhaps there is a way to teach people to recognize these social interactions and improve their abilities.

We are still very far from this, but even being able to measure social interactions effectively is already a big step forward,” says Barbu.

Developing flexibility

The researchers are working on a system with 3D agents in an environment that allows, for example, manipulation of household objects. They also plan to include environments in which actions can fail.

The researchers also want to incorporate a neural-network-based planner that learns from experience and runs faster. Finally, they hope to run an experiment to collect data about the features humans use to determine whether two robots are engaged in a social interaction.

“Hopefully we will have a benchmark that allows researchers to work on these social interactions and that will inspire the kinds of scientific and engineering advances we have seen in other areas, such as object and action recognition,” says Barbu.

“I think this is a wonderful application of structured reasoning to a complex yet pressing challenge,” says Tomer Ullman, associate professor in the Department of Psychology at Harvard University and head of the Computation, Cognition, and Development Lab, who was not involved in this study.

“Even young children seem to understand social actions like ‘helping’ or ‘hindering’, but we do not yet have machines that can perform this reasoning with anything like human flexibility. I think models in which agents reason about the rewards of others and, taking the social context into account, plan how best to hinder or support them are a good step in the right direction.”

