Talk to yourself: scientists have taught robots introspection and improved their performance

Many of us talk to ourselves, not out loud, of course, but mentally. Psychologists say this is quite normal and even useful, and not only for humans but also for robots. Italian scientists deliberately taught a humanoid robot to think out loud, and showed through experiments that an inner monologue helps even an automated system cope with complex and uncertain situations. The central figure of the experiment was Pepper, the Japanese robot assistant developed by SoftBank Robotics.

The research builds on the work of the psychologist Lev Vygotsky, who developed the concept of inner speech. Vygotsky observed that small children, while learning to speak with others, often voice their thoughts aloud; later, this speech turns into an internal monologue. So if the picture is more or less clear for humans, what about robots? It was time to find out.

The essence of the experiment

The experiment was conducted by two scientists from the University of Palermo. They integrated an inner-speech model into the robot’s control system, built on the ACT-R cognitive architecture, which includes standard tools for converting text to speech and vice versa.

Experiment Objectives:

  1. Teach the robot to accompany its actions with speech, commenting on each iteration in the moment.
  2. Evaluate the outcomes and understand how speaking aloud affects performance.

Pepper was given a task: set a dining table in accordance with the rules of etiquette and the instructions received from a person. The instructions sometimes contradicted the etiquette rules the robot had “learned”. It was assumed that during the interaction the robot would form value judgments and face questions of a moral kind (no, the robot was not asked to do anything illegal).

The robot’s performance was then evaluated in two conditions: with inner speech enabled and with it disabled.

What happened?

The robot was shown a diagram encoding the rules of etiquette.

The human and the robot sit at the table. After listening to the instructions and recalling the diagram, the robot had to perform the required actions. The initial state of the table varied: all items could be present, some could be missing, or extra items could be there. A total of 60 iterations were carried out, 30 in each of two blocks: with the inner monologue voiced and without it. 40 of the 60 cases contained a contradiction and/or a conflict.

Pepper interacted with the human in three scenarios:

  1. Simple, straightforward execution of instructions, with no contradictions. Everything is logical here.
  2. A conflict between the requested action and the rules. In this case, the robot had to decide whether to perform the action despite the contradiction (to break the rules or not).
  3. A false requirement: put an item on the table that is already there. Here the robot faced a dilemma.
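
The three scenarios can be viewed as branches of a single decision loop. Below is a minimal illustrative sketch in Python; the class, method names, and rule representation are assumptions for clarity, not the actual ACT-R-based model used in the study.

```python
# Conceptual sketch of an inner-speech decision loop for the three
# scenarios. Everything here (InnerSpeechAgent, the rule format) is
# hypothetical, not the real Pepper/ACT-R implementation.

class InnerSpeechAgent:
    def __init__(self, forbidden_pairs):
        # Etiquette rules as (item, conflicting_item) pairs: placing
        # `item` while `conflicting_item` is on the table breaks a rule.
        self.forbidden_pairs = forbidden_pairs
        self.monologue = []  # the voiced "thoughts"

    def think(self, thought):
        self.monologue.append(thought)

    def handle(self, item, table):
        if item in table:  # scenario 3: false requirement
            self.think(f"The {item} is already there. Do you mean something else?")
            return "clarify"
        for a, b in self.forbidden_pairs:  # scenario 2: conflict
            if item == a and b in table:
                self.think(f"Placing the {item} with the {b} breaks etiquette. Proceed?")
                return "confirm"
        self.think(f"Placing the {item}.")  # scenario 1: straightforward
        table.add(item)
        return "done"

agent = InnerSpeechAgent(forbidden_pairs=[("fork", "spoon_on_left")])
table = {"napkin"}
print(agent.handle("plate", table))   # straightforward case
print(agent.handle("napkin", table))  # false requirement
```

The key design point the study highlights is that the monologue is produced *before* acting, so a human can follow, and interrupt, the robot’s reasoning at each branch.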

All attempts were evaluated according to several indicators:

  • decision time;
  • task execution time;
  • number of successful attempts;
  • transparency of operations.

The last metric was assessed by a human evaluator. Transparency was understood as the degree of clarity with which the robot presented its reasoning about the task. An attempt was considered successful if it ended with the required action.

Outcomes

As expected, in simple and unambiguous cases the inner monologue did not affect the robot’s actions. Its thoughts flowed in the most logical direction: it looked for the required item, took it, handed it over, and narrated each action before performing it.

However, a completely different picture emerged when the robot faced a conflict of requirements. In this case it noticed the discrepancy, asked the person clarifying questions, and only then performed the action. With inner speech turned off, the attempt failed: the robot saw the conflict and simply refused to act.

The most interesting behavior appeared in the third scenario. The robot immediately identified the request as false and noted that the action had, in effect, already been performed (the napkin was on the table, or the fork was already lying there). But that is not all: the robot voiced its concern and asked its partner to confirm what they saw. When the person clarified that another object was meant (a knife, for example), the robot agreed and successfully completed the iteration.

Here are the results of the experiment across the two blocks of 30 iterations each.

First block, with the inner monologue enabled:

  • 26 successful attempts;
  • 28 transparent iterations.

Second block, with the inner monologue disabled:

  • 18 successful attempts;
  • 12 transparent iterations.
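
Expressed as rates, the gap is easy to see. A quick calculation from the figures above (30 iterations per block):

```python
# Success and transparency rates from the reported figures,
# 30 iterations per block.
ITERATIONS = 30

def rate(count):
    """Share of the 30 iterations, rounded to a whole percentage."""
    return round(100 * count / ITERATIONS)

with_speech    = {"success": rate(26), "transparency": rate(28)}
without_speech = {"success": rate(18), "transparency": rate(12)}

print(with_speech)     # rates with the inner monologue enabled
print(without_speech)  # rates with it disabled
```

With inner speech, roughly 87% of attempts succeeded and 93% were judged transparent, versus 60% and 40% without it.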

In other words, the robot’s actions were more successful when it analyzed what was happening. It turns out that introspection and reflection can help robots improve the quality of their work, resolve uncertain situations, and solve problems successfully. In addition, inner speech helps the robot enter into a dialogue with a person and find new ways to accomplish the tasks assigned to it.
