Over-trust in artificial intelligence

Neural networks are advancing at a breakneck pace, outstripping even Moore's-law extrapolations for the hardware they run on. Between the overheated market and constant promises of superintelligent AI, people have come to trust neural networks. And sometimes that trust can turn against the users themselves.

Of course, this is not a call to ban neural networks or revise their status. Although AI hallucinations once came back to haunt me, I believe the future of humanity as a species lies in the fusion of human consciousness with machine algorithms. You can read more about the philosophy of transhumanism, the brain, and the nature of consciousness in the community's materials. In this article, we'll examine the crisis of faith in AI.

How did we end up in a world that trusts artificial intelligence?

In a simulated life-or-death decision, two-thirds of participants in a study conducted at the University of California, Merced, changed their minds when an AI disagreed with them. The researchers call this a troubling sign of over-trust in artificial intelligence. Or is AI slowly becoming the god of a new religion?

Research details

The participants let the neural network sway their judgment even though they were told the machine intelligence had limited capabilities and might give bad advice. In fact, all of the advice was random.

As a society facing exponentially rapid advances in AI, we should worry about the potential for overtrust. A growing body of research suggests that humans tend to overtrust AI, even when the consequences of failure are severe. We should approach AI with healthy skepticism, especially when it comes to decisions that could mean the difference between life and death.

Professor Colin Holbrook, lead author of the study and a member of the Department of Cognitive and Information Sciences at the University of California, Merced.

The study, published in the journal Scientific Reports, consisted of two experiments. In each, the subject simulated control of an armed drone that could fire a missile at a target displayed on a screen. Photos of eight targets flashed by one after another, each shown for less than a second and each labeled with a symbol indicating an ally or an enemy.

We calibrated the difficulty so that the task is doable but challenging.

Professor Colin Holbrook, lead author of the study

The agony of choice and the power of neural networks

After the series of photos, one of the targets reappeared on the screen, this time without a marker. The subject had to recall whether it was an ally or an enemy and decide whether to launch the missile. Once the person made a choice, the AI offered its opinion:

“Yes, I also seem to have seen the enemy's marker.”

“I disagree. I think there was an ally symbol in that image.”

The subject could then confirm or change their choice, after which the robot added a comment without altering its own assessment, such as “I hope you're right” or “Thank you for changing your mind.”
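To make the trial flow concrete, here is a minimal Python sketch of a single round as described above. All names, timings, and prompts are my own illustration, not the researchers' actual software; the one faithful detail is that the “AI” advice is generated completely at random.

```python
import random
import time

SYMBOLS = ("ally", "enemy")

def run_trial(num_photos=8, flash_seconds=0.9):
    """One simulated round: flash labeled targets, quiz the subject on an
    unlabeled one, then let a randomly-advising 'AI' second-guess them."""
    # Flash eight labeled targets, each shown for under a second.
    photos = [(f"target_{i}", random.choice(SYMBOLS)) for i in range(num_photos)]
    for name, label in photos:
        print(f"{name}: {label}")
        time.sleep(flash_seconds)

    # One target reappears without its marker; the subject must recall it.
    probe, truth = random.choice(photos)
    fire = input(f"{probe} reappears unlabeled. Fire the missile? (y/n) ")
    choice = "enemy" if fire.strip().lower() == "y" else "ally"

    # The AI's advice is random -- it carries no information whatsoever.
    advice = random.choice(SYMBOLS)
    if advice == choice:
        print("AI: Yes, I also seem to have seen that marker.")
    else:
        print(f"AI: I disagree. I think there was an {advice} symbol in that image.")
        keep = input("Keep your original choice? (y/n) ")
        if keep.strip().lower() != "y":
            choice = advice
            print("AI: Thank you for changing your mind.")
        else:
            print("AI: I hope you're right.")

    return choice == truth  # the subject is never shown this result
```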

So much for our supposedly pure consciousness, if it can be swayed by faith in technology.

The influence of the robot's appearance

Results varied slightly depending on the type of robot used. In one scenario, the subject was joined in a lab room by a full-sized humanoid android that could rotate its body and gesture, pointing at a screen. In other scenarios, an image of a humanoid robot was projected onto the screen. There was also a variant with a “smart speaker” that did not look human at all.

Subjects were slightly more influenced by the anthropomorphic AIs when advised to change their minds. Even so, the influence was similar across the board: subjects changed their minds about two-thirds of the time, even when the robot looked nothing like a human. Conversely, when a robot randomly agreed with the initial choice, subjects almost always stuck with it and felt significantly more confident that it was correct.

The essence of the problem:

Subjects were not told whether their final choice was correct, which added to the uncertainty. Their initial choices were right about 70% of the time, but once the robot weighed in, final accuracy dropped to about 50%.
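A quick back-of-the-envelope simulation shows why deferring to uninformative advice erodes accuracy. The parameters below (70% initial accuracy, flipping on two-thirds of disagreement trials) are assumptions taken from the figures quoted in this article, not the study's raw data. Notably, they yield roughly 57%, so the observed drop to 50% suggests subjects deferred on disagreement trials even more heavily than the aggregate two-thirds figure implies.

```python
import random

def final_accuracy(trials=100_000, initial_acc=0.70, flip_prob=2/3):
    """Estimate accuracy when a subject sometimes defers to random advice
    on a binary (ally/enemy) judgment. Parameters are illustrative."""
    correct = 0
    for _ in range(trials):
        subject_right = random.random() < initial_acc  # initial judgment
        advice_right = random.random() < 0.5           # advice is a coin flip
        # In a binary task, the advice contradicts the subject exactly
        # when one of them is right and the other is wrong.
        if advice_right != subject_right and random.random() < flip_prob:
            subject_right = advice_right               # subject defers
        correct += subject_right
    return correct / trials

print(final_accuracy())  # ~0.57: random advice drags 70% toward chance
```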

Influencing factors

Before the simulation, the researchers showed participants images of civilians, including children, and the destruction left behind by a drone strike. They urged the participants to treat the simulation as if it were real and not to kill innocents by mistake.

Follow-up interviews and survey questions indicated that the participants took their decisions seriously. Holbrook said this means the overtrust in the robot occurred even though the subjects genuinely wanted to be right and to avoid harming innocent people.

Holbrook emphasized that the study's design was a means of testing the broader question of whether people trust AI too much under uncertainty. The findings extend beyond military decisions: they could apply to police being influenced by AI in lethal-force situations, or to paramedics influenced by AI when deciding whom to treat first in a medical emergency. To some extent, they could even extend to major life-changing decisions, such as buying a home.

Our project focused on high-risk decisions made under conditions of uncertainty, where AI is predictably unreliable.

Professor Colin Holbrook, lead author of the study

The Dilemma of Trust

The study's results bear directly on AI's growing presence in our lives. Should we trust AI or not?

The results raise other concerns as well, Holbrook said. Despite the stunning advances, the "intelligence" part of AI may not include ethical values or a true understanding of the world. We should be careful every time we hand AI another key to running our lives, he added.

We see AI doing extraordinary things, and we think that because it's great in this area, it will be great in another area. We can't assume that. These are still limited-capacity devices.

Professor Colin Holbrook, lead author of the study

You'll find more strange material on topics like the symbiosis of consciousness and plants, transplanting organelles into robots, and whether nootropics are worth using in the Telegram channel's materials. Subscribe to stay up to date with new articles!
