Artificial Intelligence Convinces Conspiracy Theorists

It was once believed that ignorance stems from a lack of information, but the modern world shows this is not the whole story. Conspiracy theories abound, ranging from fairly plausible ones, in the spirit of "the government is not telling the whole story about certain experiments", to flat-earthers, proponents of a 300-year history of civilization, and so on. Yet it turns out that conversations with artificial intelligence can change these people's minds.

This is an example of how a person caught in cognitive errors needs help and fresh information. After all, even when he goes looking for facts, he starts from the attitudes already built into his mind and keeps returning to them again and again. More about cognitive traits, mental flexibility, and tools for working with the contents of our neural connections can be found in the community's materials. Subscribe to stay up to date with new articles!

Artificial intelligence convinces

In a new study, a team of scientists from American University, MIT, and Cornell University showed that conspiracy theorists can change their views after short conversations with artificial intelligence.

Study participants who believed in conspiracy theories, including a wide range of theories about the origins of the COVID-19 pandemic or external interference in the 2020 US presidential election, showed a significant and sustained decrease in conspiracy belief after speaking with the AI.

Fueled by polarized politics and social media disinformation, conspiracy theories have become a major public concern, often driving a wedge between their proponents and their friends and family. And the development of artificial intelligence is itself widely seen as a risk: deepfakes have been called the engine of a new postmodernism.

A YouGov poll conducted last December shows that a significant proportion of Americans believe in various conspiracy theories.

Personal identity or misjudgment?

These findings challenge a widely held belief in psychology: that conspiracy theorists cling to their views because those views are central to their identity and resonate with deep-seated drives and motivations.

Many conspiracy theorists are indeed willing to reconsider their beliefs if presented with compelling counter-evidence. I was initially quite surprised, but after reading the conversations between respondents and the AI, I became much less skeptical. The AI provided pages of detailed accounts of why the conspiracy was false, and did so over and over again in each round of conversation, while also remaining friendly and "building a relationship" with the participant.

Thomas Costello, associate professor of psychology at American University and lead author of the new study.

The study involved more than 2,000 people who identified themselves as conspiracy believers. Conversations with the AI reduced the average participant's belief in their chosen conspiracy theory by about 20 percent, and about 1 in 4 participants, all of whom believed in the conspiracy beforehand, disavowed it after the conversation. One wonders whether the same method could be used in reverse, to plant false beliefs in people's heads through conceptual substitution. Wouldn't that be a mark of postmodernism in a brave new world?

Until now, delivering convincing, factual messages to a large sample of conspiracy theorists in a lab experiment has been challenging. For one thing, conspiracy theorists are often very knowledgeable about the conspiracy—often more so than skeptics. Conspiracies also vary widely, so the evidence supporting a particular theory may differ among believers.

Artificial Intelligence as Intelligent Intervention

The new research comes as society debates the promise and perils of AI. The large language models that power generative AI have become powerful reservoirs of knowledge. The researchers emphasize that the study demonstrates one way these reservoirs of knowledge can be used for good: by helping people fine-tune their own beliefs and acquire more accurate ones.

The ability of artificial intelligence to connect disparate pieces of information in a matter of seconds allows counterarguments to be tailored to a believer's specific conspiracy theory in an incredibly persuasive way that no ordinary person could match.

Previous attempts to debunk dubious beliefs have had one major limitation: to debunk them, you have to guess what the other person actually believes, which is no easy task. Unlike humans, AI can respond directly to specific human arguments using strong counter-evidence. This provides a unique opportunity to test how receptive humans are to counter-evidence.

Gordon Pennycook, associate professor of psychology at Cornell University and co-author of the paper.

Chatbot as a cure for conspiracy theorists

The researchers developed a chatbot designed to be highly persuasive and to engage participants on their stated beliefs. GPT-4, the AI model behind ChatGPT, provided factual refutations of conspiracy theories. The procedure was structured as follows:

  • In two separate experiments, participants were asked to describe a conspiracy theory they believed in and provide evidence to support it.

  • Participants then engaged in a conversation with the AI. The AI's goal was to challenge beliefs by referring to specific evidence.

  • In the control group, participants discussed an unrelated topic with the AI.

To tailor the dialogue, the researchers provided the AI with an initial statement of the participant's belief and the rationale given for it. This setup allowed for a more natural dialogue, with the AI directly addressing the participant's statements. The conversation lasted an average of 8.4 minutes and involved three rounds of interaction, not counting the initial setup. Given the pace of AI development, in a couple of years such a dialogue may well be conducted by a cluster of brain organoids grown on circuit boards.
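The setup-then-three-rounds structure described above can be sketched as a simple conversation loop. This is a minimal illustration, not the authors' actual code; `generate_rebuttal` is a hypothetical placeholder standing in for a call to a large language model such as GPT-4.

```python
# Minimal sketch of the study's dialogue structure (not the authors' code).
# generate_rebuttal is a placeholder for an LLM call such as GPT-4.

ROUNDS = 3  # the study reports three rounds of interaction after setup

def generate_rebuttal(belief: str, history: list) -> str:
    """Stand-in for an LLM call that returns a tailored counter-argument."""
    n = sum(1 for role, _ in history if role == "ai") + 1
    return f"Round {n}: specific evidence against the claim that {belief}."

def run_intervention(belief: str, rationale: str, rounds: int = ROUNDS) -> list:
    """Simulate the setup statement plus three rounds of AI rebuttals."""
    # Initial setup: the participant states the belief and the rationale.
    history = [("participant", f"{belief} My reasons: {rationale}")]
    for _ in range(rounds):
        history.append(("ai", generate_rebuttal(belief, history)))
        # In the real study the participant replied each round; here we
        # record a placeholder turn to keep the transcript shape.
        history.append(("participant", "Interesting, let me think about that."))
    return history

transcript = run_intervention(
    belief="outside interference decided the 2020 election",
    rationale="posts I read claimed vote counts were altered",
)
```

The key design point mirrored here is that the AI sees the participant's own statement and rationale before its first reply, so every rebuttal can address that specific argument rather than a generic version of the theory.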

The result of persuasion

Ultimately, both experiments showed a reduction in participants’ belief in conspiracy theories. When the researchers followed up with the participants two months after the intervention, they found that the effect had persisted.

While the results are promising and point to a future in which AI may play a role in reducing belief in conspiracy theories when used responsibly, further research is needed into the long-term effects using different AI models and practical applications outside of laboratory settings.

While much ink has been spilled about the potential of generative AI to amplify misinformation, our research shows that it could also be part of the solution. Large language models like GPT-4 have the potential to counter conspiracy theories on a massive scale.

David Rand, co-author of the paper and a professor at the MIT Sloan School of Management.

More odd news and materials about the brain, the psyche, human consciousness, and the role of both artificial intelligence and herbal supplements in all this can be found in the Telegram channel. Subscribe to stay up to date with new articles!
