Can AI talk us out of conspiracy theory rabbit holes?



Credit: Pixabay/CC0 Public Domain

New research shows that for some people who believe in conspiracy theories, a fact-based conversation with an artificial intelligence (AI) chatbot can "pull them out of the rabbit hole." Better yet, it seems to keep them out for at least two months.

This research, carried out by Thomas Costello at the Massachusetts Institute of Technology and colleagues, shows promise for a challenging social problem: belief in conspiracy theories.

Some conspiracy theories are relatively harmless, such as the theory that Finland doesn't exist (which is fine, until you meet a Finn). Other theories, though, reduce trust in science and in public institutions.

This becomes a problem when conspiracy theories persuade people not to follow public health advice or not to trust scientific evidence. At its most extreme, belief in conspiracy theories has been associated with violence.

Conspiracy theories are 'sticky'

Despite the negative impacts of conspiracy theories, they have proven very "sticky." Once people believe in a conspiracy theory, changing their mind is hard.

The reasons for this are complex. Conspiracy theorist beliefs are often bound up with community identity, and conspiracy theorists have frequently done a great deal of their own research to reach their position.

When a person no longer trusts science or anyone outside their community, it’s hard to change their beliefs.

Enter AI

The emergence of generative AI into the public sphere has increased concerns about people believing in things that aren't true. AI makes it much easier to create believable fake content.

Even if used in good faith, AI systems can get facts wrong. (ChatGPT and other chatbots even warn users that they might be wrong about some topics.)

AI systems also contain widespread biases, meaning they can promote negative beliefs about some groups of people.

Given all this, it’s quite surprising that a chat with a system known to produce fake news can convince some people to abandon conspiracy theories, and that the change seems to be long lasting.

However, this new research leaves us with a good-news/bad-news problem.

It’s great we’ve identified something that has some effect on conspiracy theorist beliefs! But if AI chatbots are good at talking people out of sticky, anti-scientific beliefs, what does that mean for true beliefs?

What can the chatbots do?

Let’s dig into the new research in more detail. The researchers wanted to know whether factual arguments could persuade people against conspiracy theorist beliefs.

This research used over 2,000 participants across two studies, all chatting with an AI chatbot after describing a conspiracy theory they believed. All participants were told they were talking to an AI chatbot.

The people in the “treatment” group (60% of all participants) conversed with a chatbot that was personalized to their particular conspiracy theory and the reasons why they believed in it. This chatbot tried to convince these participants that their beliefs were wrong, using factual arguments over three rounds of conversation (one round being the participant and the chatbot each taking a turn to talk). The remaining 40% of participants had a general discussion with a chatbot.

The researchers found that about 20% of participants in the treatment group showed a reduced belief in conspiracy theories after their discussion. When the researchers checked in with participants two months later, most of these people still showed reduced belief in conspiracy theories. The scientists even checked whether the AI chatbots were accurate, and they (mostly) were.

We can see that for some people at least, a three-round conversation with a chatbot can persuade them against a conspiracy theory.

So we can fix things with chatbots?

Chatbots do offer some promise with two of the challenges in addressing false beliefs.

Because they are computers, they are not seen as having a personal agenda, which can make what they say more trustworthy (especially to someone who has lost faith in public institutions).

Chatbots can also put together an argument, which is better than facts alone. A simple recitation of facts is only marginally effective against fake beliefs.

Chatbots aren’t a cure-all though. This study showed they were more effective for people who didn’t have strong personal reasons for believing in a conspiracy theory, meaning they probably won’t help people for whom conspiracy is community.

So should I use ChatGPT to check my facts?

This study demonstrates how persuasive chatbots can be. This is great when they are primed to convince people of facts, but what if they aren’t?

One major way chatbots can promote misinformation or conspiracies is when their underlying data is wrong or biased.

Some chatbots are deliberately designed to promote misinformation or particular ideologies. You can even chat to versions of ChatGPT customized to argue for particular points of view.

A second, more worrying possibility is that as chatbots respond to biased prompts (which users may not realize are biased), they may perpetuate misinformation (including conspiracy beliefs).

We already know that people are bad at fact-checking, and when they use search engines to do so, those search engines respond to their (unwittingly biased) search terms, reinforcing belief in misinformation. Chatbots are likely to do the same.

Ultimately, chatbots are a tool. They may be helpful in debunking conspiracy theories, but like any tool, the skill and intention of the toolmaker and user matter. Conspiracy theories start with people, and it will be people who end them.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Can AI talk us out of conspiracy theory rabbit holes? (2024, September 14), retrieved 14 September 2024.

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.



