
Why the fake news confidence trap could be your downfall | Technology



It’s a wild world out there online, with dis- and misinformation flying about at pace. I’m part-way through writing a book about the history of fake news, so I’m well aware that people making stuff up is not new. But what is new is the reach that troublemakers have, whether their actions are deliberate or accidental.

Social media and the wider web changed the game for mischief-makers, and made it easier for the rest of us to be inadvertently hoodwinked online (see: the odd “Goodbye Meta AI” trend that I wrote about this week for the Guardian). The rise of generative AI since the release of ChatGPT in 2022 has also supercharged the risks. While early research suggests our biggest fears about the impact of AI-generated deepfakes on elections are unfounded, the overall information environment is a puzzling one.

Seeing is believing?

That much is evident in information gathered by the Behavioural Insights Team (BIT) – a social-purpose organisation spun out of the UK government – and shared exclusively with me for TechScape. The survey of 2,000 UK adults highlights just how confusing the wild west web is at present.

While 59% of those surveyed by BIT think they can spot false information online, BIT researchers found that only 36% of people were confident that others could spot fake news.

That’s a problem for two reasons. One is our low confidence in other people’s ability to identify false stories. The other is the gulf in perception between our own abilities and those of the public at large. I suspect that if we actually measured how well people discern disinformation from the truth, it’d be closer to the lower number than the higher one. In short, we tend to think we’re smarter than we are.

Don’t believe me? For my first book, I commissioned a survey by YouGov to see how well the public recognised major figures on the platform. The team at YouGov recommended that, among the real names, I should add someone who didn’t exist as a kind of sense check to identify the proportion of people who were bluffing. A worrying number of respondents confidently said they knew the person the pollsters had invented – and knew them well.

A morass of misinformation

All of this matters because of the scale of the false info problem that’s out there.

Three-quarters of respondents to the BIT survey said they’d seen fake news in the last week, with X (which pared back its content moderation teams to the bone in favour of Elon Musk’s free speech absolutism) and TikTok considered the worst offenders. LinkedIn was seen as the least worst (though it’s not entirely clear whether that’s because many avoid the platform, which has a reputation for being boring).

Regardless, the findings sit uneasily with those who conducted the research. “Our latest research has added to the growing evidence that social media users are overconfident in their ability to spot false information,” says Eva Kolker, head of consumer and business markets at BIT. “Paradoxically, this might actually be making people more susceptible to it.”

In short: if you think you’re better than others at spotting fake news, you’re actually more likely to have lower defences – and fall foul of it when you (inevitably) encounter it online.

What should be done?

Responsibility to detect fake news rests with social media platforms and governments, not just individuals, the BIT says. Photograph: Rebecca Lewington/Cerebras Systems/Reuters

Well, a start would be empowering users to be more aware of the risks of fake news and the impact it can have when shared with their social circles. Things snowball quickly thanks to the mob mentality mediated by social media algorithms, such as that Goodbye Meta AI post. So thinking twice and clicking once is important. (I outlined better ways to combat threats to our data in this piece.)

But Kolker isn’t convinced that’s enough. “Many of our attempts to improve online safety have focused on improving the knowledge and capability of individual users,” she says. “While important, our research shows there are inherent limits to the effectiveness of this approach.”

“We can’t just rely on individuals changing their behaviour. To really combat misinformation we also need social media platforms to take action and regulators and government to intervene to level the playing field.”


Is it time for an intervention?

The BIT came up with a slate of recommendations it would put to governments and social media platforms to try to combat mis- and disinformation. First among them is to flag posts that contain false information as soon as they’re spotted, to make the public aware before they share. To Meta’s credit, that’s something it did with the Goodbye Meta AI trend, adding labels to posts pointing out that the information was not correct.

The BIT also recommends that platforms become more stringent in how much legal but harmful content they show. Conspiracy theories fester in a putrid information environment, and the BIT seems to be suggesting that the standard Silicon Valley approach – that sunlight is the best disinfectant – doesn’t cut it.

Except in one instance. Their third recommendation is regular public rankings of how much false or harmful content is on each platform.

Whether any of this will work is tough to say. I’ve been looking at the science alongside studies and surveys like BIT’s lately for a number of reasons, and every positive intervention also appears to have its drawbacks. But if the Goodbye Meta AI trend going viral shows us anything, it’s that we can’t just assume people are able to distinguish what’s real from what’s not.

Chris Stokel-Walker’s next book, about the history of fake news, is due out in spring 2025.

The wider TechScape

Musau Mutisya uses the PlantVillage app to diagnose a maize plant on his farm in Matungulu sub-county in Machakos county, Kenya. Photograph: Stephen Mukhongi/The Guardian



