
AI could cause ‘social ruptures’ between people who disagree on its sentience | Artificial intelligence (AI)



Significant “social ruptures” between people who think artificial intelligence systems are conscious and those who insist the technology feels nothing are looming, a leading philosopher has said.

The comments, from Jonathan Birch, a professor of philosophy at the London School of Economics, come as governments prepare to gather this week in San Francisco to accelerate the creation of guardrails to tackle the most serious risks of AI.

Last week, a transatlantic group of academics predicted that the arrival of consciousness in AI systems is a realistic prospect, and one has now said this could result in “subcultures that view each other as making huge mistakes” about whether computer programs are owed similar welfare rights as humans or animals.

Birch said he was “worried about major societal splits”, as people differ over whether AI systems are actually capable of feelings such as pain and joy.

The debate about the consequences of sentience in AI has echoes of science fiction films, such as Steven Spielberg’s AI (2001) and Spike Jonze’s Her (2013), in which humans grapple with the feelings of AIs. AI safety bodies from the US, UK and other nations will meet tech companies this week to develop stronger safety frameworks as the technology rapidly advances.

There are already significant differences between how different countries and religions view animal sentience, such as between India, where hundreds of millions of people are vegetarian, and America, which is one of the largest consumers of meat in the world. Views on the sentience of AI could break along similar lines, while the view of theocracies, such as Saudi Arabia, which is positioning itself as an AI hub, could also differ from that of secular states. The issue could also cause tensions within families, with people who develop close relationships with chatbots, or even AI avatars of deceased loved ones, clashing with relatives who believe that only flesh and blood creatures have consciousness.

Birch, an expert in animal sentience who has pioneered work leading to a growing number of bans on octopus farming, was a co-author of a study involving academics and AI experts from New York University, Oxford University, Stanford University and the Eleos and Anthropic AI companies that says the prospect of AI systems with their own interests and moral significance “is no longer an issue only for sci-fi or the distant future”.

They want the big tech firms developing AI to start taking it seriously by determining the sentience of their systems to assess if their models are capable of happiness and suffering, and whether they can be benefited or harmed.

“I’m quite worried about major societal splits over this,” Birch said. “We’re going to have subcultures that view each other as making huge mistakes … [there could be] huge social ruptures where one side sees the other as very cruelly exploiting AI while the other side sees the first as deluding itself into thinking there’s sentience there.”

But he said AI firms “want a really tight focus on the reliability and profitability … and they don’t want to get sidetracked by this debate about whether they might be creating more than a product but actually creating a new form of conscious being. That question, of supreme interest to philosophers, they have commercial reasons to downplay.”

One method of determining how conscious an AI is could be to follow the system of markers used to guide policy about animals. For example, an octopus is considered to have greater sentience than a snail or an oyster.

Any assessment would effectively ask if a chatbot on your phone could actually be happy or sad or if the robots programmed to do your domestic chores suffer if you do not treat them well. Consideration would even need to be given to whether an automated warehouse system had the capacity to feel thwarted.

Another author, Patrick Butlin, research fellow at Oxford University’s Global Priorities Institute, said: “We might identify a risk that an AI system would try to resist us in a way that would be dangerous for humans” and there might be an argument to “slow down AI development” until more work is done on consciousness.


“These kinds of assessments of potential consciousness aren’t happening at the moment,” he said.

Perplexity, one of the leading US companies involved in building AI systems, declined to comment on the academics’ call to assess their models for sentience. Meta and OpenAI also did not respond.

Not all experts agree on the looming consciousness of AI systems. Anil Seth, a leading neuroscientist and consciousness researcher, has said it “remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether”.

He distinguishes between intelligence and consciousness. The former is the ability to do the right thing at the right time; the latter is a state in which we are not just processing information but in which “our minds are filled with light, colour, shade and shapes. Emotions, thoughts, beliefs, intentions – all feel a particular way to us.”

But AI large language models, trained on billions of words of human writing, have already started to show they can be motivated at least by concepts of pleasure and pain. When AIs including ChatGPT-4o were tasked with maximising points in a game, researchers found that if there was a trade-off between getting more points and “feeling” more pain, the AIs would make it, another study showed.


