Doing more but learning less: Addressing the risks of AI in research


Artificial intelligence (AI) is widely heralded for its potential to enhance productivity in scientific research. But with that promise come risks that could narrow scientists’ ability to better understand the world, according to a new paper co-authored by a Yale anthropologist.

Some future AI approaches, the authors argue, could constrict the questions researchers ask, the experiments they perform, and the perspectives that come to bear on scientific data and theories.

All told, these factors could leave people vulnerable to “illusions of understanding” in which they believe they comprehend the world better than they do.

The paper is published in Nature.

“There is a risk that scientists will use AI to produce more while understanding less,” said co-author Lisa Messeri, an anthropologist in Yale’s Faculty of Arts and Sciences. “We’re not arguing that scientists shouldn’t use AI tools, but we’re advocating for a conversation about how scientists will use them and suggesting that we shouldn’t automatically assume that all uses of the technology, or the ubiquitous use of it, will benefit science.”

The paper, co-authored by Princeton cognitive scientist M. J. Crockett, sets a framework for discussing the risks involved in using AI tools throughout the scientific research process, from study design through peer review.

“We hope this paper offers a vocabulary for talking about AI’s potential epistemic risks,” Messeri said.

Added Crockett, “To understand these risks, scientists can benefit from work in the humanities and qualitative social sciences.”

Messeri and Crockett classified the proposed visions of AI currently generating buzz among researchers into four archetypes spanning the scientific process:

  • In study design, they argue, “AI as Oracle” tools are imagined as being able to objectively and efficiently search, evaluate, and summarize massive scientific literatures, helping researchers to formulate questions in their project’s design stage.
  • In data collection, “AI as Surrogate” applications, it is hoped, allow scientists to generate accurate stand-in data points, including as a replacement for human study participants, when data is otherwise too difficult or expensive to obtain.
  • In data analysis, “AI as Quant” tools seek to surpass the human intellect’s ability to analyze vast and complex datasets.
  • And “AI as Arbiter” applications aim to objectively evaluate scientific studies for merit and replicability, thereby replacing humans in the peer-review process.

The authors warn against treating AI applications from these four archetypes as trusted partners, rather than simply tools, in the production of scientific knowledge. Doing so, they say, could make scientists susceptible to illusions of understanding, which can crimp their perspectives and convince them that they know more than they do.

The efficiencies and insights that AI tools promise can weaken the production of scientific knowledge by creating “monocultures of knowing,” in which researchers prioritize the questions and methods best suited to AI over other modes of inquiry, Messeri and Crockett state. A scholarly environment of that kind leaves researchers vulnerable to what they call “illusions of exploratory breadth,” where scientists wrongly believe that they are exploring all testable hypotheses, when they are only examining the narrower range of questions that can be tested through AI.

For example, “Surrogate” AI tools that seem to accurately mimic human survey responses could make experiments that require measurements of physical behavior or face-to-face interactions increasingly unpopular because they are slower and more expensive to conduct, Crockett said.

The authors also describe the possibility that AI tools come to be viewed as more objective and reliable than human scientists, creating a “monoculture of knowers” in which AI systems are treated as a singular, authoritative, and objective knower in place of a diverse community of scientists with varied backgrounds, training, and expertise. Such a monoculture, they say, invites “illusions of objectivity,” where scientists falsely believe that AI tools have no perspective or represent all perspectives when, in truth, they represent the standpoints of the computer scientists who developed and trained them.

“There is a belief around science that the objective observer is the ideal creator of knowledge about the world,” Messeri said. “But this is a myth. There has never been an objective ‘knower,’ there can never be one, and continuing to pursue this myth only weakens science.”

There is substantial evidence that human diversity makes science more robust and creative, the authors add.

“Acknowledging that science is a social practice that benefits from including diverse standpoints will help us realize its full potential,” Crockett said. “Replacing diverse standpoints with AI tools will set back the clock on the progress we’ve made toward including more perspectives in scientific work.”

It is important to remember AI’s social implications, which extend far beyond the laboratories where it is being used in research, Messeri said.

“We train scientists to think about technical aspects of new technology,” she said. “We don’t train them nearly as well to consider the social aspects, which is vital to future work in this domain.”

More information:
Lisa Messeri and M. J. Crockett, Artificial intelligence and illusions of understanding in scientific research, Nature (2024).

Provided by
Yale University


Citation:
Doing more but learning less: Addressing the risks of AI in research (2024, March 8),
retrieved 8 March 2024

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.




