Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI



Another OpenAI safety researcher has left the company. In a post on X, Steven Adler called the global race toward AGI a “very risky gamble.”

Steven Adler, an OpenAI safety researcher, announced he had left the company late last year after four years there. In a post shared on X, Adler criticized a race toward AGI that is taking shape between leading AI labs and global superpowers.

“An AGI race is a very risky gamble, with huge downside,” he said. “No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.” Alignment is the process of keeping AI working toward human goals and values, not against them.

Adler worked as an AI safety lead at OpenAI, leading safety-related research and programs for product launches and speculative long-term AI systems, per his professional profile. He is also listed as an author on OpenAI research publications.

In the X post announcing his exit from the company, he called his time at OpenAI “a wild ride with lots of chapters,” adding he would “miss many parts of it.”

However, he said he was personally “pretty terrified by the pace of AI development.”

“When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?” he said.

“Today, it seems like we’re stuck in a really bad equilibrium. Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes all to speed up. I hope labs can be candid about real safety regs needed to stop this,” he continued.

Adler’s fears have been echoed by some of the world’s leading AI researchers.

Stuart Russell, professor of computer science at the University of California, Berkeley, told the Financial Times that the "AGI race is a race towards the edge of a cliff."

“Even the CEOs who are engaging in the race have stated that whoever wins has a significant probability of causing human extinction in the process, because we have no idea how to control systems more intelligent than ourselves,” he said.

Adler's and Russell's comments come amid increased attention on a global AI race between the U.S. and China. The news that Chinese AI lab DeepSeek had potentially built an equal or better AI model than leading U.S. labs at a fraction of the cost sparked a reaction from leading tech figures, including OpenAI CEO Sam Altman.

Altman said on Monday it was “invigorating to have a new competitor,” adding he would move up some of OpenAI’s new releases in response to DeepSeek’s “impressive model.” Altman said he looked “forward to bringing” the world “AGI and beyond.”

Adler did not immediately respond to a request for further details from Fortune, made outside normal working hours.

It’s not the first time OpenAI has had to confront questions about its approach to AI safety.

The AI lab has been embroiled in several public scandals that appeared to stem from internal disagreements over AI safety, including one incident in late 2023 that saw Altman briefly removed from the company. Altman was reinstated five days later after pressure from employees and investors, but questions around his departure hung over the company for months.

Last year, OpenAI lost two prominent AI researchers, Ilya Sutskever and Jan Leike. The pair were co-leading the company’s Superalignment team at the time, which was focused on AI safety and had been working to achieve “scientific and technical breakthroughs to steer and control AI systems much smarter than us.”

Leike openly blamed a lack of focus on safety at the company for his departure, saying that over the past years, "safety culture and processes have taken a backseat to shiny products."

In a post on X, Leike added that he had been “disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we reached a breaking point.”

While Sutskever has not publicly criticized OpenAI’s approach to AI safety, he has consistently highlighted the challenges of AI safety and alignment and was said to be a driving force behind Altman’s 2023 ouster.

Several former employees have also joined the chorus of voices criticizing the company’s internal commitment to AI safety.

Daniel Kokotajlo, a former OpenAI governance researcher, said last year that nearly half of the company's staff that once focused on the long-term risks of superpowerful AI had left the company.

This story was originally featured on Fortune.com.



