The OpenAI team tasked with protecting humanity is no more

In the summer of 2023, OpenAI created a “Superalignment” team whose goal was to steer and control future AI systems that could be so powerful they could lead to human extinction. Less than a year later, that team is dead.

OpenAI told Bloomberg that the company was “integrating the group more deeply across its research efforts to help the company achieve its safety goals.” But a series of tweets from Jan Leike, one of the team’s leaders who recently quit, revealed internal tensions between the safety team and the larger company.

In a statement posted on X on Friday, Leike said that the Superalignment team had been fighting for resources to get its research done. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” OpenAI did not immediately respond to a request for comment from Engadget.

Leike’s departure earlier this week came hours after OpenAI chief scientist Ilya Sutskever announced that he was leaving the company. Sutskever was not only one of the leads on the Superalignment team; he also helped co-found the company. Sutskever’s move came six months after he was involved in the decision to fire CEO Sam Altman over concerns that Altman hadn’t been “consistently candid” with the board. Altman’s all-too-brief ouster sparked an internal revolt, with nearly 800 employees signing a letter threatening to quit if Altman wasn’t reinstated. Five days later, Altman was back as OpenAI’s CEO, after Sutskever had signed a letter stating that he regretted his actions.

When it announced the creation of the Superalignment team, OpenAI said that it would dedicate 20 percent of its computing power over the next four years to solving the problem of controlling the powerful AI systems of the future. “[Getting] this right is critical to achieve our mission,” the company wrote at the time. On X, Leike wrote that the Superalignment team had been “struggling for compute and it was getting harder and harder” to get crucial research around AI safety done. “Over the past few months my team has been sailing against the wind,” he wrote, adding that he had reached “a breaking point” with OpenAI’s leadership over disagreements about the company’s core priorities.

Over the last few months, there have been more departures from the Superalignment team. In April, OpenAI reportedly fired two researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.

OpenAI told Bloomberg that its future safety efforts will be led by John Schulman, another co-founder, whose research focuses on large language models. Jakub Pachocki, a director who led the development of GPT-4, one of OpenAI’s flagship large language models, would replace Sutskever as chief scientist.

Superalignment wasn’t the only team at OpenAI focused on AI safety.
In October, the company announced a brand-new “preparedness” team to stem potential “catastrophic risks” from AI systems, including cybersecurity issues and chemical, nuclear and biological threats.

Update, May 17, 2024, 3:28 PM ET: In response to a request for comment on Leike’s allegations, an OpenAI PR person directed Engadget to Sam Altman’s tweet saying that he would say something in the next couple of days.