OpenAI removes AI safety executive Aleksander Madry from role, reassigns him to AI reasoning



OpenAI last week removed Aleksander Madry, one of its top safety executives, from his role and reassigned him to a job focused on AI reasoning, sources familiar with the situation confirmed to CNBC.

Madry was OpenAI’s head of preparedness, a team that was “tasked with tracking, evaluating, forecasting, and helping protect against catastrophic risks related to frontier AI models,” according to a bio for Madry.

Madry is also director of MIT’s Center for Deployable Machine Learning and a faculty co-lead of the MIT AI Policy Forum, roles from which he is currently on leave, according to the university’s website.

The decision to reassign Madry came less than a week before a group of Democratic senators sent a letter to OpenAI CEO Sam Altman concerning “questions about how OpenAI is addressing emerging safety concerns.”

The letter, sent Monday and viewed by CNBC, also stated, “We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats.”

OpenAI did not immediately respond to a request for comment.

The lawmakers requested that OpenAI respond with a series of answers to specific questions about its safety practices and financial commitments by August 13.

It’s all part of a summer of mounting safety concerns and controversies surrounding OpenAI, which along with Microsoft, Google, Meta and other companies is at the helm of a generative AI arms race, a market that is predicted to top $1 trillion in revenue within a decade, as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.

Earlier this month, Microsoft gave up its observer seat on OpenAI’s board, stating in a letter viewed by CNBC that it can now step aside because it’s satisfied with the construction of the startup’s board, which has been revamped in the eight months since an uprising that led to the brief ouster of CEO Sam Altman and threatened Microsoft’s massive investment into OpenAI.

But last month, a group of current and former OpenAI employees published an open letter describing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote at the time.

Days after the letter was published, a source familiar with the matter confirmed to CNBC that the FTC and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies’ conduct.

FTC Chair Lina Khan has described her agency’s action as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”

The current and former employees wrote in the June letter that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they’ve put in place and the risk levels that technology has for different types of harm.

“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

In May, OpenAI decided to disband its team focused on the long-term risks of AI just one year after it announced the group, a person familiar with the situation confirmed to CNBC at the time.

The person, who spoke on condition of anonymity, said some of the team members are being reassigned to other teams within the company.

The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup in May. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

CEO Sam Altman wrote at the time on X that he was sad to see Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement attributed to Brockman and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote at the time. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote at the time. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

The Information first reported on Madry’s reassignment.


