
TechScape: The people charged with making sure AI doesn’t destroy humanity have left the building | Technology



I’m in Seoul for the International AI summit, the half-year follow-up to last year’s Bletchley Park AI safety summit (the full sequel will be in Paris this autumn). While you read this, the first day of events will have just wrapped up – though, in keeping with the reduced fuss this time round, that was merely a “virtual” leaders’ meeting.

When the date was set for this summit – alarmingly late in the day for, say, a journalist with two preschool children for whom four days away from home is a juggling act – it was clear that there would be a lot to cover. The hot AI summer is upon us:

The inaugural AI safety summit at Bletchley Park in the UK last year announced an international testing framework for AI models, after calls … for a six-month pause in development of powerful systems.

There has been no pause. The Bletchley declaration, signed by the UK, US, EU, China and others, hailed the “enormous global opportunities” from AI but also warned of its potential for causing “catastrophic” harm. It also secured a commitment from big tech firms including OpenAI, Google and Mark Zuckerberg’s Meta to cooperate with governments on testing their models before they are released.

While the UK and US have established national AI safety institutes, the industry’s development of AI has continued … OpenAI released GPT-4o (the o stands for “omni”) for free online; a day later, Google previewed a new AI assistant called Project Astra, as well as updates to its Gemini model. Last month, Meta released new versions of its own AI model, Llama. And in March, the AI startup Anthropic, formed by former OpenAI staff who disagreed with Altman’s approach, updated its Claude model.

Then, the weekend before the summit kicked off, everything kicked off at OpenAI as well. Most eye-catchingly, perhaps, the company found itself in a row with Scarlett Johansson over one of the voice options available in the new iteration of ChatGPT. Having approached the actor to lend her voice to its new assistant, an offer she declined twice, OpenAI launched GPT-4o with “Sky” talking through its new capabilities. The similarity to Johansson was immediately obvious to all, even before CEO Sam Altman tweeted “her” after the presentation (the name of the Spike Jonze film in which Johansson voiced a super-intelligent AI). Though OpenAI denies the similarity, the Sky voice option has been removed.

More importantly, though, the two men leading the company/nonprofit/secret villainous organisation’s “superalignment” team – which was devoted to ensuring that its efforts to build a superintelligence don’t end humanity – quit. First to go was Ilya Sutskever, the co-founder of the organisation and leader of the boardroom coup which, temporarily and ineffectually, ousted Altman. His exit raised eyebrows, but it was hardly unforeseen. You come at the king, you best not miss. Then, on Friday, Jan Leike, Sutskever’s co-lead of superalignment, also left, and had a lot more to say:

A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.

Leike detailed the reasons for his departure in a thread on X posted on Friday, in which he said safety culture had become a lower priority. “Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote, adding that it was getting “harder and harder” for his team to do its research.

“Building smarter-than-human machines is an inherently dangerous endeavour. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote, adding that OpenAI “must become a safety-first AGI [artificial general intelligence] company”.

Leike’s resignation note was a rare insight into dissent at the group, which has previously been portrayed as almost single-minded in its pursuit of its – which sometimes means Sam Altman’s – goals. When the charismatic chief executive was fired, it was reported that almost all staff had accepted offers from Microsoft to follow him to a new AI lab set up under the House of Gates, which also has the largest external stake in OpenAI’s corporate subsidiary. Even when a number of staff quit to form Anthropic, a rival AI company that distinguishes itself by talking up how much it focuses on safety, the amount of trash-talking was kept to a minimum.

It turns out (surprise!) that’s not because everyone loves each other and has nothing bad to say.

Vox’s Kelsey Piper reported:

I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI”, has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

Barely a day later, Altman said the clawback provisions “should never have been something we had in any documents”. He added: “we have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement. this is on me and one of the few times I’ve been genuinely embarrassed running openai; i did not know this was happening and i should have.” (Capitalisation model’s own.)

Altman didn’t address the wider allegations of a strict and broad NDA; and, while he promised to fix the clawback provision, nothing was said about the other incentives, carrot and stick, offered to employees to sign the exit paperwork.

As set-dressing goes, it’s perfect. Altman has been a significant proponent of state and interstate regulation of AI. Now we see why it might be necessary. If OpenAI, one of the biggest and best-resourced AI labs in the world, which claims that safety is at the root of everything it does, can’t even keep its own team together, then what hope is there for the rest of the industry?


Sloppy

The ‘Shrimp Jesus’ is an example of the outlandish AI-generated art being shared on Facebook

It’s fun to watch a term of art developing in front of your eyes. Post had junk mail; email had spam; the AI world has slop:

“Slop” is what you get when you shove artificial intelligence-generated material up on the web for anyone to view.

Unlike a chatbot, the slop isn’t interactive, and is rarely intended to actually answer readers’ questions or serve their needs.

But like spam, its overall effect is negative: the lost time and effort of users who now have to wade through slop to find the content they’re actually seeking far outweighs the profit to the slop creator.

I’m keen to help popularise the term, for much the same reasons as Simon Willison, the developer who brought its emergence to my attention: it’s crucial to have easy ways to talk about AI done badly, to preserve the ability to acknowledge that AI can be done well.

The existence of spam implies emails that you want to receive; the existence of slop entails AI content that is desired. For me, that’s content I’ve generated myself, or at least that I’m expecting to be AI-generated. No one cares about the dream you had last night, and no one cares about the response you got from ChatGPT. Keep it to yourself.





