NO FAKES Act: AI deepfakes protection or internet freedom threat?


Critics fear the revised NO FAKES Act has morphed from targeted AI deepfakes protection into sweeping censorship powers.

What began as a seemingly reasonable attempt to tackle AI-generated deepfakes has snowballed into something far more troubling, according to digital rights advocates. The much-discussed Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act – originally aimed at preventing unauthorised digital replicas of people – now threatens to fundamentally alter how the internet functions.

The bill’s expansion has set alarm bells ringing throughout the tech community. It’s gone well beyond simply protecting celebrities from fake videos to potentially creating a sweeping censorship framework.

From sensible safeguards to sledgehammer approach

The initial idea wasn’t entirely misguided: to create protections against AI systems generating fake videos of real people without permission. We’ve all seen those unsettling deepfakes circulating online.

But rather than crafting narrow, targeted measures, lawmakers have opted for what the Electronic Frontier Foundation (EFF) calls a “federalised image-licensing system” that goes far beyond reasonable protections.

“The updated bill doubles down on that initial mistaken approach,” the EFF notes, “by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them.”

What’s particularly worrying is the NO FAKES Act’s requirement for nearly every internet platform to implement systems that would not only remove content after receiving takedown notices but also prevent similar content from ever being uploaded again. Essentially, it’s forcing platforms to deploy content filters that have proven notoriously unreliable in other contexts.

Innovation-chilling

Perhaps most concerning for the AI sector is how the NO FAKES Act targets the tools themselves. The revised bill wouldn’t just go after harmful content; it would potentially shut down entire development platforms and software tools that could be used to create unauthorised images.

This approach feels reminiscent of trying to ban word processors because someone might use one to write defamatory content. The bill includes some limitations (e.g. tools must be “primarily designed” for making unauthorised replicas or have limited other commercial uses) but these distinctions are notoriously subject to interpretation.

Small startups venturing into AI image generation could find themselves caught in expensive legal battles based on flimsy allegations long before they have a chance to establish themselves. Meanwhile, tech giants with armies of lawyers can better weather such storms, potentially entrenching their dominance.

Anyone who’s dealt with YouTube’s Content ID system or similar copyright filtering tools knows how frustratingly imprecise they can be. These systems routinely flag legitimate content, such as musicians performing their own songs or creators using material under fair dealing provisions.

The NO FAKES Act would effectively mandate similar filtering systems across the internet. While it includes carve-outs for parody, satire, and commentary, enforcing these distinctions algorithmically has proven virtually impossible.

“These systems often flag things that are similar but not the same,” the EFF explains, “like two different people playing the same piece of public domain music.”
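To see why this failure mode is so stubborn, consider a minimal sketch of fingerprint-overlap matching. This is not how any real platform’s filter is implemented; the note-sequence “fingerprint”, the Jaccard score, and the threshold below are invented purely for illustration. The point it demonstrates is the one the EFF raises: two independent recordings of the same public-domain melody share almost all of their underlying content, so a similarity threshold flags them as the same work.

```python
# Toy illustration only: fingerprint-overlap matching on note sequences.
# Real systems fingerprint acoustic features, but the false-positive
# mechanism (high overlap between genuinely different recordings) is analogous.

from hashlib import sha256


def fingerprint(notes: list[str], n: int = 4) -> set[str]:
    """Hash every n-gram of a note sequence into a set of fingerprint tokens."""
    return {
        sha256("|".join(notes[i:i + n]).encode()).hexdigest()
        for i in range(len(notes) - n + 1)
    }


def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard overlap between two fingerprint sets."""
    return len(a & b) / len(a | b) if a | b else 0.0


# Two *different* performers playing the same public-domain melody:
performance_a = ["E", "D#", "E", "D#", "E", "B", "D", "C", "A"]
performance_b = ["E", "D#", "E", "D#", "E", "B", "D", "C", "A", "E"]

score = similarity(fingerprint(performance_a), fingerprint(performance_b))
MATCH_THRESHOLD = 0.5  # hypothetical cut-off a filter might use

print(f"similarity = {score:.2f}")
if score >= MATCH_THRESHOLD:
    print("Flagged as a match, even though these are different recordings.")
```

The sketch deliberately leaves out everything a filter would need to honour parody, satire, or commentary carve-outs, because overlap scores carry no information about intent or context, which is precisely why encoding those legal distinctions algorithmically has proven so difficult.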

For smaller platforms without the resources of the tech giants, implementing such filters could prove prohibitively expensive. The likely outcome? Many would simply over-censor to avoid legal risk.

One might expect major tech companies to oppose such sweeping regulation, yet many have remained conspicuously quiet. Some industry observers suggest this isn’t coincidental: established giants can more easily absorb compliance costs that would crush smaller competitors.

“It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES,” the EFF notes.

This pattern repeats throughout the history of tech regulation: what appears to be regulation reining in Big Tech often ends up cementing their market position by creating barriers too costly for newcomers to overcome.

NO FAKES Act threatens anonymous speech

Tucked away in the legislation is another troubling provision that could expose anonymous internet users based on mere allegations. The bill would allow anyone to obtain a subpoena from a court clerk – without judicial review or evidence – forcing services to reveal identifying information about users accused of creating unauthorised replicas.

History shows such mechanisms are ripe for abuse. Critics with valid points can be unmasked and potentially harassed when their commentary includes screenshots or quotes from the very people trying to silence them.

This vulnerability could have a profound effect on legitimate criticism and whistleblowing. Imagine exposing corporate misconduct only to have your identity revealed through a rubber-stamp subpoena process.

This push for additional regulation seems odd given that Congress recently passed the Take It Down Act, which already targets images involving intimate or sexually explicit content. That legislation itself raised privacy concerns, particularly around monitoring encrypted communications.

Rather than assess the impacts of existing legislation, lawmakers seem determined to push forward with broader restrictions that could reshape internet governance for decades to come.

The coming weeks will prove critical as the NO FAKES Act moves through the legislative process. For anyone who values internet freedom, innovation, and balanced approaches to emerging technology challenges, this bears close watching indeed.
