
NO FAKES Act – Online Censorship?

 

Are our freedom of expression and access to video AI tools under threat?

The Electronic Frontier Foundation (EFF) says the revised version of the NO FAKES Act, reintroduced in April, would be a disaster for free speech and innovation if signed into law.
The bill addresses valid concerns about AI-generated audio and visual replicas of people in the wrong way, the EFF argues: among other purported flaws, it establishes a new intellectual property right over one's image rather than a privacy right that would deter unauthorized AI replicas.

The new right would incentivize a market for monetizing the simulation of dead celebrities and would lead to more litigation, the EFF argues.

And the associated takedown requirement, the rights group claims, would put the distribution of AI software tools at risk, encourage overly broad content removal, make it easier to unmask online content creators, and cement the power of incumbents through compliance costs that startups couldn't easily afford.

Critics fear the revised NO FAKES Act has morphed from targeted protection against AI deepfakes into a grant of sweeping censorship powers.

What began as a seemingly reasonable attempt to tackle AI-generated deepfakes has snowballed into something far more troubling, according to digital rights advocates. The much-discussed Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act – originally aimed at preventing unauthorized digital replicas of people – now threatens to fundamentally alter how the internet functions.

The bill’s expansion has set alarm bells ringing throughout the tech community. It’s gone well beyond simply protecting celebrities from fake videos to potentially creating a sweeping censorship framework. 

 

From sensible safeguards to a sledgehammer approach


The initial idea wasn’t entirely misguided: to create protections against AI systems generating fake videos of real people without permission. We’ve all seen those unsettling deepfakes circulating online.

But rather than crafting narrow, targeted measures, lawmakers have opted for what the Electronic Frontier Foundation calls a “federalised image-licensing system” that goes far beyond reasonable protections.

“The updated bill doubles down on that initial mistaken approach,” the EFF notes, “by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them.”

What’s particularly worrying is the NO FAKES Act’s requirement for nearly every internet platform to implement systems that would not only remove content after receiving takedown notices but also prevent similar content from ever being uploaded again. Essentially, it’s forcing platforms to deploy content filters that have proven notoriously unreliable in other contexts. 

Mandatory Takedown Systems: Internet platforms would be required to remove content upon notice and implement filters to prevent re-uploads, similar to flawed copyright filters.

Tool Targeting: The act targets tools used to create unauthorized images, potentially stifling innovation by giving rights-holders veto power over new technologies.

Weak Safeguards: The notice and takedown system lacks sufficient safeguards, leading to potential over-censorship and abuse. Carve-outs for parody and satire offer little comfort to those unable to afford legal challenges.

 

NO FAKES Act threatens anonymous speech


Tucked away in the legislation is another troubling provision that could expose anonymous internet users based on mere allegations. The bill would allow anyone to obtain a subpoena from a court clerk – without judicial review or evidence – forcing services to reveal identifying information about users accused of creating unauthorized replicas.

History shows such mechanisms are ripe for abuse: critics making valid points could be unmasked and potentially harassed simply because their commentary includes screenshots or quotes from the very people trying to silence them.

This vulnerability could have a profound effect on legitimate criticism and whistleblowing. Imagine exposing corporate misconduct only to have your identity revealed through a rubber-stamp subpoena process.

This push for additional regulation seems odd given that Congress recently passed the Take It Down Act, which already targets images involving intimate or sexual content. That legislation itself raised privacy concerns, particularly around monitoring encrypted communications.

Rather than assess the impacts of existing legislation, lawmakers seem determined to push forward with broader restrictions that could reshape internet governance for decades to come.

The coming weeks will prove critical as the NO FAKES Act moves through the legislative process. For anyone who values internet freedom, innovation, and balanced approaches to emerging technology challenges, this bears close watching indeed. 

Source 

It seems people may soon be unable to create humorous video shorts featuring the likenesses of famous individuals, raising serious concerns about freedom of expression.
Fan-made videos inspired by iconic franchises like Star Wars, Star Trek, and Marvel may also be at risk, as major studios have already begun suing AI companies such as Midjourney.
