
Regulation

Senate Passes Bill Letting Deepfake Victims Sue for Up to $250,000

January 16, 2026 | By Megaton AI

The DEFIANCE Act cleared the Senate unanimously, creating a federal right to sue creators and distributors of nonconsensual sexually explicit AI forgeries.


The U.S. Senate passed legislation giving victims of deepfake abuse a direct path to federal court. The DEFIANCE Act, which passed without a single objection on January 13, allows people whose likenesses appear in nonconsensual sexually explicit AI-generated content to sue creators and platforms for damages starting at $150,000.

According to The 19th, momentum for the bill accelerated after reports that xAI's Grok chatbot was generating nonconsensual intimate imagery of real people, at a moment when generative AI tools can produce convincing forgeries in seconds. The Senate's unanimous vote suggests even divided lawmakers recognize that existing laws haven't kept pace with how easily someone can now manufacture explicit content of anyone with a public photo.

The bill, sponsored by Sen. Dick Durbin with co-sponsor Sen. Lindsey Graham, fills a specific gap. While Congress criminalized the creation and distribution of such content through the Take It Down Act in 2025, victims had no civil remedy—no way to seek damages directly from perpetrators. The DEFIANCE Act changes that.

Victims can sue for a minimum of $150,000 in federal court. According to PCMag, that figure jumps to $250,000 if the deepfake is linked to harassment or stalking. The bill targets both creators and hosts, meaning platforms that knowingly keep such content online could face liability alongside the original perpetrator.

"These victims deserve their day in court," Durbin stated, according to the Washington Times, emphasizing the psychological and reputational harm caused by digital forgeries.

Tech Policy Press notes that controversies involving Grok underscore why civil liability matters—criminal penalties alone haven't deterred bad actors, particularly when AI tools can generate explicit content faster than platforms can remove it. The phenomenon has a name in harm-reduction circles: the "liar's dividend," where the mere possibility of AI manipulation erodes trust in all media, authentic or forged.

The bill's scope appears carefully calibrated. It addresses "intimate" digital forgeries—sexually explicit content—rather than all deepfakes. This narrower focus likely helped it pass unanimously, avoiding broader First Amendment debates about political satire or artistic expression.

Enforcement presents challenges. Identifying anonymous creators remains difficult. Platforms will likely argue they can't review every piece of content. And the international nature of the internet means many perpetrators operate beyond U.S. jurisdiction.

The House must now decide whether to take up the legislation. Given the Senate's unanimous support and the bill's bipartisan sponsorship, passage seems likely, though the timeline remains uncertain. If enacted, the framework gives victims federal standing to sue both creators and distributors for a minimum of $150,000 in damages, rising to $250,000 when the forgery is linked to harassment or stalking, and exposes platforms that knowingly host such content to liability.

What happens next depends on the House's legislative calendar and whether tech platforms mobilize opposition. The Senate's unanimous vote signals broad agreement that current protections have failed to keep pace with generative AI's capacity for harm. Several states have already passed their own legislation. The open question is whether federal civil liability will change the risk calculation for platforms that have, until now, treated nonconsensual intimate content as a moderation problem rather than a legal exposure.
