
Regulation

EU Proposes an "AI Icon" for Deepfakes as Mandatory Labeling Deadline Approaches

January 19, 2026 | By Megaton AI

The European Commission's draft Code of Practice establishes a two-tier system for synthetic media transparency, requiring machine-readable "marking" and human-visible "labeling" by August 2026.


A standardized "AI" icon on every deepfake video across European platforms—that's what the European Commission's first draft Code of Practice envisions. Released this week for public feedback, the 47-page document proposes specific technical requirements for watermarking and labeling synthetic content. Comments are due by January 23.

The code creates a "multilayered approach" to AI transparency: providers must embed machine-readable watermarks into their models' outputs, while deployers—anyone using these tools to create content—must add visible labels for deepfakes and certain text outputs. This dual system addresses a core problem: making AI-generated content identifiable to both automated detection systems and human viewers before binding regulations take effect in August 2026.

"Marking" refers to technical watermarks embedded in AI outputs—metadata that detection tools can read but humans cannot see. "Labeling" means visible disclosures users encounter: an icon, a text overlay, or a disclaimer signaling synthetic origin. According to TechPolicy.Press's analysis, this distinction distributes responsibility across the AI supply chain.
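The marking/labeling split can be pictured as two independent transformations applied to the same asset: one by the model provider, one by the deployer. The sketch below is purely illustrative; the field names, functions, and disclosure wording are assumptions for exposition, not anything specified in the draft code.

```python
# Minimal sketch of the draft's two transparency layers, modeled as data.
# All field names here are invented for illustration.

def mark(asset: dict) -> dict:
    """Provider-side 'marking': machine-readable metadata that
    detection tools can parse but viewers never see."""
    asset = dict(asset)
    asset["metadata"] = {
        **asset.get("metadata", {}),
        "synthetic": True,
        "generator": asset.get("generator", "unknown"),
    }
    return asset

def label(asset: dict) -> dict:
    """Deployer-side 'labeling': a human-visible disclosure such as
    an 'AI' icon or overlay text."""
    asset = dict(asset)
    asset["overlay"] = "AI-generated content"
    return asset

clip = {"generator": "example-video-model", "metadata": {}}
clip = label(mark(clip))
assert clip["metadata"]["synthetic"] and "overlay" in clip
```

The point of the split is visible in the code: a detector only inspects `metadata`, a viewer only sees `overlay`, and the two obligations fall on different actors in the supply chain.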

The proposed "AI" icon would standardize disclosure across platforms, though the draft acknowledges implementation challenges. Video deepfakes would require persistent visual indicators during playback, audio deepfakes would need audible disclaimers, and text outputs discussing public interest topics—elections, health, safety—would have to include clear statements of synthetic origin.

Exceptions exist for satire, artistic expression, and educational content, according to Captain Compliance's guide to the new rules. These carve-outs come with conditions: creators must still disclose AI use, with more flexibility in presentation.
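Taken together, the modality-specific rules and the carve-outs amount to a small decision table. A hedged sketch of that logic, with category names and disclosure strings invented for illustration rather than taken from the draft:

```python
# Illustrative mapping of the disclosure rules as described in this
# article; the draft code specifies no such schema or exact wording.

DISCLOSURE_RULES = {
    "video_deepfake": "persistent visual indicator during playback",
    "audio_deepfake": "audible disclaimer",
    "public_interest_text": "clear statement of synthetic origin",
}

def required_disclosure(content_type: str, is_satire: bool = False) -> str:
    rule = DISCLOSURE_RULES.get(content_type, "no labeling obligation")
    if is_satire and content_type in DISCLOSURE_RULES:
        # Carve-outs still mandate disclosing AI use, but allow
        # flexibility in how the disclosure is presented.
        return "AI use disclosed, flexible presentation"
    return rule

assert required_disclosure("video_deepfake") == \
    "persistent visual indicator during playback"
assert required_disclosure("video_deepfake", is_satire=True) == \
    "AI use disclosed, flexible presentation"
```

Note that the satire branch does not return "no obligation": under the carve-outs as described, disclosure itself is still required, only its form relaxes.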


The timing is deliberate. Article 50 of the EU AI Act requires transparency measures for synthetic content starting August 2, 2026. This voluntary code bridges current practices and those mandatory requirements, giving companies roughly seven months to implement systems before enforcement begins.

Open-weight models face particular scrutiny. The draft requires them to implement "structural markings"—watermarks built into the model architecture itself, according to Shibolet & Co.'s analysis. This poses technical challenges for models designed to be modified by users.

The code adopts "proportionality" for smaller companies. SMEs would have reduced requirements, though the draft does not specify exact thresholds or exemptions. Pearl Cohen notes that the Commission appears to be balancing comprehensive coverage with implementation feasibility.

The draft lacks specific penalties for non-compliance before August 2026, detailed technical specifications for watermarking methods, and clarity on cross-border enforcement. The Commission declined to provide additional details beyond the published draft.

Scalevise reports that parallel copyright provisions in the AI Act will require companies to publish summaries of their training data, creating additional transparency obligations beyond output labeling. These rules aim to protect creators' rights while ensuring users can distinguish human from synthetic content.

Video creators must prepare for persistent visual indicators on all deepfake content, including parody. Platform operators need detection systems capable of reading multiple watermark standards. Open-source model developers face architectural changes to embed permanent markings. Content using AI assistance rather than full synthesis requires different disclosure levels, and artistic or satirical exceptions still mandate some form of AI disclosure.

The Commission expects to finalize the code by May or June 2026, according to Creatives Unite, leaving a narrow window before mandatory compliance begins. Companies implementing early will shape the technical standards that could define synthetic media disclosure globally—the EU's rules often become de facto international standards, as GDPR demonstrated with privacy.
