Regulation
EU Proposes an "AI" Icon for Deepfakes as Mandatory Labeling Deadline Approaches
The European Commission's draft Code of Practice establishes a two-tier system for synthetic media transparency, requiring machine-readable "marking" and human-visible "labeling" by August 2026.

A standardized "AI" icon on every deepfake video across European platforms—that's what the European Commission's first draft Code of Practice envisions. Released this week for public feedback, the 47-page document proposes specific technical requirements for watermarking and labeling synthetic content. Comments are due by January 23.
The code creates a "multilayered approach" to AI transparency: providers must embed machine-readable watermarks into their models' outputs, while deployers—anyone using these tools to create content—must add visible labels for deepfakes and certain text outputs. This dual system addresses a core problem: making AI-generated content identifiable to both automated detection systems and human viewers before binding regulations take effect in August 2026.
"Marking" refers to technical watermarks embedded in AI outputs—metadata that detection tools can read but humans cannot see. "Labeling" means visible disclosures users encounter: an icon, a text overlay, or a disclaimer signaling synthetic origin. According to TechPolicy.Press's analysis, this distinction distributes responsibility across the AI supply chain.
The proposed "AI" icon would standardize disclosure across platforms, though the draft acknowledges implementation challenges. Video deepfakes would require persistent visual indicators during playback, audio deepfakes would need audible disclaimers, and text outputs on public-interest topics such as elections, health, and safety would have to state their synthetic origin clearly.
Exceptions exist for satire, artistic expression, and educational content, according to Captain Compliance's guide to the new rules. These carve-outs come with conditions: creators must still disclose AI use, with more flexibility in presentation.
The timing is deliberate. Article 50 of the EU AI Act requires transparency measures for synthetic content starting August 2, 2026. This voluntary code bridges current practices and those mandatory requirements, giving companies roughly seven months to implement systems before enforcement begins.
Open-weight models face particular scrutiny. The draft requires them to implement "structural markings"—watermarks built into the model architecture itself, according to Shibolet & Co.'s analysis. This poses technical challenges for models designed to be modified by users.
The code adopts "proportionality" for smaller companies. SMEs would have reduced requirements, though the draft does not specify exact thresholds or exemptions. Pearl Cohen notes that the Commission appears to be balancing comprehensive coverage with implementation feasibility.
The draft lacks specific penalties for non-compliance before August 2026, detailed technical specifications for watermarking methods, and clarity on cross-border enforcement. The Commission declined to provide additional details beyond the published draft.
Scalevise reports that parallel copyright provisions in the AI Act will require companies to publish summaries of their training data, creating additional transparency obligations beyond output labeling. These rules aim to protect creators' rights while ensuring users can distinguish human from synthetic content.
Video creators must prepare for persistent visual indicators on all deepfake content, including parody. Platform operators need detection systems capable of reading multiple watermark standards. Open-source model developers face architectural changes to embed permanent markings. Content using AI assistance rather than full synthesis requires different disclosure levels, and artistic or satirical exceptions still mandate some form of AI disclosure.
The Commission expects to finalize the code by May or June 2026, according to Creatives Unite, leaving a narrow window before mandatory compliance begins. Companies implementing early will shape the technical standards that could define synthetic media disclosure globally—the EU's rules often become de facto international standards, as GDPR demonstrated with privacy.


