Technology
California's New AI Safety Law Collides With Trump Deregulation Push
A month-old transparency mandate for frontier AI models faces federal preemption as developers scramble to comply with conflicting requirements.

On January 1, California's Transparency in Frontier AI Act went live, requiring any developer that trains models using more than 10^26 floating-point operations of compute to publish detailed safety frameworks and report incidents to the state. The law, known as SB 53, affects OpenAI, Anthropic, Google, and Meta, all of which maintain California operations. The Trump administration has already directed the Attorney General to challenge the law as inconsistent with a new federal policy of minimally burdensome AI regulation.
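For a sense of what that threshold means in practice, here is a back-of-the-envelope sketch using the widely cited approximation that dense-transformer training costs roughly 6 × parameters × tokens floating-point operations. The model sizes and token counts are hypothetical, chosen only to show which runs would cross the line.

```python
# Back-of-the-envelope check of whether a training run crosses
# SB 53's 10^26 floating-point-operation threshold.

THRESHOLD_FLOP = 1e26

def training_flop(params: float, tokens: float) -> float:
    """Approximate dense-transformer training compute: ~6 * N * D FLOPs."""
    return 6 * params * tokens

# Hypothetical runs, chosen only to bracket the threshold.
runs = {
    "70B params, 15T tokens":  training_flop(70e9, 15e12),   # ~6.3e24
    "400B params, 40T tokens": training_flop(400e9, 40e12),  # ~9.6e25
    "1T params, 30T tokens":   training_flop(1e12, 30e12),   # ~1.8e26
}

for name, flop in runs.items():
    status = "covered by SB 53" if flop > THRESHOLD_FLOP else "below threshold"
    print(f"{name}: {flop:.1e} FLOPs -> {status}")
```

On this rough math, today's largest disclosed training runs sit near the line, which is why the threshold captures only a handful of frontier developers.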
California passed SB 53 last year to fill what the Brookings Institution calls a federal regulatory vacuum on AI safety. That vacuum has since been replaced by active federal opposition to state-level mandates, creating potential preemption challenges that could invalidate the law entirely.

A companion statute, AB 2013, requires high-level summaries of training datasets, forcing transparency about copyrighted works, licensed data, and public domain material. Those disclosures could expose previously opaque licensing deals and scraping practices, potentially revealing that video models trained on online content lack proper licensing, or that premium datasets thought to be exclusive are actually shared across competitors. The transparency could trigger both copyright litigation and competitive intelligence gathering as companies reverse-engineer rivals' data strategies, and could mire frontier AI labs in years-long litigation.
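As an illustration of what a high-level summary might look like, the sketch below models a disclosure record covering the categories mentioned above. The field names and entries are hypothetical; AB 2013 defines its own required disclosure elements.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical shape of a high-level training-data summary. The field
# names are illustrative, not AB 2013's statutory disclosure elements;
# the record just shows the kind of information that could expose
# licensing deals and scraping practices once published.

@dataclass
class DatasetSummary:
    name: str
    source: str               # e.g. "public web crawl", "licensed vendor"
    contains_copyrighted: bool
    license_basis: str        # "licensed", "public domain", "fair-use claim"
    collected_through: str    # end of the collection window

summaries = [
    DatasetSummary("web-video-corpus", "public web crawl",
                   True, "fair-use claim", "2024-06"),
    DatasetSummary("stock-footage-set", "licensed vendor",
                   True, "licensed", "2025-01"),
]

print(json.dumps([asdict(s) for s in summaries], indent=2))
```

Even at this level of generality, a record like the first entry signals a fair-use bet on scraped video, which is exactly the kind of admission litigants would mine.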
The law also requires companies to maintain unredacted safety frameworks focused on preventing catastrophic risks, so that regulators can identify and penalize noncompliance.
While SB 53 and AB 2013 are now operational, the consumer-facing portions of California's AI transparency push have been delayed. SB 942's watermarking requirements, which would mandate detection tools and latent disclosures for AI-generated video, won't take effect until August 2, 2026, giving the industry six more months to work out the technical implementation. That August 2 deadline creates a technical fait accompli: if California successfully implements detection standards before federal preemption, those standards become the de facto national requirement, since video platforms can't maintain separate technical architectures for different jurisdictions.
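To make "latent disclosure" concrete, here is a toy sketch of the embed-then-detect round trip such a standard would require. It uses a naive least-significant-bit scheme on a single frame; this is emphatically not the mechanism SB 942 prescribes, and a real latent disclosure would have to survive compression and transcoding, which this one would not.

```python
import numpy as np

# Toy latent-disclosure round trip: embed an ASCII tag in the least
# significant bits of one frame, then detect it. NOT the mechanism
# SB 942 prescribes; it only shows the embed-then-verify loop a
# platform's detection tooling would need to support.

TAG = "AI-GENERATED"

def embed(frame: np.ndarray, tag: str = TAG) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    out = frame.copy().ravel()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return out.reshape(frame.shape)

def detect(frame: np.ndarray, tag: str = TAG) -> bool:
    n = len(tag.encode()) * 8
    bits = frame.ravel()[:n] & 1
    return np.packbits(bits).tobytes().decode(errors="replace") == tag

frame = np.random.randint(0, 256, size=(720, 1280, 3), dtype=np.uint8)
print(detect(embed(frame)))  # True
print(detect(frame))         # False for almost any unmarked frame
```

The hard engineering problem, and the one the six-month runway is meant to solve, is making a mark like this robust to re-encoding at platform scale.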
The federal challenge appears to be moving quickly. The administration has already begun identifying state regulations deemed inconsistent with national policy following the revocation of the Biden AI executive order. The Attorney General's office declined to comment on specific litigation plans. For now, AI developers must simultaneously navigate SB 53's safety requirements, AB 2013's training data disclosure mandate, and the California AI Transparency Act's looming watermarking requirements, set to take effect August 2.
Smaller developers below the 10^26-FLOP threshold gain a competitive advantage while larger players navigate compliance. That advantage may be temporary. If federal preemption succeeds, the regulatory burden disappears for everyone. If California prevails, smaller players will eventually hit the threshold as compute costs fall and model sizes grow, inheriting the same compliance costs that currently handicap their larger competitors. Furthermore, the 10^26-FLOP benchmark may prove to be the wrong metric for AI model capability as new architectures and frameworks come online.
Developers have until March 1 to submit their first quarterly incident reports under SB 53. Companies that file reports provide ammunition for federal preemption arguments, while those that don't face California enforcement actions that could survive even if the broader law is eventually struck down. Whether those reports ever see meaningful regulatory review depends on the speed of the federal courts, but the act of filing creates discoverable records that could surface in future litigation, making compliance a lasting liability regardless of the preemption outcome.
The watermarking requirements scheduled for August could become the real battleground. Technical standards are harder to preempt than reporting requirements, and California has six months to entrench them before any federal challenge concludes. Once video platforms implement detection infrastructure, the costs of reverting make federal preemption practically meaningless.