Regulation
Washington State Weighs AI Disclosure Rules as Consumer Groups Push for Stronger Protections
Consumer Reports warns proposed legislation contains critical loopholes that could undermine transparency requirements for synthetic media.

When Washington legislators gathered last week to debate HB 1170, a bill requiring AI companies to label synthetic content, they faced a familiar challenge: how to make disclosure rules that actually work. The proposed law would mandate that AI providers with over 1 million monthly users include both visible watermarks and hidden metadata in generated content, plus offer free detection tools to consumers.
Consumer Reports, testifying before the House Committee on Technology, Economic Development & Veterans, identified several gaps that could render these protections meaningless. The advocacy group's analysis reveals how even well-intentioned transparency laws can fail when they meet the messy reality of AI deployment.
The bill's core mechanism seems straightforward. AI systems would need to embed two types of disclosure: manifest (visible watermarks or labels) and latent (metadata readable by detection tools). Companies would also have to provide free tools for consumers to verify whether content is synthetic.
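To make the two channels concrete, here is a minimal sketch of what dual disclosure could look like for a generated PNG, using Pillow. The field names ("ai-generated", "provenance") and the plain-text metadata scheme are illustrative assumptions, not anything the bill specifies; production systems would lean on tamper-resistant standards such as C2PA rather than easily stripped text chunks.

```python
# Sketch: stamping a generated image with both disclosure types the bill
# describes -- a visible label (manifest) and machine-readable metadata
# (latent). Field names are hypothetical, for illustration only.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def add_disclosures(image: Image.Image) -> tuple[Image.Image, PngInfo]:
    # Manifest disclosure: a watermark drawn directly onto the pixels.
    draw = ImageDraw.Draw(image)
    draw.text((10, image.height - 20), "AI-generated", fill="white")

    # Latent disclosure: metadata a detection tool can parse without
    # the label being visible to a casual viewer.
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")
    metadata.add_text("provenance", "example-model-v1")  # hypothetical field
    return image, metadata

img = Image.new("RGB", (512, 512), "navy")  # stand-in for model output
labeled, meta = add_disclosures(img)
labeled.save("output.png", pnginfo=meta)
```

The fragility of the latent channel is part of the policy problem: plain metadata survives only until the first screenshot or re-encode, which is one reason the detection-tool requirement carries so much weight.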
CR's testimony exposed a critical flaw in the licensing provision. The bill requires third-party licensees to have the "capability" to include disclosures—not to actually use them. "This creates an enforcement nightmare," the organization noted in its formal letter to legislators. A company could technically comply while never actually labeling a single piece of generated content.
The detection tools requirement raises its own concerns. CR warned that these tools could become surveillance infrastructure, potentially exposing consumer data about what content people are checking. The organization recommended explicit privacy protections, noting that California's similar SB 942 lacks these safeguards.
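CR's recommendation points toward client-side verification. The sketch below, continuing the hypothetical metadata scheme above, shows the privacy property at issue: the check reads the file locally, so no record of what a consumer is verifying ever leaves their device. A hosted detection tool, by contrast, would necessarily see every file a user submits.

```python
# Sketch: a detection check that runs entirely on the user's machine,
# reading the PNG text chunks written in the earlier example. Nothing is
# uploaded, so the tool cannot log what content people are checking.
from PIL import Image

def is_disclosed_synthetic(path: str) -> bool:
    with Image.open(path) as img:
        # PNG files expose text chunks via .text; a missing key means the
        # file carries no latent disclosure (or the metadata was stripped).
        return getattr(img, "text", {}).get("ai-generated") == "true"

print(is_disclosed_synthetic("output.png"))  # True for the file saved above
```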
Product exemptions present another vulnerability. The bill's current language could allow companies to claim broad exceptions for "products" versus "services," potentially exempting entire categories of AI applications from disclosure requirements.
Washington's bill closely follows California's SB 942, passed last year, suggesting states are racing to establish frameworks before federal action. The House committee scheduled an executive session for January 16, with members planning a technical work session to address feasibility concerns.
CR's comparison to California's law revealed both states struggling with the same fundamental tension: making rules specific enough to be enforceable but flexible enough to cover rapidly evolving technology. Neither state has solved the detection tool privacy problem.
The committee heard from various stakeholders during public hearings, with proponents arguing the bill provides necessary consumer protection tools. House Republicans noted the measure targets consumer-facing transparency rather than restricting AI development itself.
The enforcement mechanism remains unclear. Without specifying how violations would be detected or penalized, the bill risks becoming what CR implicitly suggests: a capability requirement that never translates to actual disclosure.
In brief:
- AI providers with more than 1 million monthly users would need to watermark generated content.
- Third-party licensees could comply without ever implementing disclosures.
- Detection tools might expose consumer data about content-verification behavior.
- Product exemptions could carve out entire categories of AI applications.
- Washington follows California's model but has not fixed its core weaknesses.
The committee's technical work session will determine whether these concerns get addressed before the bill advances. CR's testimony suggests the current draft would create an illusion of transparency: companies possessing disclosure capabilities they never deploy, detection tools that surveil users, and exemptions wide enough to exclude major AI applications. The question is whether the rules, if enacted, will mean anything in practice once the exemptions and loopholes are applied.