Regulation
Washington State Wants AI Companies to Build Their Own Lie Detectors
Consumer Reports backs a bill requiring tech giants to offer free tools that identify AI-generated content, but warns of privacy loopholes that could turn detection into surveillance.

When a Washington state representative testified about HB 1170 last week, they framed the bill as consumer protection against deceptive AI content. The legislation would force large AI providers (those with over 1 million users) to offer free detection tools and to embed both visible and hidden markers in generated content. The approach resembles nutrition labels for synthetic media.
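The dual-marker idea is easier to picture with a toy example. The Python sketch below is purely illustrative: the label wording, the zero-width-character scheme, and the provider ID are assumptions, not anything specified in HB 1170 (real deployments would more likely rely on standards like C2PA metadata for images or model-level text watermarking).

```python
# Hypothetical sketch of the bill's two disclosure layers for text output.
# Label wording, marker scheme, and provider_id are illustrative assumptions,
# not specifications from HB 1170.

MANIFEST_LABEL = "[AI-generated content]"      # visible "manifest" disclosure
ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}    # zero-width chars carry hidden bits

def embed_disclosures(text: str, provider_id: str = "example-provider") -> str:
    """Prepend a visible label and append a zero-width 'latent' marker."""
    bits = "".join(f"{byte:08b}" for byte in provider_id.encode())
    latent = "".join(ZERO_WIDTH[b] for b in bits)
    return f"{MANIFEST_LABEL} {text}{latent}"

def detect_latent_marker(text: str) -> str | None:
    """Recover the hidden provider ID, if the marker survived intact."""
    bits = "".join("0" if ch == "\u200b" else "1"
                   for ch in text if ch in ("\u200b", "\u200c"))
    usable = len(bits) - len(bits) % 8
    if usable == 0:
        return None
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode(errors="replace")

labeled = embed_disclosures("The quarterly report shows steady growth.")
print(detect_latent_marker(labeled))  # -> example-provider
```

The visible label satisfies the "manifest" requirement a reader can see; the invisible characters stand in for the "latent" layer a detection tool would scan for.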
Consumer Reports threw its weight behind the legislation this week, but with significant caveats. The advocacy group wants lawmakers to close what it calls a "technically feasible" loophole that could let companies dodge detection requirements. They're also pushing for stricter privacy standards around how detection tools collect and use data—a concern that highlights the surveillance potential baked into any system designed to scan content at scale.
The bill treats violations as unfair trade practices, giving it enforcement teeth through existing consumer protection laws. Staff presenting to the House Committee on Technology, Economic Development & Veterans positioned it as aligned with emerging federal and EU standards, though those comparisons may be optimistic given the fragmented state of AI regulation globally.
CR's formal testimony, submitted January 14, draws parallels to California's SB 942 while requesting specific amendments. The group wants "manifest disclosures"—the visible labels users see—to explicitly state content is AI-generated rather than vague language about "synthetic" media. They're also concerned about licensing gaps that could let third-party developers skip compliance entirely.
The technical challenges are real. During committee hearings, tech industry representatives argued that reliable detection remains elusive as models evolve. Watermarking schemes get stripped by compression. Statistical detection methods produce false positives. The bill's requirement for "latent" disclosures—hidden markers embedded in content—assumes a technical stability that doesn't yet exist.
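To make that fragility concrete: the hypothetical zero-width marker sketched above does not survive even trivial text sanitization of the kind many platforms apply to pasted or uploaded content. This is an assumption about one toy scheme, not a claim about any provider's actual watermarking; image watermarks face the analogous problem when lossy re-encoding discards metadata.

```python
import re

def strip_nonprinting(text: str) -> str:
    # Strip zero-width and other non-printing characters, as many platforms
    # do when sanitizing user-submitted text. Any latent marker encoded in
    # such characters (like the sketch above) is erased in the process.
    return re.sub(r"[\u200b-\u200f\u2060\ufeff]", "", text)

watermarked = "Quarterly results\u200b\u200c\u200b\u200c look strong."
print(strip_nonprinting(watermarked))  # marker characters are gone
# Running the earlier detect_latent_marker() on the stripped text returns None.
```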
Students testified in support, describing encounters with deepfakes and manipulated content on campus. Their testimony reflects growing anxiety about synthetic media in educational settings, though the bill's focus on providers with over 1 million users would exempt many academic and research tools.
The privacy implications run deeper than CR's testimony suggests. Any detection system requires analyzing content, which means building infrastructure to scan, flag, and potentially report synthetic media. The bill doesn't specify data retention limits or address cross-platform tracking possibilities.
The hearing covered companion legislation on AI chatbots and algorithmic discrimination, signaling Washington's broader regulatory ambitions. The state appears to be betting it can move faster than federal lawmakers, though that calculation assumes tech companies won't geo-fence features or pull services from non-compliant states.
Large AI providers with over 1 million users would need to offer free detection tools within months of the bill passing. Both visible labels and hidden technical markers would be mandatory for generated content, with violations triggering consumer protection enforcement and potential fines. Third-party licensing arrangements remain a regulatory gray zone, and, according to consumer advocates, privacy protections around detection-tool data collection need strengthening.
The House committee continues reviewing amendments through January. If the bill passes, Washington would join California in requiring AI content labeling, though implementation timelines and technical specifications remain fluid. Whether requiring companies to police their own outputs creates more problems than it solves remains an open question.


