
California AG Builds AI Enforcement Unit While Pressing xAI on Explicit Images

February 20, 2026 | By Megaton Editorial

The state's top prosecutor creates dedicated oversight team as investigation into Musk's Grok chatbot intensifies over non-consensual sexual content generation.


California Attorney General Rob Bonta sent a cease-and-desist letter to xAI this week demanding confirmation that Grok, the company's chatbot, has stopped generating non-consensual sexually explicit images. The AG's office says the capability may have produced illegal content depicting both adults and minors. The enforcement action, revealed Tuesday alongside plans for a new AI accountability unit within the Department of Justice, marks California's most aggressive move yet to regulate AI systems through existing consumer protection laws rather than wait for federal action.

The timing appears strategic. With federal AI regulation stalled in congressional gridlock, California is positioning itself as the de facto national watchdog, a role it has played before with privacy laws and emissions standards. This time, the state is targeting specific technical capabilities rather than broad business practices, suggesting a new enforcement playbook that will influence how AI companies design their systems.

According to multiple outlets, Bonta's office has been investigating xAI since discovering that Grok could generate explicit images without the consent checks that competitors like OpenAI and Anthropic have implemented. The AI Innovator reports that the AG criticized xAI for deflecting responsibility despite claims of implementing safeguards.

The investigation centers on a particularly thorny technical problem: how to prevent image generation models from creating non-consensual sexual content while preserving legitimate artistic uses. Most major AI companies have opted for heavy-handed filtering that blocks entire categories of prompts. xAI appears to have taken a more permissive approach, though the company has not publicly detailed its safety measures.

"Stopping future harm does not absolve companies of liability for past actions," Bonta stated, according to News.az, signaling that the investigation could extend beyond a simple cease-and-desist to potential penalties for content already generated.

The new AI accountability unit will operate within the existing DOJ structure, focusing on monitoring AI systems for safety and legal compliance. Unlike proposed federal frameworks that would create new regulatory categories, California's approach leverages existing consumer protection and criminal statutes. This strategy could prove faster to implement but harder for companies to predict.


This mirrors California's previous tech regulation efforts. When federal privacy legislation stalled, the state passed CCPA. When emissions standards lagged, California set its own. Now, with AI regulation similarly gridlocked at the federal level, the state is again filling the void, this time with enforcement actions rather than new legislation.

The xAI investigation also highlights a growing divide in the AI industry over content moderation. While OpenAI and Anthropic have implemented extensive filtering systems that sometimes frustrate legitimate users, newer entrants like xAI have marketed themselves as less restrictive alternatives. That positioning may now carry legal risk.

Bonta emphasized that federal regulatory gridlock necessitates state action, according to CNA. The AG's office intends to establish California as a primary AI watchdog in the absence of comprehensive federal rules, TrustFinance reports.

The investigation raises concerns about retroactive liability. If xAI's systems generated illegal content before implementing safeguards, the company could face penalties for past violations even if current systems are compliant. This approach, holding companies accountable for their models' entire operational history, could fundamentally alter how AI companies approach product launches.

California's enforcement strategy bypasses the need for new AI-specific legislation by applying existing consumer protection and criminal laws to AI systems. Companies marketing uncensored or less restrictive AI models may face heightened legal scrutiny for content their systems generate. The state's approach could create a patchwork of enforcement actions that vary by jurisdiction, complicating compliance for AI companies operating nationally. Retroactive liability for past model outputs introduces new risks for companies that iterate quickly on safety measures. And the investigation's focus on technical capabilities rather than business practices suggests regulators are developing a more sophisticated understanding of AI systems.

The AG's office has not disclosed a timeline for the xAI investigation or when the new accountability unit will be fully operational. Global Leaders Insights reports the unit will monitor AI systems broadly, not just focus on content generation, suggesting California's enforcement appetite extends well beyond the current xAI probe.

Whether xAI has actually modified Grok's capabilities in response to the cease-and-desist remains unclear. The company may be challenging the AG's authority to regulate AI-generated content. xAI has not responded publicly to the investigation, and Bonta's demand for confirmation suggests ongoing uncertainty about Grok's current state. If California succeeds in forcing modifications to a major AI model through existing law enforcement tools, it could trigger a wave of similar state-level actions, creating the regulatory fragmentation the tech industry has long feared.