Regulation
UK Police Chief Blames Microsoft Copilot for Fabricating Soccer Match in Fan Ban Decision
A parliamentary inquiry reveals West Midlands Police used an AI hallucination as evidence to ban Israeli soccer fans, sparking calls for the chief constable's resignation.

The intelligence report sat on Chief Constable Craig Guildford's desk with a damning assessment: Maccabi Tel Aviv fans posed a "high risk" of disorder, a conclusion that cited past violence at a match against West Ham United. The match never happened. Microsoft's Copilot had invented it, and West Midlands Police had treated the fabrication as fact.
The revelation emerged during a parliamentary hearing this week after Guildford admitted to MPs that his force had relied on AI-generated intelligence to justify banning Israeli fans from attending a match in Birmingham. The admission contradicted his earlier testimony, in which he claimed officers had simply "Googled" the information, according to PC Gamer. Home Secretary Shabana Mahmood has since declared "no confidence" in Guildford's leadership, the Evening Standard reports.
The incident began when West Midlands Police needed to assess security risks for an upcoming match. According to Computing UK, officers turned to Microsoft Copilot to generate an intelligence briefing. The AI tool fabricated a non-existent fixture between West Ham and Maccabi Tel Aviv, complete with descriptions of fan disorder that never occurred. This phantom match became a cornerstone of the police assessment that Maccabi fans presented an elevated threat.
"This was a hallucination by Microsoft Copilot," Guildford told Parliament, using the technical term for when AI models generate plausible-sounding but false information. The chief constable apologized for what he called a "failure of leadership," according to Ynetnews.
The timeline reveals a pattern of obfuscation. When initially questioned about the intelligence sources, Guildford claimed officers had conducted standard web searches. Only under parliamentary scrutiny did he admit the force had relied on generative AI without verification protocols. The Evening Standard reports that an internal review found multiple inaccuracies in the intelligence report beyond the fabricated match, suggesting what investigators called "confirmation bias" in the assessment process.
Microsoft has not responded to requests for comment about Copilot's role in the incident, Windows Central notes.
The controversy arrives as UK law enforcement agencies experiment with AI tools for intelligence gathering and threat assessment. Unlike structured databases or verified intelligence feeds, generative AI models like Copilot synthesize responses from training data that can include social media posts, news articles, and unverified sources. The result functions as a plausibility engine rather than a fact-checking system.
When asked about specific historical events, these models perform pattern matching. They can blend real incidents with fabricated details, producing outputs that appear authoritative but contain hallucinations. The term "hallucination" itself obscures the fundamental unreliability of using statistical language models for factual queries.
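The missing step in the West Midlands case was any check of the generated claims against an authoritative record. As a minimal sketch of what such a guardrail could look like (the fixture data, function names, and data structures here are hypothetical illustrations, not drawn from any real police system or Microsoft API), a match asserted in an AI-generated briefing would be treated as an unverified claim until confirmed against a structured fixtures record:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FixtureClaim:
    """A match that an AI-generated briefing asserts took place."""
    home: str
    away: str
    season: str

# Hypothetical authoritative record of fixtures that actually occurred,
# e.g. an official league feed. A hallucinated fixture is caught precisely
# because it never appears in any such record.
VERIFIED_FIXTURES: set[tuple[str, str, str]] = {
    ("West Ham United", "Arsenal", "2023-24"),  # illustrative entry only
}

def verify_claim(claim: FixtureClaim) -> bool:
    """Accept a claim only if it appears in the verified record.

    Anything absent from the record stays unverified: plausible-sounding
    output from a language model is never promoted to fact on its own.
    """
    return (claim.home, claim.away, claim.season) in VERIFIED_FIXTURES

claim = FixtureClaim("West Ham United", "Maccabi Tel Aviv", "2023-24")
if not verify_claim(claim):
    print(f"UNVERIFIED: {claim.home} vs {claim.away} ({claim.season}) "
          "- exclude from intelligence assessment")
```

The point of the sketch is structural rather than technical: generative output enters the pipeline only as a claim to be confirmed against ground truth, never as evidence in its own right.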
The West Midlands Police incident echoes earlier controversies around algorithmic decision-making in policing, from predictive systems that reinforce racial biases to facial recognition deployments that misidentify suspects. But this case marks a new threshold: a major police force acknowledging it based operational decisions on fabricated AI output.
The parliamentary inquiry has expanded beyond this single incident. MPs are now examining whether other UK police forces have integrated generative AI into intelligence workflows without proper oversight, according to WebProNews. The investigation comes as Microsoft markets Copilot to government agencies as a productivity tool, though the company's documentation warns against using it for critical decision-making without human verification.
For West Midlands Police, the damage extends beyond one erroneous fan ban. The force now faces questions about every intelligence assessment it has produced using AI tools. Guildford's admission has triggered what Slashdot describes as "significant safety and regulatory concerns" about unverified AI use in law enforcement.
Police forces using generative AI for intelligence reports may be building cases on fabricated evidence. Microsoft Copilot and similar tools can produce authoritative-sounding false information indistinguishable from real intelligence. Current UK regulations do not address generative AI use in policing decisions, and parliamentary oversight only caught this error after the fact, suggesting other incidents may go undetected. The hallucination problem is inherent to how large language models work, not a bug that can be patched.
The Home Office has announced a review of AI use across all UK police forces, though no timeline or scope has been specified. West Midlands Police continues using what it calls "digital tools" for intelligence gathering, with Guildford remaining in post despite calls for his resignation.


