Regulation

UK Police Chief Blames Microsoft Copilot for Fabricating Soccer Match in Fan Ban Decision

January 15, 2026 | By Megaton AI

A parliamentary inquiry reveals West Midlands Police used an AI hallucination as evidence to ban Israeli soccer fans, sparking calls for the chief constable's resignation.

The intelligence report sat on Chief Constable Craig Guildford's desk with a damning assessment: Maccabi Tel Aviv fans posed a "high risk" of disorder, citing past violence at a match against West Ham United. The match never happened. Microsoft's Copilot had invented it, and West Midlands Police had treated the fabrication as fact.

The revelation emerged during a parliamentary hearing this week after Guildford admitted to MPs that his force had relied on AI-generated intelligence to justify banning Israeli fans from attending a match in Birmingham. The confession contradicted his earlier testimony claiming officers had simply "Googled" the information, according to PC Gamer. Home Secretary Shabana Mahmood has since declared "no confidence" in Guildford's leadership, the Evening Standard reports.

The incident began when West Midlands Police needed to assess security risks for an upcoming match. According to Computing UK, officers turned to Microsoft Copilot to generate an intelligence briefing. The AI tool fabricated a non-existent fixture between West Ham and Maccabi Tel Aviv, complete with descriptions of fan disorder that never occurred. This phantom match became a cornerstone of the police assessment that Maccabi fans presented an elevated threat.

"This was a hallucination by Microsoft Copilot," Guildford told Parliament, using the technical term for when AI models generate plausible-sounding but false information. The chief constable apologized for what he called a "failure of leadership," according to Ynetnews.

The timeline reveals a pattern of obfuscation. When initially questioned about the intelligence sources, Guildford claimed officers had conducted standard web searches. Only under parliamentary scrutiny did he admit the force had relied on generative AI without verification protocols. The Evening Standard reports that an internal review found multiple inaccuracies in the intelligence report beyond the fabricated match, suggesting what investigators called "confirmation bias" in the assessment process.

Microsoft has not responded to requests for comment about Copilot's role in the incident, Windows Central notes.

The controversy arrives as UK law enforcement agencies experiment with AI tools for intelligence gathering and threat assessment. Unlike structured databases or verified intelligence feeds, generative AI models like Copilot synthesize responses from training data that can include social media posts, news articles, and unverified sources. The result functions as a plausibility engine rather than a fact-checking system.

When asked about specific historical events, these models perform pattern matching rather than retrieval. They can blend real incidents with fabricated details, producing outputs that appear authoritative but contain hallucinations. The term itself obscures the fundamental unreliability of using statistical language models for factual queries.
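How pattern matching produces a fluent falsehood can be shown with a deliberately tiny model. The sketch below is not Copilot's architecture; it is a toy bigram chain, with an invented two-sentence corpus, that illustrates the general mechanism: when two true statements share connective words, a sampler can splice them into a grammatical sentence that appears in neither.

```python
import random

# Toy illustration only: a bigram "language model" that predicts each next
# word from word-pair statistics in its training text. The corpus sentences
# are hypothetical examples, not real fixtures.
corpus = [
    "West Ham played Arsenal in London last season .",
    "Maccabi Tel Aviv played Aston Villa in Birmingham last season .",
]

# Build the continuation table: word -> list of words that followed it.
model = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)

def generate(start, seed):
    """Sample a sentence by repeatedly picking a plausible next word."""
    random.seed(seed)
    out = [start]
    while out[-1] in model:
        out.append(random.choice(model[out[-1]]))
    return " ".join(out)

# Starting from "West", the chain can cross over at shared words like
# "played" or "in" and emit e.g. "West Ham played Aston Villa in
# Birmingham last season ." -- fluent, plausible, and never in the data.
print(generate("West", seed=1))
```

Every output is locally plausible (each word pair occurred in training), yet the whole sentence may be false; nothing in the sampling loop checks the claim against reality. That, at toy scale, is the plausibility engine.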

The West Midlands Police incident echoes earlier controversies around algorithmic decision-making in policing, from predictive systems that reinforce racial biases to facial recognition deployments that misidentify suspects. But this case marks a new threshold: a major police force acknowledging it based operational decisions on fabricated AI output.

The parliamentary inquiry has expanded beyond this single incident. MPs are now examining whether other UK police forces have integrated generative AI into intelligence workflows without proper oversight, according to WebProNews. The investigation comes as Microsoft markets Copilot to government agencies as a productivity tool, though the company's documentation warns against using it for critical decision-making without human verification.

For West Midlands Police, the damage extends beyond one erroneous fan ban. The force now faces questions about every intelligence assessment it has produced using AI tools. Guildford's admission has triggered what Slashdot describes as "significant safety and regulatory concerns" about unverified AI use in law enforcement.

Police forces using generative AI for intelligence reports may be building cases on fabricated evidence. Microsoft Copilot and similar tools can produce authoritative-sounding false information indistinguishable from real intelligence. Current UK regulations do not address generative AI use in policing decisions, and parliamentary oversight only caught this error after the fact, suggesting other incidents may go undetected. The hallucination problem is inherent to how large language models work, not a bug that can be patched.

The Home Office has announced a review of AI use across all UK police forces, though no timeline or scope has been specified. West Midlands Police continues using what it calls "digital tools" for intelligence gathering, with Guildford remaining in post despite calls for his resignation.
