Regulation
Publishers Seek Court Sanctions Against OpenAI for Deleting Evidence
Major news organizations allege OpenAI destroyed millions of ChatGPT logs that were critical to their copyright infringement case, asking a federal judge to instruct jurors to assume the worst.

The Denver Post's lawyers discovered something was missing. Many of the 20 million ChatGPT conversation logs a federal judge had explicitly ordered OpenAI to preserve and produce had vanished. According to court filings from January 8, OpenAI allegedly substituted different logs after deleting the originals, leaving the publishers scrambling to prove their central claim: that ChatGPT reproduces and distorts their copyrighted reporting.
The dispute has become a test case for how courts will handle evidence preservation in AI litigation, where the primary proof of infringement exists as ephemeral chat logs that companies control entirely. The publishers—including The New York Times, Tribune Publishing, and MediaNews Group—are now asking for what lawyers call an "adverse inference instruction": telling a future jury to assume the deleted evidence would have damaged OpenAI's defense.
The timeline matters. On January 5, according to Bloomberg Law, a federal judge affirmed a magistrate's order requiring OpenAI to produce 20 million anonymized ChatGPT logs. The court rejected OpenAI's privacy arguments, ruling that the relevance to copyright claims outweighed those concerns. Within days, the publishers filed their sanctions motion, alleging the ordered logs had been deleted.
"This spoliation of evidence severely hampers their ability to prove the AI models are stealing and distorting their reporting," the Colorado Freedom of Information Coalition reported, summarizing the publishers' position. The word "spoliation"—legal terminology for destroying evidence—carries weight in federal court. It can trigger anything from monetary sanctions to instructions that devastate a defendant's case.
The publishers claim more than accidental deletion. According to MediaPost's January 11 report, they allege OpenAI "destroyed critical evidence regarding copyright infringement despite court orders to preserve it." The distinction between negligent and intentional destruction matters: under the federal rules, the harshest remedies, including an adverse inference instruction, generally require a finding that a party destroyed evidence with intent to deprive its opponent of it.
ChatGPT logs represent the primary method for proving whether the model reproduces copyrighted text verbatim—the kind of smoking gun that copyright plaintiffs typically need. Unlike traditional copyright cases where the allegedly infringing work exists as a fixed copy, AI outputs are generated dynamically and then often disappear unless specifically preserved.
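To make that concrete, here is a minimal sketch of how an expert might screen outputs for verbatim overlap, comparing long word n-grams between a model response and a published article. The function names and the 12-word window are illustrative assumptions, not any party's actual methodology.

```python
# Hypothetical sketch: flag verbatim n-gram overlap between a model
# output and a copyrighted source text. Names and thresholds are
# illustrative, not drawn from any filing in this case.

def ngrams(text: str, n: int = 12) -> set[tuple[str, ...]]:
    """Return the set of word-level n-grams in `text`."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(output: str, source: str, n: int = 12) -> float:
    """Fraction of the output's n-grams that appear verbatim in the source.

    Long shared n-grams (here, 12 words) are unlikely to occur by
    chance, so a high score suggests memorized reproduction.
    """
    out_grams = ngrams(output, n)
    if not out_grams:
        return 0.0
    return len(out_grams & ngrams(source, n)) / len(out_grams)

if __name__ == "__main__":
    # A score near 1.0 would warrant manual review by counsel.
    score = verbatim_overlap(
        output="The quick brown fox jumps over the lazy dog near the riverbank at dawn today",
        source="Witnesses said the quick brown fox jumps over the lazy dog near the riverbank at dawn today.",
    )
    print(f"overlap: {score:.2f}")
```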
The technical architecture of ChatGPT complicates matters. Each conversation exists as a unique session that OpenAI controls on its servers. Users can delete their own chat histories, but the company maintains backend logs for various purposes—or should, when under a preservation order.
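In practice, honoring a preservation order means the deletion path itself has to check for active holds. The snippet below is a hypothetical sketch of that logic, assuming a simple in-memory registry; it does not describe any real OpenAI system.

```python
# Hypothetical sketch of a litigation-hold check in a log-deletion
# path. The registry, store, and IDs are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class HoldRegistry:
    """Tracks which conversation logs are frozen by active legal matters."""
    held_log_ids: set[str] = field(default_factory=set)

    def place_hold(self, log_ids: set[str]) -> None:
        self.held_log_ids |= log_ids

    def release_hold(self, log_ids: set[str]) -> None:
        self.held_log_ids -= log_ids

def delete_log(log_id: str, store: dict[str, str], holds: HoldRegistry) -> bool:
    """Delete a log unless a litigation hold covers it.

    A user-initiated delete could still hide the log from the user's
    view, but the backend copy stays until the hold is released.
    """
    if log_id in holds.held_log_ids:
        return False  # preservation order wins over the deletion request
    store.pop(log_id, None)
    return True

# Usage: the hold blocks deletion of covered logs only.
holds = HoldRegistry()
holds.place_hold({"conv-123"})
logs = {"conv-123": "...", "conv-456": "..."}
assert delete_log("conv-456", logs, holds) is True
assert delete_log("conv-123", logs, holds) is False
```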
Editor and Publisher characterized this as "a significant escalation of the copyright infringement battle," noting that the publishers have accused OpenAI of contempt. Contempt of court is a serious charge that could result in daily fines until compliance is achieved.
OpenAI's response, or lack thereof, has become part of the story. None of the source materials indicate the company has publicly addressed these specific allegations of evidence destruction. The absence of a vigorous public defense might suggest its legal team is keeping its arguments confined to court filings.
The precedent extends beyond news publishers. Every company training AI models on potentially copyrighted data is watching how courts handle these evidence preservation requirements. If judges start assuming the worst when logs go missing, it could change how AI companies architect their systems.
Courts may require AI companies to implement litigation hold systems that prevent any deletion once lawsuits are filed. Publishers could gain leverage in settlement negotiations if judges show willingness to issue adverse inference instructions. Future AI systems might need to build in evidence preservation features from the start. The cost and complexity of AI litigation could increase if extensive log preservation becomes standard. Other creative industries watching this case may accelerate their own copyright claims against AI companies.
The judge's decision on sanctions could arrive within weeks, potentially before any trial begins. If the court agrees that OpenAI destroyed evidence, the instruction to the jury would be direct: assume those deleted logs contained exactly what the publishers claim they contained.