The Supreme Court's experiment last week with generative AI video is forcing the entire legal community to confront its own relationship with AI. For the first time, actual Supreme Court audio was paired with AI-generated imagery as part of the On the Docket Project, an unprecedented use of AI by an institution that has never allowed cameras in its courtroom. Meanwhile, as questions about appropriate uses of AI multiply, federal judges are drawing clearer lines between AI training data and AI outputs in copyright infringement actions, requiring plaintiffs to provide substantial evidence that AI-generated outputs substantially mirror copyrighted works. Courts are also now weighing whether the plaintiff suffered measurable economic injury, moving beyond debates over how models are trained.
The primary purpose of the On the Docket project is to use AI-generated video to broaden public awareness of, and interest in, Supreme Court proceedings. In particular, its AI-generated visuals fill the void left by the absence of official video of oral arguments, the stage at which the justices reveal their thinking on an issue by asking pointed questions. While these visuals help bridge that information gap, they are interpretive rather than documentary in nature.
The use of AI-generated visuals to depict courtroom proceedings marks a singular moment in the history of judicial innovation. The Supreme Court has adopted new media for recording and releasing its proceedings before, from stenography to audio recording, and each time it wrestled with concerns about accuracy.
Even before this experiment, the legal community faced more pressing AI problems. Some attorneys submitted AI-generated briefs containing entirely fictitious citations to prior case law, and many jurisdictions responded by requiring attorneys to certify that their submissions were not generated by AI.
Industry lobbying likely played a role in shaping regulatory responses to AI. Proposed amendments to the General Data Protection Regulation ("GDPR"), for instance, would allow certain special categories of personal data to be used for AI training, a significant concession to tech companies.
In addition, current AI safety architecture rests on a questionable assumption: that harmful content can be effectively filtered at the boundary of the model. Cognitive psychologists argue that effective safety instead requires an additional inhibitory layer above the generator, since a filter located at the model's boundary is easier to bypass, according to analysis found on Medium.
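The distinction between the two architectures can be made concrete with a minimal sketch. This is purely illustrative: the generator, blocklist, and both checks below are hypothetical stand-ins, not any real system's implementation. The contrast is only in where the check sits relative to generation.

```python
# Illustrative sketch of the two safety architectures discussed above.
# Everything here is a hypothetical stand-in, not a real model or filter.

BLOCKLIST = {"harmful"}  # toy disallowed-content check


def generate(prompt: str) -> str:
    """Hypothetical stand-in for a generative model."""
    return f"response to: {prompt}"


def boundary_filter(prompt: str) -> str:
    """Architecture 1: generate first, then filter the finished output
    at the model's boundary. Surface-level checks like this are easy to
    bypass with rephrasings the blocklist does not anticipate."""
    text = generate(prompt)
    if any(word in text for word in BLOCKLIST):
        return "[blocked]"
    return text


def inhibitory_layer(prompt: str) -> str:
    """Architecture 2: an inhibitory check sits above the generator and
    vetoes the request before any content is produced at all."""
    if any(word in prompt for word in BLOCKLIST):
        return "[refused]"
    return generate(prompt)


print(boundary_filter("harmful request"))   # caught only after generation
print(inhibitory_layer("harmful request"))  # vetoed before generation
```

The design point is not the toy blocklist but the placement of the check: the boundary filter inspects content that has already been generated, while the inhibitory layer intervenes before generation begins.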
Dr. Shazeda Ahmed explored these tensions further in an appearance on the Overthink Podcast, discussing who gets to define the conversation around AI safety. Much of that conversation fixates on AI "doomers" or "utopians" while rarely addressing more practical issues such as job losses from automation.
AI safety work extends far beyond engineering roles. Skills in communications, policy development, field building, and operations management are increasingly important for organizations working on AI safety, especially those involved in governance and electoral politics, according to 80,000 Hours. AI safety, in other words, has shifted from a purely technical problem to an institutional one.

Computer vision systems designed to enhance workplace safety offer a tangible example of this shift. Unlike conventional surveillance systems, AI-powered workplace safety cameras process footage on-premises, blur faces rather than identifying them, and track interactions and potential hazards, such as proximity to moving vehicles or vehicle speed, rather than individual employees.
Finally, national security considerations add another layer of complexity. Various states are imposing compliance requirements on entities engaged in AI activities, covering data protection, export controls, and restrictions on investment in frontier AI model training. Such restrictions could give rise to investment disputes between states and foreign investors, according to White & Case LLP.
Official records of Supreme Court proceedings still exist only as audio recordings; there is no video. At the same time, federal courts increasingly require specific proof of harm to find liability in copyright infringement claims over AI-generated materials, rather than looking at training data alone.
Meanwhile, delays in implementing the EU's Artificial Intelligence Act reflect successful industry lobbying to limit how restrictive the Act will be. And regulatory developments in national security are poised to trigger international investment disputes over restrictions on frontier AI model training.
The Supreme Court's experiment with visual depictions of courtroom activity comes as a recent survey of efficient video diffusion models found that serious computational costs still prevent their widespread use in real-world applications, according to a study posted on arXiv. The technology behind the On the Docket Project generates stylized depictions of courtroom activity; it remains computationally expensive and technologically limited compared with a future in which AI-generated depictions may become indistinguishable from reality.
