The Procedural Frontier of AI Litigation

Scholars discuss the quasi-regulatory role of courts as AI transforms litigation.

In one of the latest developments in artificial intelligence (AI) litigation, a U.S. Magistrate Judge issued a discovery order requiring OpenAI, a technology company that develops and deploys AI models, to preserve all ChatGPT user logs. The order applied even to logs that users had previously deleted.

The ruling has fueled an ongoing debate over whether the judiciary exceeded its proper role by compelling a private technology platform to retain vast quantities of ephemeral user data, or whether the court simply intervened in a regulatory vacuum left by legislative and executive inaction.

As litigation involving AI systems continues to accelerate and comprehensive regulatory guidance remains lacking, courts are increasingly assuming a regulatory role as they shape de facto legal standards for discovery, data retention, and platform accountability through incremental, case-by-case adjudication.

Some scholars argue that courts can and should adapt existing legal frameworks to confront the procedural and evidentiary challenges posed by artificial intelligence. They contend that tools such as Rule 26 of the Federal Rules of Civil Procedure and the work product doctrine provide practical means for managing the asymmetries AI introduces to civil litigation. They emphasize that legal technologies are already reshaping the adversarial process, and that judicial engagement can help clarify litigant obligations, reduce informational disparities, and preserve procedural fairness.

Other scholars warn that applying analog doctrines to dynamic, adaptive technologies risks distorting both discovery and evidentiary outcomes in unpredictable and far-reaching ways. They highlight the difficulty of authenticating AI-generated records and caution that traditional assumptions about authorship and reliability may not hold for machine-produced content in contemporary litigation contexts. Some scholars urge courts to focus more directly on issues such as explainability, adversarial manipulation, and data provenance when considering the admissibility of AI-generated outputs.

Furthermore, several commentators caution that courts are increasingly being asked to interpret novel privacy and data retention claims in litigation involving generative AI. They note that discovery orders compelling the preservation of ephemeral logs may conflict with users’ expectations of deletion in the digital age, and that judicial interpretations of constitutional data rights may outpace legislative or regulatory developments. These commentators suggest that courts should take care not to entrench frameworks that were not designed to handle AI’s speed, scale, or opacity.

In this week’s Saturday Seminar, scholars explore whether courts are becoming de facto AI regulators and what that role means for civil procedure.

  • In a recent article published in Judicature, Maura R. Grossman of the University of Waterloo, Paul W. Grimm of Duke Law School, and Cary Coglianese of the University of Pennsylvania examine the pitfalls of using AI in courtrooms, particularly as evidence and in decision-making processes. The authors warn that courts are currently ill-equipped to differentiate between authentic and AI-generated evidence, particularly deepfakes that could prejudice jurors and undermine evidentiary reliability. They emphasize that AI systems used in legal proceedings must meet rigorous standards of validity, reliability, and fairness. To meet this challenge, the authors call for both stronger procedural safeguards under existing Federal Rules of Evidence and the development of new regulations designed specifically to govern the authenticity and admissibility of AI-generated content.
  • In a forthcoming article, Molly Cinnamon examines how California’s Delete Act, which requires data brokers to delete residents’ personal information upon request, can withstand First Amendment scrutiny. Cinnamon analyzes the U.S. Supreme Court’s decision in Sorrell v. IMS Health Inc., which struck down Vermont’s prescription confidentiality law as violating free speech rights, and its implications for privacy legislation. Cinnamon argues that the Delete Act merits intermediate scrutiny because it regulates commercial speech in a neutral manner and advances significant government interests in protecting consumer privacy and preventing fraud. She warns that striking down the Delete Act would fundamentally weaken individuals’ control over their personal data and could threaten decades-old privacy laws like the Health Insurance Portability and Accountability Act (HIPAA) and the Fair Credit Reporting Act.
  • David Freeman Engstrom of Stanford Law School and Jonah B. Gelbach of Berkeley Law argue in a University of Pennsylvania Law Review article that courts are increasingly shaping the regulatory landscape of legal technology by interpreting and adapting civil procedure rules built for an analog world. Through three case studies on e-discovery tools, outcome-predictive tools, and legal analytics tools, the authors show how legal technology alters cost and information asymmetries, and how those asymmetries influence procedural outcomes. Judges, they emphasize, are already making foundational decisions that affect the value, deployment, and evolution of AI tools in litigation. These rulings, they warn, effectively determine how AI tools operate in practice, positioning courts as front-line arbiters of innovation within the adversarial system.
  • In a chapter published in Electronic Evidence and Electronic Signatures, Steven J. Murdoch, Daniel Seng, Burkhard Schafer, and Stephen Mason argue that AI-generated records challenge long-standing legal assumptions about authorship, reliability, and system integrity. They caution that applying traditional evidentiary rules to system-dependent, opaque AI outputs risks distorting legal processes. The authors urge courts to bring more rigorous attention to issues such as explainability, adversarial manipulation, and data provenance. They warn that courts are being thrust into a quasi-regulatory role as they grapple with the evidentiary status of AI.
  • In a recent article in Columbia Science & Technology Law Review, Maura R. Grossman of the University of Waterloo and Judge Paul W. Grimm of Duke Law School highlight evidentiary challenges posed by generative AI, particularly deepfakes. Grossman and Grimm argue that courts are currently ill-equipped to differentiate between authentic and synthetic evidence. They warn of the significant risk that deepfakes will prejudice jurors by undermining the reliability of evidence presented at trial. Grossman and Grimm recommend steps lawyers may take to guard against dubious, potentially AI-generated evidence within the framework of the existing Federal Rules of Evidence, and explore potential new rules explicitly tailored to address the authentication and admissibility of AI-generated evidence and preserve the integrity of judicial processes.
  • In a forthcoming article, Hannah Ruschemeier, a professor at the University of Hagen, highlights how generative AI tests the boundaries of both supranational data protection laws, such as those of the European Union, and U.S. data protection law. Ruschemeier explains that popular AI models, especially large language models, scrape vast amounts of personal data from the internet, raising compliance issues under the EU’s General Data Protection Regulation. She notes that current AI training practices undermine core data protection principles, including purpose limitation and data minimization, and complicate enforcement of individual data rights. Ruschemeier urges industry participants and regulators to strike a balance between the broad data collection that AI training requires and respect for data protection principles and individual data rights.

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.