Increasingly complex AI cases require juries of experts and professional peers rather than a random selection of citizens.
The complexity of the technology underlying AI litigation raises uncomfortable but important questions: Can ordinary jurors render rational verdicts based on factual records about AI that may challenge even the most learned AI researcher? And, if not, what alternatives should courts use to preserve the core function of the right to jury trial under the Seventh Amendment?
The traditional account of the right to a jury in civil cases describes a two-part analysis to determine its availability: first, whether the claim qualifies as “legal” or “equitable” and, second, whether the remedy sought is legal or equitable in nature.
The U.S. Supreme Court, however, has hinted at a third inquiry: “the practical abilities and limitations of juries.” This hint, buried in a footnote to Ross v. Bernhard, a relatively obscure 1970 decision, has nevertheless encouraged some courts to qualify the right to a jury trial based on the capacity of jurors to understand the applicable evidence and law.
The U.S. Court of Appeals for the Third Circuit, for instance, heard what the Supreme Court whispered. In 1980, in a case known as In re Japanese Electronic Products Antitrust Litigation, the appellate court vacated the district court’s decision, rejecting as erroneous its conclusion that the complexity of a case “is not a constitutionally permissible reason for striking a party’s jury demands.”
According to the Japanese Electronic Products court, due process “requires some fair assurance that the jury’s findings of fact and applications of legal rules are reasonably correct. When a jury is unable to understand the evidence and the legal rules, it cannot provide this measure of assurance.” Due process, the court explained, “guarantees a comprehending factfinder.”
Consequently, the court held that where a case involves such “extraordinary complexity” that the jury cannot rationally decide the issues, the Seventh Amendment may not override the due process rights of the parties.
Although few courts have formally adopted the Third Circuit’s “complexity” exception to the Seventh Amendment, the exception has never been overturned. Assuming that the Supreme Court’s hint and the Third Circuit’s reasoning remain good law, cases turning on AI would seem to fit squarely within this exception.
Admittedly, what qualifies as an issue of “extraordinary complexity” is itself a complex question. The Third Circuit listed three factors. The first is the overall size of the lawsuit, including the length of the trial, the volume of evidence, and the number of issues that require individual attention. The second is the conceptual difficulty of the legal issues and their factual predicates, commonly measured by the amount of expert testimony required. The third is the difficulty of segregating distinct aspects of the case, indicated by the number of separately disputed issues arising from a single transaction.
Lawsuits involving generative AI seem likely to satisfy each of these factors. As to the size of the lawsuits, the first batch of cases suggests that the widespread use of AI, as well as its application in economic contexts, leads to suits involving numerous parties and substantial damages claims, both indicators of lengthy trials.
Likewise, early suits have demonstrated the complexity of litigating AI-related claims. By way of example, OpenAI attempted to dismiss a libel suit brought against it by pointing out that generative AI models occasionally “hallucinate” and provide false information. Why models hallucinate—as well as what constitutes a hallucination, and whether and how hallucinations can be prevented—all present conceptually difficult inquiries.
And on the difficulty of segregating distinct aspects of the case: Given that the issues in AI litigation may all require an understanding of the underlying model, this factor, too, weighs toward qualifying such suits as extraordinarily complex.
If these due process factors dictate that traditional juries not try certain AI cases, then who should play factfinder?
History, thankfully, provides an answer: “blue ribbon juries,” or juries made up of persons with specialized knowledge.
Consider the many instances in which blue ribbon juries have been used in the past, increasing the odds of a dispute being decided competently and rationally:
In 1394, a jury of “cooks and fishmongers” decided the prosecution of a defendant accused of selling bad food. Parties impaneled a jury of booksellers and printers in a 1663 libel trial, and early King’s Bench cases report juries of clerks and attorneys when the issue was falsification of writs by attorneys or extortion by court officials.
Notably, this practice took root in America. The states of New York and South Carolina relied on special juries in commercial litigation into the 19th century. Likewise, Louisiana impaneled merchant juries until 1846. Changes in commercial norms and law brought an end to such juries.
Nevertheless, although this practice may not be common in modern times, it has not disappeared from civil litigation. Case in point: Under a 1987 statute, state court judges in Delaware have the authority to “order a special jury upon the application of any party in a complex civil case.”
There is an understandable and warranted concern about reliance on “blue ribbon juries” in cases that may otherwise go before traditional juries. Failure to create such expert juries, though, can undermine the public interest. A dozen citizens can be impaneled in a case that stretches for months, if not years, and the rest of the country may suffer the consequences of a decision grounded in something other than a rational and full analysis of the evidence.