When: Third Friday of each month (occasionally the fourth Friday), 1:00 to 3:00 p.m. Central Time. Next workshop: Friday, November 14, 1:00 to 3:00 p.m. Central Time.
What: First 90 minutes: Two presentations of CS+Law works in progress or new papers with open Q&A. Last 30 minutes: Networking.
Where: Zoom
Who: CS+Law faculty, postdocs, PhD students, and other students who (1) are enrolled in or have completed a graduate degree in CS or Law and (2) are engaged in CS+Law research intended for publication.
A Steering Committee of CS+Law faculty from Berkeley, Boston U., U. Chicago, Cornell, Georgetown, MIT, North Carolina Central, Northwestern, Ohio State, Penn, Technion, and UCLA organizes the CS+Law Monthly Workshop. A different university serves as the chair for each monthly program and sets the agenda.
Why: The Steering Committee’s goals include building community, facilitating the exchange of ideas, and getting students involved. To accomplish this, we ask that participants commit to attending regularly.
Computer Science + Law is a rapidly growing area, and researchers in each field increasingly engage with the other discipline. For example, there is significant research in both fields on the law and regulation of computation, the use of computation in legal systems and governments, and the representation of law and legal reasoning. Interdisciplinary research collaborations between CS and Law researchers have also grown substantially. Our goal is to create a forum for the exchange of ideas in a collegial environment, one that promotes community, collaboration, and research that further develops CS+Law as a field.
Please join us for our next CS+Law Research Workshop online on Friday, November 14, 1:00 to 3:00 p.m. Central Time (Chicago Time).
Workshop 35 Organizer: UCLA (John Villasenor)
Agenda:
20-minute presentation - James Grimmelmann, Benjamin Sobel, and David Stein
10-minute discussion
20-minute presentation - Aileen Nielsen, Chelse Swoopes, and Elena Glassman
10-minute discussion
30-minute open Q&A about both presentations
30-minute open discussion
Presentation 1: Generative Misinterpretation
Presenters: James Grimmelmann, Professor, Cornell Law School and Cornell Tech; Benjamin Sobel, Assistant Professor of Law, University of Wisconsin Law School; David Stein, Assistant Professor of Law, Vanderbilt Law School.
Abstract:
In a series of provocative experiments, a loose group of scholars, lawyers, and judges has endorsed generative interpretation: asking large language models (LLMs) like ChatGPT and Claude to resolve interpretive issues from actual cases. With varying degrees of confidence, they argue that LLMs are (or will soon be) able to assist, or even replace, judges in performing interpretive tasks like determining the meaning of a term in a contract or statute. A few go even further and argue for using LLMs to decide entire cases and to generate opinions supporting those decisions.
We respectfully dissent. In this Article, we show that LLMs are not yet fit for use in judicial chambers. Generative interpretation, like all empirical methods, must bridge two gaps to be useful and legitimate. The first is a reliability gap: are its methods consistent and reproducible enough to be trusted in high-stakes, real-world settings? Unfortunately, as we show, LLM proponents' experimental results are brittle and frequently arbitrary. The second is an epistemic gap: do these methods measure what they purport to? Here, LLM proponents have pointed to (1) LLMs' training processes on large datasets, (2) empirical measures of LLM outputs, (3) the rhetorical persuasiveness of those outputs, and (4) the assumed predictability of algorithmic methods. We show, however, that all of these justifications rest on unstated and faulty premises about the nature of LLMs and the nature of judging.
The superficial fluency of LLM-generated text conceals fundamental gaps between what these models are currently capable of and what legal interpretation requires to be methodologically and socially legitimate. Put simply, any human or computer can put words on a page, but it takes something more to turn those words into a legitimate act of legal interpretation. LLM proponents do not yet have a plausible story of what that "something more" comprises.
Presentation 2: Law is vulnerable to AI influence; interface design can help
Presenters: Aileen Nielsen, Visiting Assistant Professor of Law, Harvard Law School; Chelse Swoopes, PhD Student in Computer Science, Harvard John A. Paulson School of Engineering and Applied Sciences; Elena Glassman, Assistant Professor of Computer Science, Harvard John A. Paulson School of Engineering and Applied Sciences.
Abstract:
As large language models (LLMs) enter judicial workflows, courts face mounting risks of uncritical reliance, conceptual brittleness, and procedural opacity in the unguided use of these tools. Jurists’ early ventures have attracted both praise and scrutiny, yet they have unfolded without critical attention to the role of interface design. This Essay argues that interface design is not a neutral conduit but rather a critical variable in shaping how judges can and will interact with LLM-generated content. Using Judge Newsom’s recent concurrences in Snell and Deleon as case studies, we show how more thoughtfully designed, AI-resilient interfaces could have mitigated problems of opacity, reproducibility, and conceptual brittleness identified in his explorative LLM-informed adjudication.
We offer a course correction on the legal community’s uncritical acceptance of the chat interface for LLM-assisted work. Proprietary consumer-facing chat interfaces are deeply problematic when used for adjudication. Such interfaces obscure the underlying stochasticity of model outputs and fail to support critical engagement with such outputs. In contrast, we describe existing, open-source interfaces designed to support reproducible workflows, enhance user awareness of LLM limitations, and preserve interpretive agency. Such tools could encourage judges to scrutinize LLM outputs, in part by offering affordances for scaling, archiving, and visualizing LLM outputs that are lacking in proprietary chat interfaces. We particularly caution against the uncritical use of LLMs in “hard cases,” where human uncertainty may perversely increase reliance on AI tools just when those tools may be more likely to fail.
Beyond critique, we chart a path forward by articulating a broader vision for AI-resilient law: a system of incorporating law that would support judicial transparency, improve efficiency without compromising legitimacy, and open new possibilities for LLM-augmented legal reading and writing. Interface design is essential to legal AI governance. By foregrounding the design of human-AI interactions, this work proposes to reorient the legal community toward a more principled and truly generative approach to integrating LLMs into legal practice.
Join our group to get the agenda and Zoom information for each meeting and engage in the CS+Law discussion.
Submit a proposed topic to present. We strongly encourage works in progress, although we will also consider more polished or published projects.
Schedule:
Friday, September 20, 1:00 to 3:00 p.m. Central Time (Organizer: Northwestern)
Friday, October 18, 1:00 to 3:00 p.m. Central Time (Organizer: UC Berkeley)
Friday, November 15, 1:00 to 3:00 p.m. Central Time (Organizer: University of Chicago)
Friday, January 17, 1:00 to 3:00 p.m. Central Time (Organizer: UPenn)
Friday, February 21, 1:00 to 3:00 p.m. Central Time (Organizer: Cornell)
Friday, March 21, 1:00 to 3:00 p.m. Central Time (Organizer: Tel Aviv University + Harvard)
Friday, April 18, 1:00 to 3:00 p.m. Central Time (Organizer: TBD)
Friday, May 16, 1:00 to 3:00 p.m. Central Time (Organizer: Georgetown)
Steering Committee:
Ran Canetti (Boston U.)
Bryan Choi (Ohio State)
Aloni Cohen (U. Chicago)
April Dawson (North Carolina Central)
James Grimmelmann (Cornell Tech)
Jason Hartline (Northwestern)
Dan Linna (Northwestern)
Paul Ohm (Georgetown)
Pamela Samuelson (Berkeley)
Inbal Talgam-Cohen (Technion - Israel Institute of Technology)
John Villasenor (UCLA)
Rebecca Wexler (Berkeley)
Christopher Yoo (Penn)
Northwestern Professors Jason Hartline and Dan Linna convened an initial meeting of 21 CS+Law faculty from various universities on August 17, 2021, to propose a series of monthly CS+Law research workshops. Hartline and Linna sought volunteers to serve on a steering committee. Hartline, Linna, and their Northwestern colleagues provide the platform and administrative support for the series.