When: Third Friday of each month at 1:00 p.m. Central Time (sometimes the fourth Friday; next workshop: Friday, October 17, 1:00 to 3:00 p.m. Central Time).
What: First 90 minutes: Two presentations of CS+Law works in progress or new papers with open Q&A. Last 30 minutes: Networking.
Where: Zoom
Who: CS+Law faculty, postdocs, PhD students, and other students who (1) are enrolled in or have completed a graduate degree in CS or Law and (2) are engaged in CS+Law research intended for publication.
A Steering Committee of CS+Law faculty from Berkeley, Boston U., U. Chicago, Cornell, Georgetown, MIT, North Carolina Central, Northwestern, Ohio State, Penn, Technion, and UCLA organizes the CS+Law Monthly Workshop. A different university serves as the chair for each monthly program and sets the agenda.
Why: The Steering Committee’s goals include building community, facilitating the exchange of ideas, and getting students involved. To accomplish this, we ask that participants commit to attending regularly.
Computer Science + Law is a rapidly growing area. It is increasingly common for a researcher in one of these fields to need to engage with the other discipline. For example, there is significant research in each field on the law and regulation of computation, the use of computation in legal systems and governments, and the representation of law and legal reasoning. Interdisciplinary collaborations between CS and Law researchers have also increased substantially. Our goal is to create a forum for the exchange of ideas in a collegial environment that promotes community building, collaboration, and research that helps further develop CS+Law as a field.
Please join us for our next CS+Law Research Workshop online on Friday, October 17, 1:00 to 3:00 p.m. Central Time (Chicago Time).
Workshop 34 Organizers: Northwestern (Jason Hartline and Dan Linna)
Agenda:
20-minute presentation - Cullen O’Keefe
10-minute discussion
20-minute presentation - Mark Riedl
10-minute discussion
30-minute open Q&A about both presentations
30-minute open discussion
Presentation 1: Law-Following AI: Designing AI Agents to Obey Human Laws
Presenter: Cullen O’Keefe, Director of Research, Institute for Law & AI
Abstract:
Artificial intelligence (AI) companies are working to develop a new type of actor: “AI agents,” which we define as AI systems that can perform computer-based tasks as competently as human experts. Expert-level AI agents will likely create enormous economic value but also pose significant risks. Humans use computers to commit crimes, torts, and other violations of the law. As AI agents progress, therefore, they will be increasingly capable of performing actions that would be illegal if performed by humans. Such lawless AI agents could pose a severe risk to human life, liberty, and the rule of law.
Designing public policy for AI agents is one of society’s most important tasks. With this goal in mind, we argue for a simple claim: in high-stakes deployment settings, such as government, AI agents should be designed to rigorously comply with a broad set of legal requirements, such as core parts of constitutional and criminal law. In other words, AI agents should be loyal to their principals, but only within the bounds of the law: they should be designed to refuse to take illegal actions in the service of their principals. We call such AI agents “Law-Following AIs” (LFAI).
The idea of encoding legal constraints into computer systems has a respectable provenance in legal scholarship. But much of the existing scholarship relies on outdated assumptions about the (in)ability of AI systems to reason about and comply with open-textured, natural-language laws. Thus, legal scholars have tended to imagine a process of “hard-coding” a small number of specific legal constraints into AI systems by translating legal texts into formal machine-readable computer code. Existing frontier AI systems, however, are already competent at reading, understanding, and reasoning about natural-language texts, including laws. This development opens new possibilities for their governance.
Based on these technical developments, we propose aligning AI systems to a broad suite of existing laws as part of their assimilation into the human legal order. This would require directly imposing legal duties on AI agents. While this would be a significant change to legal ontology, it is both consonant with past evolutions (such as the invention of corporate personhood) and consistent with the emerging safety practices of several leading AI companies.
This Article aims to catalyze a field of technical, legal, and policy research to develop the idea of law-following AI more fully. It also aims to flesh out LFAI’s implementation so that our society can ensure that widespread adoption of AI agents does not pose an undue risk to human life, liberty, and the rule of law. Our account and defense of law-following AI is only a first step and leaves many important questions unanswered. But if the advent of AI agents is anywhere near as important as the AI industry supposes, then law-following AI may be one of the most neglected and urgent topics in law today, especially in light of increasing governmental adoption of AI.
Presentation 2: Legally Informed Explainable AI
Presenter: Mark Riedl, Professor, Georgia Tech
Abstract:
Explanations for artificial intelligence (AI) systems are intended to support the people who are impacted by AI systems in high-stakes decision-making environments, such as doctors, patients, teachers, students, housing applicants, and many others. To protect people and support the responsible development of AI, explanations need to be actionable, helping people take pragmatic action in response to an AI system, and contestable, enabling people to push back against an AI system and its determinations. For many high-stakes domains, such as healthcare, education, and finance, the sociotechnical environment includes significant legal implications that shape how people use AI explanations. For example, physicians who use AI decision-support systems may need information on how accepting or rejecting an AI determination will protect them from lawsuits or help them advocate for their patients. In this paper, we make the case for Legally Informed Explainable AI, responding to the need to integrate and design for legal considerations when creating AI explanations. We describe three stakeholder groups with different informational and actionability needs, and we provide practical recommendations for tackling design challenges in explainable AI systems that incorporate legal considerations.
Join our group to get the agenda and Zoom information for each meeting and engage in the CS+Law discussion.
Submit a proposed topic to present. We strongly encourage presentations of works in progress, although we will also consider more polished and published projects.
Schedule:
Friday, September 20, 1:00 to 3:00 p.m. Central Time (Organizer: Northwestern)
Friday, October 18, 1:00 to 3:00 p.m. Central Time (Organizer: UC Berkeley)
Friday, November 15, 1:00 to 3:00 p.m. Central Time (Organizer: University of Chicago)
Friday, January 17, 1:00 to 3:00 p.m. Central Time (Organizer: UPenn)
Friday, February 21, 1:00 to 3:00 p.m. Central Time (Organizer: Cornell)
Friday, March 21, 1:00 to 3:00 p.m. Central Time (Organizer: Tel Aviv University + Harvard)
Friday, April 18, 1:00 to 3:00 p.m. Central Time (Organizer: TBD)
Friday, May 16, 1:00 to 3:00 p.m. Central Time (Organizer: Georgetown)
Steering Committee:
Ran Canetti (Boston U.)
Bryan Choi (Ohio State)
Aloni Cohen (U. Chicago)
April Dawson (North Carolina Central)
James Grimmelmann (Cornell Tech)
Jason Hartline (Northwestern)
Dan Linna (Northwestern)
Paul Ohm (Georgetown)
Pamela Samuelson (Berkeley)
Inbal Talgam-Cohen (Technion - Israel Institute of Technology)
John Villasenor (UCLA)
Rebecca Wexler (Berkeley)
Christopher Yoo (Penn)
Northwestern Professors Jason Hartline and Dan Linna convened an initial meeting of 21 CS+Law faculty from various universities on August 17, 2021, to propose a series of monthly CS+Law research workshops. Hartline and Linna sought volunteers to serve on a steering committee. Hartline, Linna, and their Northwestern colleagues provide the platform and administrative support for the series.