Past Events

Workshop 28: Friday, October 25 U. of California Berkeley (Pamela Samuelson and Rebecca Wexler)

But are these terms truly meaningful, or merely a mirage? There are myriad examples where these broad terms are regularly and repeatedly violated. Yet except for some account suspensions on platforms, no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief. This is likely for good reason: we think that the legal enforceability of these licenses is questionable. This Article provides a systematic assessment of the enforceability of AI model terms of use and offers three contributions.

First, we pinpoint a key problem with these provisions: the artifacts that they protect, namely model weights and model outputs, are largely not copyrightable, making it unclear whether there is even anything to be licensed. 

Second, we examine the problems this creates for other enforcement pathways. Recent doctrinal trends in copyright preemption may further undermine state-law claims, while other legal frameworks like the DMCA and CFAA offer limited recourse. And anti-competitive provisions likely fare even worse than responsible use provisions. 

Third, we provide recommendations to policymakers considering this private enforcement model. There are compelling reasons for many of these provisions to be unenforceable: they chill good faith research, constrain competition, and create quasi-copyright ownership where none should exist. There are, of course, downsides: model creators have even fewer tools to prevent harmful misuse. But we think the better approach is for statutory provisions, not private fiat, to distinguish between good and bad uses of AI and restrict the latter. And, overall, policymakers should be cautious about taking these terms at face value before they have faced a legal litmus test.

Workshop 27: Friday, September 20 Northwestern University (Jason Hartline and Dan Linna)

Workshop 26: Friday, May 17 Tel Aviv + Hebrew Universities (Inbal Talgam-Cohen and Katrina Ligett)

Workshop 25: Friday, April 19 Ohio State (Bryan Choi)

Workshop 24: Friday, March 22 Cornell (James Grimmelmann) 

If you'd like to read ahead of time, the paper is available here: https://github.com/slawsk/tax-formalization/blob/main/FormalizationReasoningPaper.pdf 

Given the growing role online services play in data collection, commerce, and speech, these broken innovation and competition incentives have far-reaching effects. Fixing those incentives is urgent. Policymakers and commentators blame the concentration of online services on structural market failures and turn to antitrust remedies for solutions. This pervasive narrative focuses on a symptom, not the cause. I argue that tech concentration is an artifact of IP law’s failure to keep up with technology.

This article proposes a program for IP reform: we should replace the trade-motivated aspects of software IP law with expanded trade regulation. Drawing on common-law misappropriation as a model, I sketch one politically pragmatic option for implementing those reforms.
Beyond its focus on software innovation, this article serves as a case study of the mechanics by which a law falls out of sync with technology. As such, it may help policymakers avoid similar legislative and regulatory pitfalls as they regulate emerging, fast-changing technologies.

If you'd like to read ahead of time, the paper is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4546335 

Workshop 23: Friday, February 23 Cal Berkeley (Rebecca Wexler) 

Workshop 22: Thursday, January 18 Georgetown (Paul Ohm) 

Workshop 21: Friday, December 15 Penn (Christopher Yoo)

Workshop 20: Friday, November 17 Boston University (Ran Canetti)

Workshop 19: Monday, October 23 UCLA (John Villasenor)

Workshop 18: Tuesday, September 26 Northwestern University (Jason Hartline and Dan Linna)

Workshop 17: Friday, May 19 Georgetown (Paul Ohm and Ayelet Gordon-Tapiero)

Workshop 16: Friday, April 21 Berkeley (Rebecca Wexler)

The Integrity of Our Convictions

Workshop 15: Friday, March 17 UPenn (Christopher Yoo)

One obvious countermeasure would be to require Internet sites to strongly authenticate their users, but this is not an easy problem. Furthermore, while that would provide accountability for the immediate upload, such a policy would cause other problems: the ability to speak anonymously is a vital constitutional right. It also often would not identify the original offender, since many people download images from one site and upload them to another, obscuring the image's true origin.

We instead propose a more complex approach, based on a privacy-preserving cryptographic credential scheme originally devised by Jan Camenisch and Anna Lysyanskaya. We arrange things so that three different parties must cooperate to identify a user who uploaded an image. We perform a legal analysis of this scheme's acceptability under the First Amendment and its implied guarantee of the right to anonymous speech; show how that right must be balanced against the victim's right to sexual privacy; discuss the necessary changes to §230 (and the constitutional issues those changes raise); and examine the legal standards for obtaining the necessary court orders, or for opposing their issuance.
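The key technical property here, that no single party can unmask an uploader on its own, can be made concrete with a toy sketch. The Python below is a hypothetical illustration using plain 3-of-3 XOR secret sharing, not the Camenisch-Lysyanskaya credential construction the paper actually relies on; the function names and the identity-token format are invented for the example.

```python
# Hypothetical sketch: 3-of-3 XOR secret sharing of an uploader's identity
# token. This illustrates only the "all three parties must cooperate"
# property; the actual proposal builds on Camenisch-Lysyanskaya credentials.
import secrets


def split_identity(identity: bytes) -> tuple[bytes, bytes, bytes]:
    """Split an identity token into three shares, one per trusted party."""
    share_a = secrets.token_bytes(len(identity))  # uniformly random
    share_b = secrets.token_bytes(len(identity))  # uniformly random
    # Chosen so that share_a XOR share_b XOR share_c == identity.
    share_c = bytes(x ^ a ^ b for x, a, b in zip(identity, share_a, share_b))
    return share_a, share_b, share_c


def recover_identity(share_a: bytes, share_b: bytes, share_c: bytes) -> bytes:
    """Recombine all three shares; any strict subset reveals nothing."""
    return bytes(a ^ b ^ c for a, b, c in zip(share_a, share_b, share_c))


if __name__ == "__main__":
    token = b"uploader-id:12345"  # invented identifier format
    a, b, c = split_identity(token)
    assert recover_identity(a, b, c) == token
```

Because each share on its own is an independent, uniformly random string (a one-time-pad argument), even two colluding parties learn nothing about the token; identification succeeds only when all three cooperate, which is the property the paper ties to the court-order requirements discussed above.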

Workshop 14: Friday, February 17 University of Chicago (Aloni Cohen)

Workshop 13: Friday, January 20, 2023, MIT (Dazza Greenwood)

Workshop 12: Friday, December 16, 2022, Boston University (Ran Canetti)

Workshop 11: Friday, November 18, 2022, UCLA (John Villasenor)

Workshop 10: Friday, October 28, 2022, Cornell University (James Grimmelmann)

This is important because decisions based on algorithmic groups can be harmful. If a loan applicant scrolls through the page quickly or types only in lowercase when filling out the form, their application is more likely to be rejected. If a job applicant uses a browser such as Microsoft Internet Explorer or Safari instead of Chrome or Firefox, they are less likely to be successful. Non-discrimination law aims to protect against harms of this kind by guaranteeing equal access to employment, goods, and services, but it has never protected “fast scrollers” or “Safari users”. Granting these algorithmic groups protection will be challenging because the European Court of Justice has historically been reluctant to extend the law to cover new groups.

This paper argues that algorithmic groups should be protected by non-discrimination law and shows how this could be achieved. Full paper available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4099100

Workshop 9: Friday, September 23, 2022, Organized by Northwestern University (Jason Hartline and Dan Linna)

Workshop 7: Friday, April 15, 2022, Organized by MIT (Lecturer and Research Scientist Dazza Greenwood)

Workshop 6: Friday, March 11, 2022, Organized by University of Pittsburgh (Professor Kevin Ashley)

Workshop 5: Friday, February 18, 2022, Organized by University of Chicago (Professor Aloni Cohen)

Workshop 4: Friday, January 21, 2022, Organized by UCLA (Professor John Villasenor)

Workshop 3: Friday, November 19, 2021, Organized by University of Pennsylvania (Professor Christopher S. Yoo)

Workshop 2: Friday, October 22, 2021, Organized by University of California Berkeley (Professors Rebecca Wexler and Pamela Samuelson)

Workshop 1: Friday, September 17, 2021, Organized by Northwestern University (Professors Jason Hartline and Dan Linna)