Book Review: Nick Seaver, Computing Taste: Algorithms and the Makers of Music Recommendation

(University of Chicago Press, 2022)

In Computing Taste, Nick Seaver offers an ethnographic exploration of the world of music recommendation systems, showing how the algorithms that drive recommendations are shaped by human judgment, creativity, and cultural assumptions. The data companies collect, the way they construct models, how they intuitively test whether their models are working, and how they define success are all deeply human and subjective choices.

Beyond Man vs. Machine

Seaver points out that textbook definitions describe algorithms as “well-defined computational procedures” that take inputs and generate outputs, portraying them as deterministic and straightforward systems. This narrow view leads to a man-versus-machine narrative that is trite and unilluminating. Treating algorithms as though their defining quality is the absence of human influence reinforces misconceptions about their neutrality. Instead, Seaver advocates for focusing on the sociotechnical arrangements that produce different forms of “humanness and machineness,” echoing observations by Donna Haraway and others.

In practice, algorithmic systems are messy, constantly evolving, and shaped by human judgment. As Seaver notes, “these ‘cultural’ details are technical details,” meaning that the motivations, preferences, and biases of the engineering teams that design algorithms are inseparable from the technical aspects of the systems themselves. Therefore, understanding algorithms requires acknowledging the social and cultural contexts in which they operate.

From Information Overload to Capture

Seaver shows how the objective of recommendation systems has shifted from the founding myth of information overload to the current obsession with capturing user attention. Pioneers of recommender systems told stories of information overload that presented growing consumer choice as a problem in need of a solution. The notion of overwhelming users with too much content has been a central justification for creating algorithms designed to filter and organize information. If users are helpless in the face of vast amounts of data, algorithms become necessary tools to help them navigate this digital landscape. Seaver argues that the framing of overload justifies the control algorithms exert over what users see, hear, and engage with. The idea of “too much music” or “too much content” becomes a convenient rationale for developing systems that, in practice, do more than assist—they guide, constrain, and shape user choices.

In any event, commercial imperatives soon saw rationales based on information overload give way to narratives of capture. Seaver compares recommender systems to traps designed to “hook” users, analyzing how metrics such as engagement and retention guide the development of algorithms. The Netflix Prize, a 2006 competition aimed at improving Netflix’s recommendation algorithm, serves as a key example of this shift. Initially, algorithms were designed to help users manage “information overload” by personalizing content based on user preferences, as Netflix sought to predict what users would enjoy. (Tellingly, Netflix never used the winning entry.) As streaming became central to Netflix’s business model, the focus of recommendation systems shifted from merely helping users find content to keeping them engaged on the platform for as long as possible. This transition from personalization to attention retention marks the shift in the industry’s goals. Recommender systems, including those at Netflix, began to encourage continuous engagement: suggesting binge-worthy content to maximize viewing hours, implementing autoplay features to keep the next episode or movie rolling without user interaction, and tracking actual viewing habits (e.g., “skip intro” clicks, time spent on a show, completion rates) rather than ratings to keep users hooked.

Seaver’s perspective is insightful, not unrelentingly critical. The final chapter investigates how the design of recommendation systems reflects the metaphor of a “park”—a managed, curated space that users are guided through. Recommender systems are neither strictly benign nor malign, but they do entail a loss of user agency. We, the listening public, are not trapped animals so much as a managed flock. Seaver recognizes that recommendation systems open up new possibilities for exploration while also constraining user behavior by narrowing choices based on past preferences.

Why Do My Playlists Still Suck?

The book also answers the question that motivated me to read it: why do my playlists still suck? No one has a good model for why we like the music that we like, when we like it, or how that extrapolates to music we haven’t heard yet. And Spotify and other corporate interests have no real interest in solving that puzzle for us. The algorithms that shape our cultural lives now prioritize engagement, rely on past behavior, and reflect a grab bag of assumptions about user preferences that are often in conflict. There is very little upside to offering us fresh or risky suggestions when a loop of familiarity will keep us more reliably engaged.

London Marathon!

On April 27, 2025, I will be running the London Marathon to raise money for research and treatment of pancreatic cancer.

The future is what we make it

My sister Rebecca was diagnosed with pancreatic cancer in 2018. She was as brave as anyone could be, but her battle was short-lived; Becky passed away just weeks after her diagnosis. Her story is typical: pancreatic cancer is almost always fatal because it is diagnosed too late.

But we can change this story.

I am asking you to help me raise money for Pancreatic Cancer UK which is pioneering efforts to develop earlier detection methods to give others the fighting chance that Becky deserved.

There are several different ways you can contribute:

(1) donate directly to Pancreatic Cancer UK at https://2025tcslondonmarathon.enthuse.com/pf/matthew-sag (immediate impact)

(2) send me money via Venmo (@Matthew-Sag) that I will then aggregate, convert into GBP, and donate in your name (minimizes foreign transaction fees)

(3) donate to a different pancreatic cancer charity in your country of choice, send me the details, and I’ll match that with a donation to Pancreatic Cancer UK (local impact)

Every contributor gets to add a song to my London Marathon Spotify Playlist!

If you donate to this cause, you can let me know what song I should add to my London Marathon Spotify Playlist. I will set it to shuffle during the race and try to remember who suggested which song.

Please join me 

Please join me on this journey: donate in Rebecca’s memory and help bring us closer to a future where no more lives are stolen by this devastating disease.

Follow my progress

To see how my training is going, follow me on Strava, check out my Google spreadsheet (https://docs.google.com/spreadsheets/d/16hV2e-IxXbo01uM7_X5tGyOz6ONPtSZGJZdjKUeVH0k/edit?usp=sharing), or get narrative updates here on this page (https://2025tcslondonmarathon.enthuse.com/pf/matthew-sag).

Playlist so far …

Queen, Don’t Stop Me Now (my pick)

The Weeknd, Blinding Lights (my pick)

Monty Python, Always Look on the Bright Side of Life (Matt & Mindy Lawrence)

The Clash, London Calling (Spencer Waller)

Bruce Springsteen, Born to Run (Richard Fields)

Olivia Newton-John, Xanadu (Jo Groube)

The Beatles, The Long and Winding Road (Jan & Andy Sag)

A response to Lee and Grimmelmann

Tim Lee (@binarybits) and James Grimmelmann have written an insightful article on “Why The New York Times might win its copyright lawsuit against OpenAI” in Ars Technica and on Tim’s newsletter (https://www.understandingai.org/p/the-ai-community-needs-to-take-copyright).

Quite a few people emailed me asking for my thoughts, so here they are. This is a rough first take that began as a tweet before I realized it was too long.

Yes, we should take the NYT suit seriously

It’s hard to disagree with the bottom line that copyright poses a significant challenge to copy-reliant AI, just as it has done to previous generations of copy-reliant technologies (reverse engineering, plagiarism detection, search engine indexing, text data mining for statistical analysis of literature, text data mining for book search).

One important insight offered by Tim and James is that building a useful technology that is consistent with some people’s rough sense of fairness, like MP3.com, is no guarantee of fair use. People loved Napster and probably would have loved MP3.com, but these services were essentially jukeboxes competing with record companies’ own distribution models for the exact same content. We could add ReDigi to this list, too. Unlike the copy-reliant technologies listed above, Napster, MP3.com, and ReDigi fell foul of copyright law because they made expressive uses of other people’s expressive works.

Tim and James make another important point, that academic researchers and Silicon Valley types might have got the wrong idea about copyright. Certainly, prior to November 2022 you almost never saw any mention of copyright in papers announcing new breakthroughs in text data mining, machine learning, or generative AI. This is why I wrote “Copyright Safety for Generative AI” (Houston Law Review 2023).

Tim and James’ third insight is that some conduct might be fair use at a small noncommercial scale but not fair use at a large commercial scale. This is right sometimes, but in fact, a lot of fair use scales up quite nicely. 2 Live Crew sold millions of copies of their fair use parody of Roy Orbison’s “Oh, Pretty Woman,” and of course, some of the key non-expressive use precedents were all about different versions of text data mining at scale: iParadigms (commercial plagiarism detection), HathiTrust (text mining for statistical analysis of the literature, including machine learning), and Google Books (commercial book search).

But how seriously?

I agree with Tim and James that the AI companies’ best fair use arguments will be some version of the non-expressive use argument I outlined in Copyright and Copy-Reliant Technology (2009) and several other papers since, such as The New Legal Landscape for Text Mining and Machine Learning (2019).

In a nutshell, that argument is that a technical process that creates some effectively invisible copies along the way but ultimately produces only uncopyrightable facts, abstractions, associations, and styles should be fair use because it does not interfere with the author’s right to communicate her original expression to the public.

I also agree that this argument begins to unravel if generative AI models are in fact memorizing and delivering the underlying original expression from the training data. I don’t think we know enough about the facts to say whether individual examples of memorization are just an obscure bug or an endemic problem.

The NYT v. OpenAI litigation will shed some light on this but there is a lot of discovery still to come. My gut feeling is that the NYT’s superficially compelling examples of memorization are actually examples of GPT-4 working as an agent to retrieve information from the Internet. This is still a copyright problem, but it’s a very small, easily fixed, copyright problem, not an existential threat to text data mining research, machine learning, and generative AI.

If the GPT series models are really memorizing and regurgitating vast swaths of NYT content, that is a problem for OpenAI. If pervasive memorization is unavoidable in LLMs, that would be a problem for the entire generative AI industry, but I very much doubt the premise. Avoiding memorization (or reducing it to trivial levels) is a hard technical problem in LLMs, but not an impossible one.

Avoiding memorization in image models is more difficult because of the “Snoopy Problem.” Tim and James call this the “Italian plumber problem,” but I named it first and I like Snoopy better.

The Snoopy Problem is that the more abstractly a copyrighted work is protected, the more likely it is that a generative AI model will “copy” it. Text-to-image models are prone to produce potentially infringing works when the same text descriptions are paired with relatively simple images that vary only slightly. 

Generative AI models are especially likely to generate images that would infringe on copyrightable characters because characters like Snoopy appear often enough in the training data that the models learn the consistent traits and attributes associated with those names. Deduplication won’t solve this problem because the output can still infringe without closely resembling any particular image from the training data. Some people think this is really a problem with copyright being too loose with characters and morphing into trademark law. Maybe, but I don’t see that changing.

How serious is the Snoopy Problem? Tim and James frame the problem as though they innocently requested a combination of [Nationality] + [Occupation] + “from a video game” and just happened to stumble upon repeated images of the world’s most famous Italian plumber, Mario from Mario Kart.

But of course, a random assortment of “Japanese software developers,” “German fashion designers,” “Australian novelists,” “Kenyan cyclists,” “Turkish archaeologists,” and a “New Zealand plumber” doesn’t reveal any such problem. The problem is specific to Mario because he dominates representations of Italian plumbers from video games in the training data. To make the design of that kind of experiment concrete, here is a minimal sketch of how one might generate such a prompt grid; the nationality and occupation lists are hypothetical, and the snippet only builds the prompts, without calling any image model.
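```python
# A sketch of the [Nationality] + [Occupation] + "from a video game" prompt
# grid described above. The lists are hypothetical stand-ins; the prompts
# would be fed to whatever text-to-image model is being audited.
from itertools import product

nationalities = ["Japanese", "German", "Australian", "Kenyan", "Turkish", "Italian"]
occupations = ["software developer", "fashion designer", "novelist", "plumber"]

prompts = [
    f"{nationality} {occupation} from a video game"
    for nationality, occupation in product(nationalities, occupations)
]

# Only the Italian + plumber cell is dominated by a single copyrighted
# character in typical training data; the other cells are unremarkable.
for prompt in prompts:
    print(prompt)
```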

The Snoopy Problem presents a genuine difficulty for video, image, and multimodal generative AI, but it’s far from an existential threat. This is partly because the class of potential plaintiffs is significantly smaller: there are a lot fewer owners of visual copyrightable characters than there are just plain old copyright owners. And it is partly because the problem can be addressed in training, by monitoring prompts, or by filtering outputs.

Tim and James’s final point of concern is that the prospect of licensing markets for training data will undermine the case for fair use. To the extent that companies building AI models rely on the argument that they are simply scraping training data from the “open Internet,” that argument becomes more persuasive when these companies are careful to avoid scraping content from sites where they are not welcome.

Respecting existing robots.txt signals and helping to develop more effective ones in the future will facilitate robust licensing markets for entities like the New York Times and the Associated Press.
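As a concrete illustration, here is a minimal sketch of how a crawler might honor those signals before scraping, using Python’s standard-library robots.txt parser. The site URL is hypothetical; “GPTBot” is the crawler token OpenAI has published for opting out.

```python
# A minimal sketch of robots.txt compliance for a hypothetical crawler.
# urllib.robotparser is part of the Python standard library.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # hypothetical site
rp.read()

url = "https://www.example.com/articles/some-story"
if rp.can_fetch("GPTBot", url):
    print("robots.txt permits scraping this URL")
else:
    print("the site has opted out; skip this URL")
```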

I don’t think that OpenAI will need to sign 100 million licensing deals before training its next model. Courts have already considered and rejected the circular argument that copyright owners must be given the right to charge for non-expressive uses to avoid the harm of not being able to charge for non-expressive uses. This specific argument was raised by the Authors Guild in HathiTrust and Google Books and squarely rejected in both.

Tim and James temper their note of caution with a note of realism: judges will be reluctant to shut down an innovative and useful service with tens of millions of users. We saw a similar dynamic when the US Supreme Court held that time shifting using videocassette recorders was fair use.

But there is another element of realism to add. If the US courts reject the idea that non-expressive uses should be fair use, most AI companies will simply move their scraping and training operations overseas to places like Japan, Israel, Singapore, and even the European Union. As long as the models don’t memorize the training data, they can then be hosted in the US without fear of copyright liability.

Tim and James are two of the smartest, most insightful people writing about copyright and AI at the moment. The AI community should take them seriously, and it should take copyright seriously, but it should not see Snoopy (or the Italian Plumber) as an existential threat.

PS: Updated to correct typos helpfully identified by ChatGPT.

Third Annual Legal Scholars Roundtable on Artificial Intelligence 2024

Call For Papers

Roundtable

Emory Law is proud to host the third annual Legal Scholars Roundtable on Artificial Intelligence. The Roundtable will take place on April 11-12, 2024 at Emory University in Atlanta, Georgia. The Legal Scholars Roundtable on Artificial Intelligence (AI) is designed to be a forum for the discussion of current legal scholarship on AI, covering a range of methodologies, topics, perspectives, and legal intersections.

Format  
Participation at the Roundtable will be limited and invitation-only. Participants are expected to read all the papers in advance and be prepared to offer substantive comments. We will try to accommodate a limited number of Zoom-based participants, but in-person attendance is strongly preferred.

Applications to present, comment, or participate
We invite applications to participate, to comment, and/or to present from academics working on any topic relating to legal issues in AI. To request to present, you need to submit a substantially complete draft paper. Microsoft Word format is strongly preferred for these purposes, but you can submit a PDF version for broader distribution. The deadline for submission is February 23, 2024, and decisions on participation will be made shortly thereafter, ideally by March 4, 2024. If selected, final manuscripts are due April 1, 2024, to permit all participants an opportunity to read the papers prior to the conference.

To apply to participate, comment, or present, please fill out the Google form (https://forms.gle/Ubv2maLWfMK5tbPs8).

What to expect from the Legal Scholars Roundtable on Artificial Intelligence
The Legal Scholars Roundtable on Artificial Intelligence is a forum for the discussion of current legal scholarship on AI, spanning a range of methodologies, topics, perspectives, and legal intersections. Authors who present at the Roundtable will be selected from a competitive application process, and commentators are assigned based on their expertise. Participants will have an opportunity to provide direct feedback in paper sessions and will have access to draft papers but will be asked not to post papers publicly or share them without author permission. Robust sessions involve energetic feedback from other paper authors, commentators, and participants. Our goal is to ensure all authors have the full participation of all workshop participants in each author’s session.

Essential logistics
The Roundtable will be held in person on the Emory campus in Atlanta, Georgia. The conference will begin on Thursday morning and run until 1PM on Friday. You can expect to be at the Atlanta airport by 1:45 PM, in time for a 2:30 PM flight or later on Friday. We will pay for your reasonable (economy) travel and accommodation expenses within the U.S. At the roundtable you will be well fed and caffeinated.

Organizers
Matthew Sag, Professor of Law in Artificial Intelligence, Machine Learning, and Data Science at Emory University Law School ([email protected])
Charlotte Tschider, Associate Professor at Loyola Law Chicago ([email protected])

Emory Law’s Commitment to AI
Emory University recognizes that artificial intelligence (AI) is a transformative technology that is already reshaping almost every aspect of our lives. Through its AI.Humanity initiative, Emory is building capacity in key areas of AI research and policy, including health care, medical research, business, law, and the humanities.

My testimony to the US Senate Judiciary Subcommittee on IP re: Copyright and AI

I had the great honor of testifying to the US Senate Judiciary Subcommittee on Intellectual Property in relation to Artificial Intelligence Copyright on Wednesday, July 12th, 2023.

Video and my written submission are available here: https://www.judiciary.senate.gov/artificial-intelligence-and-intellectual-property_part-ii-copyright and I have also linked to my written statement here in case that other link is unavailable.

In my testimony I explained that although we are still a long way from the science fiction version of artificial general intelligence that thinks, feels, and refuses to “open the pod bay doors”, recent advances in machine learning AI raise significant issues for copyright law.

I explained why copyright law does not, and should not, recognize computer systems as authors, and why training generative AI on copyrighted works is usually fair use because it falls into the category of non-expressive use.

For more on copyright and generative AI, read Matthew Sag, Copyright Safety for Generative AI (Houston Law Review, Forthcoming) (https://ssrn.com/abstract=4438593)

Law School Academic Impact Rankings, with FLAIR

Cross-posted with Prawfsblog

I am pleased to announce the release of the Forward-Looking Academic Impact Rankings (FLAIR) for US law schools for 2023. I began this project two years ago because of my intense frustration that my law faculty (Loyola Chicago, at the time) had yet again been left out of the Sisk Rankings. The project has evolved and matured since then, and the design of the FLAIR rankings owes a great deal to debates that I have had with Prof. Gregory Sisk, partly in public, but mostly in private.

You can download the full draft paper from SSRN or wait for it to come out in the Florida State University Law Review.

How do the FLAIR rankings work?

I combined individual five-year citation data from HeinOnline with faculty lists scraped directly from almost 200 law school websites to calculate the mean and median five-year citation numbers for every ABA-accredited law school. Yes, that was a lot of work. Based on faculty websites, hiring announcements, and other data sources, I excluded assistant professors and faculty who began their tenure-track careers in 2017 or later. I also limited the focus to what is traditionally considered to be the “doctrinal” faculty. The paper provides more details and the rationales for both of these decisions.
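For readers who want the mechanics, here is a minimal sketch of the aggregation step under a hypothetical data layout (one row per faculty member with a school name and a five-year citation count); the real pipeline, built from HeinOnline exports and scraped faculty lists, is of course messier.

```python
# A sketch of computing per-school mean and median five-year citation counts.
# The rows below are hypothetical stand-ins for the HeinOnline-derived data.
from collections import defaultdict
from statistics import mean, median

rows = [
    ("School A", 120), ("School A", 45), ("School A", 300),
    ("School B", 60), ("School B", 15), ("School B", 90),
]

by_school = defaultdict(list)
for school, citations in rows:
    by_school[school].append(citations)

for school, counts in sorted(by_school.items()):
    print(f"{school}: mean={mean(counts):.1f}, median={median(counts)}")
```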

How do the FLAIR rankings compare to other law school rankings?

Among their many flaws, the U.S. News law school rankings rely on poorly designed, highly subjective surveys to gauge “reputational strength,” rather than looking to easily available, objective citation data that is more valid and reliable. Would-be usurpers of U.S. News use better data but make other arbitrary choices that limit and distort their rankings. One flaw common to U.S. News and those who would displace it is the fetishization of minor differences in placement that do not reflect actual differences in substance. In my view, this information is worse than trivial: it is actively misleading.

The FLAIR rankings use objective citation data that is more valid and reliable than the U.S. News surveys, and unlike the Sisk rankings, FLAIR gives every ABA-accredited law school a chance to have the work of its faculty considered. Obviously, it is much fairer to assess every school rather than arbitrarily excluding some based on an intuition (a demonstrably faulty intuition at that) that particular schools have no chance of ranking in the top X%. Well, it’s obvious to me at least. But perhaps more importantly, looking at all the data gives us a valid context to assess individual data points. The FLAIR rankings are designed to convey relevant distinctions without placing undue emphasis on minor differences in rank that are substantively unimportant. This goes against the horserace mentality that drives so much interest in U.S. News, but I’m not here to sell anything.

What are the relevant distinctions?

The FLAIR rankings assign law faculties to four separate tiers based on how their mean and median five-year citation counts compare to the standard deviation of the means and medians of all faculties. Tier 1 is made up of those faculties that are more than one standard deviation above the mean, Tier 2 is between zero and one standard deviation above the mean, Tier 3 ranges from the mean to half a standard deviation below it, and Tier 4 includes all of the schools more than half a standard deviation below the mean. In other words, Tier 1 schools are exceptional, Tier 2 schools are above average, Tier 3 schools are below average, and Tier 4 schools are well below average.
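Here is a minimal sketch of that tier logic as just described, applied to a school’s mean citation count. This is my reconstruction for illustration, not the paper’s actual code, and the numbers are made up.

```python
# Assign a school to a FLAIR tier by comparing its mean five-year citation
# count to the mean and standard deviation across all schools.
from statistics import mean, stdev

def assign_tier(school_mean: float, overall_mean: float, sd: float) -> int:
    if school_mean > overall_mean + sd:
        return 1  # more than one SD above the mean: exceptional
    if school_mean >= overall_mean:
        return 2  # between zero and one SD above the mean: above average
    if school_mean >= overall_mean - 0.5 * sd:
        return 3  # from the mean to half an SD below: below average
    return 4      # more than half an SD below the mean: well below average

school_means = {"School A": 310.0, "School B": 140.0, "School C": 55.0}  # hypothetical
overall, sd = mean(school_means.values()), stdev(school_means.values())
print({name: assign_tier(m, overall, sd) for name, m in school_means.items()})
```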

The figure below shows a boxplot of the distribution of citation counts for each tier. (There is a more complete explanation in the paper, but essentially, the middle of the boxplot is the median, the box around the median is the middle 50%, and the “whiskers” at either end are the lowest/highest 25%.) The figure illustrates the substantial differences between the tiers, but it also underscores that there is nonetheless considerable overlap between them.
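For anyone who wants to reproduce that kind of figure, here is a minimal matplotlib sketch, with synthetic lognormal draws standing in for the real per-faculty-member citation data:

```python
# A sketch of a tier-level boxplot; the lognormal draws are synthetic
# stand-ins for the citation counts of faculty members in each tier.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
tiers = [rng.lognormal(mean=m, sigma=0.6, size=200) for m in (6.0, 5.2, 4.6, 4.0)]

fig, ax = plt.subplots()
ax.boxplot(tiers, labels=["Tier 1", "Tier 2", "Tier 3", "Tier 4"])
ax.set_ylabel("Five-year citation count")
plt.show()
```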

The FLAIR rankings

The next figure focuses on Tier 1. The FLAIR rank for each school is indicated in parentheses. The boxplot next to each school’s name indicates the distribution of citations for each doctrinal faculty member within that school.

Readers who pay close attention to the U.S. News rankings will note that the top tier consists of 23 schools, not the much vaunted “T14”. The T14 is a meaningless category; it does not reflect any current empirical reality or any substantial differences between the 14th and 15th rank. Attentive readers will also note that several schools well outside of the (hopefully now discredited concept of the) T14—namely U.C. Irvine, U.C. Davis, Emory, William & Mary, and George Washington—are in the top tier. These schools’ academic impact outpaces their overall U.S. News rankings significantly. U.C. Davis outperforms its U.S. News ranking by 42 places!

Looking at the top tier of the FLAIR rankings as visualized in the figure above also illustrates how misleading ordinal differences in ranking can be. There is very little difference between Virginia, Vanderbilt, and the University of Pennsylvania in terms of academic impact. The medians and the general distribution of each of these faculties are quite similar. And thus we can conclude that differences between ranks 6 and 8 are unimportant and that it is not news if Virginia “drops” to 8th or Pennsylvania rises to 6th in the FLAIR rankings, or indeed in the U.S. News rankings.

The differences that matter, and those that don’t

In the Olympics, third place is a bronze medal, and fourth place is nothing; but there are no medals in the legal academy and there is no difference in academic impact between third and fourth that is worth talking about. Minor differences in placement rarely correspond to differences in substance. Accordingly, rather than emphasizing largely irrelevant ordinal comparisons between schools only a few places apart, what we should really focus on is which tier in the rankings a school belongs to. Moreover, even when a difference in ranking suggests that there is a genuine difference in the overall academic impact of one faculty versus another, those aggregate differences say very little about the academic impact of individual faculty members. There is a lot of variation within faculties!

Objections to quantification

Many readers will object to any attempt to quantify academic impact, or to the use of data from HeinOnline specifically. Some of these objections make sense in relation to assessing individuals, but I don’t think that any of them retain much force when applied to assessing faculties as a whole. If we are really interested in the impact of individual scholars, we need to assess a broad range of objective evidence in context; that context comes from reading their work and understanding the field as a whole. In contrast, no one could be expected to read the works of an entire faculty to get a sense of its academic influence. Indeed, citation counts, or other similarly reductive measures, are the only feasible way to make between-faculty comparisons with any degree of rigor. What is more, aggregating the data at the faculty level reduces the impact of individual distortions, much like a mutual fund reduces the volatility associated with individual stocks.

One thing I should be very clear about is that academic impact is not the same thing as quality or merit. This is important because, although I think that the data can be an important tool for overcoming bias, I also need to acknowledge that citation counts will reflect the structural inequalities that pervade the legal academy. A glance at the most common first names among law school doctrinal faculty in the United States is illustrative. In order of frequency, the 15 most common first names are Michael, David, John, Robert, Richard, James, Mark, Daniel, William, Stephen, Paul, Christopher, Thomas, Andrew, and Susan. It should be immediately apparent that this group is more male and probably a lot whiter than a random sample of the U.S. population would predict. As I said, citation counts are a measure of impact, not merit. This is not a problem with citation counts as such; qualitative assessments and reputational surveys suffer the same problem. There is no objective way to assess what the academic impact of individuals or faculties would be in an alternative universe free from racism, sexism, and ableism. A better system of ranking the academic impact of law faculties will more accurately reflect the world we live in. That increased accuracy might help make the world better at the margins, but it won’t do much to fix underlying structural inequalities.

Corrections and updates

Several schools took the opportunity to email me with corrections or updates to their faculty lists in the past three months. If I receive other corrections that might meaningfully change the rankings, I will post a revised version.

Second Annual Legal Scholars Roundtable on Artificial Intelligence 2023

Call For Papers

Emory Law is proud to host the second annual Legal Scholars Roundtable on Artificial Intelligence. The Roundtable will take place on March 30-31, 2023 at Emory University in Atlanta, Georgia.  

The Legal Scholars Roundtable on Artificial Intelligence (AI) is designed to be a forum for the discussion of current legal scholarship on AI, covering a range of methodologies, topics, perspectives, and legal intersections.  

Format 

Between eight and ten papers will be chosen for discussion at the Roundtable, with each paper allocated about an hour in total. Each paper will be introduced briefly by a designated commentator (5-10 minutes), with authors allowed an even briefer chance to respond (0-4 minutes), before general discussion and feedback from participants.

Participation at the Roundtable will be limited and invitation-only. Participants are expected to read all the papers in advance and be prepared to offer substantive comments.  

Topics 

We invite applications to participate, to comment, and/or to present from academics working on any topic relating to legal issues in AI.  

Applications to present, comment, or participate 

Submissions to present can be in the form of either a long abstract or a draft paper; the latter is preferred. Microsoft Word format is preferred.

The deadline for submission is February 10, 2023, and decisions on participation will be made shortly thereafter, ideally by February 17, 2023. If selected, full papers are due March 1, 2023, to permit all participants an opportunity to read the papers prior to the conference. Final submitted papers must be in substantially complete form.

If you would like to make an early submission and request an early decision (because you need to plan for the semester), please do so.  

To apply to participate, comment, or present, please fill out the Google form (https://forms.gle/7d71U5XUzp57pC7M8).

What to expect from the Legal Scholars Roundtable on Artificial Intelligence  

The Legal Scholars Roundtable on Artificial Intelligence is a forum for the discussion of current legal scholarship on AI, spanning a range of methodologies, topics, perspectives, and legal intersections. Authors who present at the Roundtable will be selected from a competitive application process, and commentators are assigned based on their expertise.  

Participants will have an opportunity to provide direct feedback in paper sessions and will have access to draft papers but will be asked not to post papers publicly or share them without author permission. Robust sessions involve energetic feedback from other paper authors, commentators, and participants. Our goal is to ensure all authors have the full participation of all workshop participants in each author’s session.

Essential logistics 

The Roundtable will be held in person on the Emory campus in Atlanta, Georgia. The conference will begin on Thursday morning and run until 1PM on Friday. You can expect to be at the Atlanta airport by 1:30PM, in time for a 2:10PM flight or later on Friday.   

Organizers 

Matthew Sag, Professor of Law in Artificial Intelligence, Machine Learning, and Data Science at Emory University Law School ([email protected])

Charlotte Tschider, Assistant Professor at Loyola Law Chicago (guest co-convenor)  

Emory Law’s Commitment to AI 

Emory University recognizes that artificial intelligence (AI) is a transformative technology that is already reshaping almost every aspect of our lives. Through its AI.Humanity initiative, Emory is building capacity in key areas of AI research and policy, including health care, medical research, business, law, and the humanities.  

Emory Law is aggressively recruiting experts in law and AI who will impact policy and regulatory debates, advise researchers on pathways for ethical and legal AI development, and train the next generation of lawyers.  

Emory Law has long had deep expertise in IP with patent law experts Prof. Margo Bagley and Prof. Tim Holbrook, and in Law & Technology generally thanks to Professor of Practice Nicole Morris, a recognized leader at the intersection of innovation, entrepreneurship and intellectual property. Professor Matthew Sag joined Emory Law in July 2022 as the school’s first hire under the AI.Humanity initiative. Sag is an internationally recognized expert on copyright law and empirical legal studies. He is particularly well known for his pathbreaking work on the legality of using copyrighted works as inputs in machine learning processes, a vital issue in AI. Emory Law’s second AI.Humanity hire, Associate Professor Ifeoma Ajunwa will join Emory Law in the 2023 academic year. Ajunwa’s research interests are at the intersection of law and technology with a particular focus on the ethical governance of workplace technologies.  Ajunwa’s forthcoming book, “The Quantified Worker,” examines the role of technology in the workplace and its effects on management practices as moderated by employment law. Emory Law expects to hire two additional AI researchers this year who will add to our expertise in the legal and policy implications of algorithmic decision-making and in data privacy law.  

As part of its commitment to leadership in the field of law and AI, Emory Law is now the permanent home of the Legal Scholars Roundtable on Artificial Intelligence, convened by Prof. Matthew Sag. 

Lessons for Empirical Studies of Copyright Litigation … A Case Study of Copyright Injunctions

This morning I presented Lessons for Empirical Studies of Copyright Litigation … A Case Study of Copyright Injunctions at CREATe@10 – Copyright Evidence: Synthesis and Futures, University of Glasgow, October 17, 2022.

For those who missed the slides, here they are!

The presentation is based on Matthew Sag and Pamela Samuelson, Discovering eBay’s Impact on Copyright Injunctions Through Empirical Evidence, forthcoming in the William & Mary Law Review (2023) (https://ssrn.com/abstract=3898460).

I have moved to Emory University School of Law

Posts on this website are infrequent these days. But I thought it was worth mentioning that I have moved to Atlanta to take a position on the amazing Emory Law faculty. I was hired as a Professor of Law in Artificial Intelligence, Machine Learning, and Data Science as part of Emory’s bold new AI.Humanity initiative.

You can read the Emory announcement here: https://law.emory.edu/news-and-events/releases/2022/04/sag_joins_emory_law.html