Drafting Law School AI Policies

There are a lot of poorly thought through AI policies out there

Law schools are realizing that they need student conduct policies that address generative AI. But after reviewing many of these policies (and some undergraduate policies as well), I think they often miss the mark. Here are five problems that crop up again and again.

First, many conflate using AI with plagiarism.

Plagiarism, properly defined, is the unacknowledged appropriation of another’s words or ideas. Violations of prohibitions on AI use, by contrast, are often better conceptualized as breaches of disclosure obligations, misrepresentation, or general academic integrity violations. While AI misuse can sometimes constitute plagiarism, it does not necessarily do so. Rules that lump these activities together are too blunt. There are sound reasons, sometimes, to prohibit both, but they should not be conflated. Tarring a wide set of AI uses with the brush of plagiarism is unlikely to win acceptance from students, who will reasonably see such policies as overreach.

Second, definitions are a muddle.

Many policies leave key operative terms—such as “compose,” “proofread,” “substantially edit,” or “small part”—undefined. Absent bright-line rules or illustrative examples, students and faculty are left to infer the policy’s scope, producing inconsistent enforcement and potential due process concerns.

Sweeping prohibitions on “AI use” may unintentionally extend to widely accepted tools, including spellcheckers, grammar correction software, and dictation systems. Such breadth is rarely the drafters’ intent and risks chilling legitimate academic practice. Blanket prohibitions, especially without accommodation mechanisms, may disproportionately disadvantage non-native speakers and students with disabilities who rely on technological assistance, even as comparable human support (e.g., writing centers) remains permissible. If that kind of restriction is intended, it should be express.

Third, some schools are leaning on unreliable technology to police AI use.

Recommendations to use AI detectors or plagiarism software to identify AI-generated work are problematic given their poor reliability. Without cautionary limits, such tools risk false positives and undermine due process.

It is important to understand three key limitations here:

(1) Anti-plagiarism software does not detect novel generative AI outputs;

(2) AI detectors are not reliable in the way that anti-plagiarism software is reliable;

(3) AI detectors generate a large percentage of false positives. They are especially prone to do so in cases involving neurodivergent authorship or use of standard proofreading programs such as Grammarly.

Honestly, you would be better off tossing a coin; at least then you would have a realistic assessment of how far to trust the answer.

Fourth, few schools offer clear ways for students to disclose their use of AI.

Standardized disclosure mechanisms would enhance transparency and promote consistent expectations across courses and instructors.

Fifth, the policies themselves are often inconsistent.

One policy I read takes a categorical approach to prohibiting AI use, but then in a later part of the document it suggests allowing AI “for parts of assignments” and asks instructors to clarify expectations. What?

A template for a better Law School AI policy

So, what should your AI policy look like? It should be clear, specific, comprehensive, and custom-tailored for each course you teach. You can do that with the template I suggest below, just by changing the “mays” to “may nots.”

I’m sure this is not perfect, but I think it’s a useful place to begin. Your use of this template is not plagiarism; I am posting it here because I think you should copy it.

Generic Law School Syllabus AI Use Policy

(1) The use of generative AI in this course is restricted but not entirely prohibited. The restrictions serve multiple, sometimes overlapping, purposes: preserving pedagogical integrity, preserving the integrity of assessment, and helping you avoid plagiarism, misrepresentation, and shoddy work. These restrictions are tailored to this course, so you need to review them carefully.

(2) Key Prohibitions:

(a) In this course you are prohibited from presenting text generated by generative AI as your own in any assessable work product. This means that you may not copy-paste more than 8 consecutive words from any source without specific attribution (superficial changes designed to evade the substance of this rule will be disregarded); you may not present specific insights and ideas from external sources without specific attribution to an appropriate source. In addition, you may not include factual information or citations from generative AI that you have not verified. Work containing obvious AI “hallucinations” of citations or quotations will merit a failing grade.

(b) In addition, you may not use generative AI to develop insights and strategies for specific assigned class activities or assessable work product without specific authorization from your professor. For example, in that context you may not use generative AI:

  • to review legal documents (real and simulated) for potential issues where learning to spot relevant issues is part of the skillset being taught;
  • to suggest negotiation strategies for a simulated deal where learning to develop negotiation strategies is part of the skillset being taught;
  • to practice role-playing as opposing counsel for such a simulated deal or negotiation; and
  • to identify ethical issues in a fact pattern where identifying such issues is part of the skillset being taught.

(c) You may not use generative AI to assist with answering questions presented in class in real time: if you are on-call that does not mean ChatGPT is on-call.

(3) You may use generative AI for research and source discovery, provided you do so responsibly and in compliance with (2) above. Examples of acceptable uses include asking a generative AI tool for caselaw, statutes, and regulations relating to a particular topic, or asking it to review a draft of your work product and suggest additional sources or authorities.

(4) You may use generative AI to improve your work product, provided you do so responsibly and in compliance with (2) above. For example, you may use generative AI for brainstorming/ideation for essay topics, or to suggest a more logical structure for a paper; you may use generative AI to identify weaknesses in argument, counter-arguments you may have overlooked, and otherwise critically evaluate your written work. Likewise, you may use generative AI to improve your understanding of complex legal doctrines, including by asking for different types of explanations thereof, but again, provided you do so responsibly and in compliance with (2) above.

(5) You may use generative AI for detailed assistance with drafting, editing and style, provided you do so responsibly and in compliance with (2) above and with an appropriate disclosure. For example, you may draft a passage and then ask generative AI to rewrite it in a particular style (law review, client email, opening argument), or to maintain a particular style but reduce the word count; you may draft a passage in a language other than English and then ask generative AI for an English translation; you may use generative AI to suggest more effective transitions and topic sentences, introductions and conclusions; you may use generative AI for suggestions as to how to more effectively integrate quotations into your main text.

The disclosure for the editorial assistance described above should be in the following form: “Approximately [10-25 | 25-50]% of this [essay] was redrafted with the assistance of generative AI (list all); however, all of the ideas and analysis are either my own or are appropriately cited.”

(6) You may use generative AI to generate images and charts in assessable work product with specific disclosure, such as a visible note in the caption or figure description: “Chart produced with [name of tool] based on [general description of prompt or underlying data]”.

(7) You may use spellcheck and dictation software without any disclosure.

(8) You may use generative AI to support your learning and comprehension of course materials, provided you do so responsibly and in compliance with (2) above. For example, you may use generative AI as a tutor or a study partner, or to create flashcards, hypotheticals, explanations, quiz questions, etc.; you may use generative AI to summarize and outline course materials; you may use generative AI to suggest answers to non-assessable problem questions, or to evaluate your answers to non-assessable problem questions.

(9) Permitted uses are not necessarily recommended. Direct engagement with primary sources and your own analysis will yield the deepest learning and the most reliable work product. AI may serve as a useful complement—helping to clarify, organize, or refine ideas—but it should be employed thoughtfully and never as a substitute for the skills this course is designed to develop.

For term papers, you need a bit more

I suggest the following additional instructions.

Write in your own voice:

To avoid the impression that your work was written by a chatbot or is just a superficial rephrasing of a few original sources, you must ensure that it reflects your own original analysis, voice, and understanding. Submissions that exhibit unusually advanced legal knowledge, overly polished or professional tone, highly structured policy-style formatting, or extensive use of comparative law without appropriate scaffolding may raise concerns about authorship. Likewise, papers that rely heavily on secondary authority without clear personal engagement can suggest inappropriate use of generative AI or outside assistance.

A good way to demonstrate the originality of your contribution is to explore a narrowly defined, non-obvious topic, rather than a broad or generalized theme arising from the course. A greater level of specificity usually indicates that a student has chosen a unique angle shaped by personal interest or experience.

Research, sources, and citation practices:

Good research and appropriate citation practices go hand in hand.

For most law research papers, you should prioritize primary sources and academic sources. However, for many topics in this course, you will be discussing recent trends and developments, so it will often be appropriate to cite journalistic reports and even blog posts as well. Here are some guidelines for citing propositions relating to Law, Opinion, Key arguments, Facts as summarized by someone else, and Specific facts.

(1) Law: If you are making an assertion about what the law is, you should generally cite case law, statutes, or academic treatises.

(2) Opinion: If you are discussing academic commentary or opinion, cite the relevant source directly.

(3) Key arguments: If you are making an academic argument that already exists in the literature, you should identify who made that argument first. What if you can’t say for sure? If the argument is central to your thesis, put in the effort to be sure! If it is not, sometimes it will suffice to note others who have made the same point in a form such as “For arguments that …, see, for example, …”

(4) Facts as summarized by someone else: If you are referencing facts that have been summarized in academic commentary, you have some discretion as to whether to cite the academic source or go directly to primary sources for the underlying facts. Government reports and think tank publications are also useful for consolidated discussions of facts, as well as insightful commentary and analysis. In general, citing primary sources is preferable unless you are relying on an author’s summary or synthesis of multiple sources.

(5) Specific facts: Background information often comes from blogs, news articles, magazine articles, or even Wikipedia. That is fine. When using these as secondary sources, ask yourself: Is this the most direct source? Is this a reliable source? Whenever possible, prioritize more direct, reliable and authoritative sources to ensure accuracy and credibility. For example, do not cite to a blog post that summarizes an article in the NY Times, if you can read the underlying article and cite it directly.

Caution: AI summaries and dialogs with chatbots are not a reliable source of any external fact. Obviously, you can cite a ChatGPT session for a proposition like, “ChatGPT (version 4o) often recommends Kyoto when asked to suggest a random city.” But you can’t use ChatGPT as authority for the proposition that Kyoto was Japan’s capital from 794 to 1868.

Concluding thoughts

“Law schools are uniquely positioned to model thoughtful, principled engagement with new technology. A well-crafted AI policy can uphold academic integrity without stifling innovation or disadvantaging students. The goal is not to ban the future, but to teach students how to use it responsibly.”

Or that’s what ChatGPT said when I asked for suggestions on how to conclude this post. I use LLMs in lots of different ways, and this post benefited from long discussions with ChatGPT and with my Emory Law colleagues, but it does not reflect the views of Emory Law, or of ChatGPT for that matter.

Emory Law AI Roundtable 2025

The Fourth Annual Legal Scholars Roundtable on Artificial Intelligence 2025 will be held next week at Emory Law and I am very excited by the amazing line-up of speakers and commentators we have.

AI Roundtable Papers

Neel Guha, Information in AI Regulation
Michael Goodyear, Dignity and Deepfakes
Kat Geddes, AI’s Attribution Problem
Deven Desai & Mark Riedl, Responsible AI Agents
Nikola Datzov, AI Jurisprudence: Toward Automated Justice
Yiyang Mei & Matthew Sag, The Illusion of Rights-Based AI Regulation
David Rubenstein, Federalism & Algorithms
Oren Bracha, Generative AI Two Information Goods

Some of these papers are available in draft on SSRN.com or arXiv.org; others are still in development.

AI Roundtable Keynote

We also have a special keynote from Prof. Barton Beebe, presenting his new book manuscript “Technological Change and the Beautiful Deaths of Law: A Recurring History.” The Roundtable is invitation-only; Emory faculty and students who are interested in attending should contact me for details.

History of the Legal Scholars Roundtable on Artificial Intelligence

The Roundtable was founded by Professor Matthew Sag and Professor Charlotte Tschider in March 2022 as an online event (due to the COVID-19 pandemic) and has been conducted as an annual event at Emory Law School ever since. The Roundtable is supported by Emory University School of Law and by Emory’s AI.Humanity initiative.

The following were recognized as the Roundtable’s Best Paper in their respective years: Rebecca Crootof, Margot Kaminski, & Nicholson Price, Humans in the Loop, 76 Vanderbilt Law Review 429 (2023) (Best Paper of 2022); Matthew T. Wansley, Regulating Driving Automation Safety, 73 Emory Law Journal 505 (2024) (Best Paper of 2023); Mark Bartholomew, A Right to Be Left Dead, 112 California Law Review 1591 (2024) (Best Paper of 2024).

Copyright and the AI Action Plan

On March 14, 2025, I submitted my comments to the Office of Science and Technology Policy in relation to the “AI Action Plan”. For context, the Office of Science and Technology Policy requested input on the Development of an Artificial Intelligence (AI) Action Plan to define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation. See Exec. Order No. 14,179, 90 Fed. Reg. 8741 (Jan. 31, 2025) (Executive Order titled “Removing Barriers to American Leadership in Artificial Intelligence,” signed by President Trump).

What follows is a lightly edited version of those comments (mostly removing footnotes, but also making a couple of minor improvements).

AI Action Plan, Submission to the Office of Science and Technology Policy

I am the Jonas Robitscher Professor of Law in Artificial Intelligence, Machine Learning, and Data Science, Emory University. I appreciate the opportunity to contribute to OSTP’s call for policy ideas aimed at enhancing America’s global leadership in Artificial Intelligence (AI).

My primary points in this submission are, first, that if, contrary to precedent and sound policy, American courts rule that training AI models on copyrighted works is not permissible as fair use, the U.S. government must be ready to act; and second, that to maintain U.S. leadership in artificial intelligence, the AI Action Plan should explicitly affirm the importance of broad copyright exceptions—particularly fair use for nonexpressive activities like AI model training.

How copyright law in various countries deals with AI training

In The Globalization of Copyright Exceptions for AI Training my co-author Professor Peter Yu and I examine how copyright frameworks across the world have addressed the apparent tension between copyright law and copy-reliant technologies such as computational data analysis in the form of text data mining (TDM), machine learning and AI.

Our research reveals that, although the world has yet to achieve a true consensus on copyright and AI training, an international equilibrium has emerged. In this equilibrium, countries recognize that TDM, machine learning and AI training can be socially valuable and do not inherently prejudice the copyright holders’ legitimate interests. Policymakers in the European Union, Japan, Israel, and Singapore agree in general terms that such uses should therefore be allowed without express authorization in some, but not necessarily all, circumstances.

Major industrialized economies have found different ways to this equilibrium position. Some, like the U.S. and Israel, have done so through the fair use doctrine. Others, like Japan, Singapore, and the European Union, have crafted express copyright exceptions for TDM and computational data analysis. Other nations where the rule of law is not so clearly established are energetically pursuing AI development with state backing, without updated copyright laws to facilitate AI training. There is little doubt that if the Chinese Communist Party deems copyright law an impediment to its AI ambitions, the law in China will change almost instantaneously, and very likely retrospectively.

U.S. litigation could unsettle global AI copyright norms

American courts have historically recognized fair use protections for technologies relying on nonexpressive copying, such as reverse engineering, plagiarism detection software, digital library searches, and computational humanities research spanning millions of scanned texts. Extending this principle logically, training AI models—which similarly involves copying without directly reproducing expressive content—would usually qualify as fair use. (For citations and discussion of the relevant literature, see Matthew Sag, Fairness and Fair Use in Generative AI, 92 Fordham Law Review 1887 (2024))

Yet, plaintiffs in more than 30 ongoing lawsuits across U.S. district courts contest this view. Collectively, they seek injunctions barring AI training without explicit consent, billions in monetary compensation, and even destruction of existing AI models. Although, in my estimation and that of many copyright experts, the plaintiffs should not prevail on sweeping arguments that would bring AI training in the U.S. to a halt, they might.

A bad court decision may drive AI innovation offshore

Adverse outcomes in U.S. litigation will not stop the development of AI; they will simply push AI innovation overseas. The reason is straightforward: AI models, once trained, are easily portable. Companies seeking to avoid restrictive copyright rules could simply move their training operations to innovation-friendly jurisdictions like Singapore, Israel, or Japan, and then serve U.S. customers remotely, entirely free of domestic copyright concerns.

How is this possible? AI developers need fair use for all the copying that takes place to make training possible, but they don’t need fair use once the models have been trained because, by and large, trained AI models do not replicate the expressive details of their training datasets; instead, they distill general patterns, abstractions, and insights from that training data.

Thus, in the eyes of copyright law, these models are neither copies nor derivative works based on the training data. If U.S. copyright law turns against our AI industry, companies in the U.S. will still be able to use models trained in AI-friendly jurisdictions, either by setting up a data pipeline so that the model stays overseas or by hosting their models in the United States once they have been trained. Consequently, imposing overly restrictive copyright interpretations domestically will do very little to turn back the tide on AI, but risks surrendering America’s AI advantage to more AI-friendly jurisdictions.

Licensing deals are no substitute for fair use

While licensing agreements between AI developers and media companies are becoming more common, they cannot solve copyright concerns surrounding AI training. The sheer scale of AI training data makes the licensing approach impractical at the cutting edge. For instance, Meta’s recent Llama 3 model consumed over 15 trillion (15,000,000,000,000) tokens drawn from publicly accessible sources. To put this into perspective, assuming that the New York Times print edition is roughly fifty pages per day, each page has 4000 words (this is probably way over!), and there are 1.3 tokens per word, the newspaper would generate roughly 1.82 million tokens per week. At that rate, it would take about 158,500 years for the New York Times to generate 15 trillion tokens.
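
If you want to check that arithmetic, or rerun it with your own assumptions, here is a minimal back-of-the-envelope sketch in Python. The inputs are simply the assumptions stated above (pages per day, words per page, tokens per word), not measured values.

```python
# Back-of-the-envelope scale comparison, using the assumptions stated above.
PAGES_PER_DAY = 50          # assumed NYT print pages per day
WORDS_PER_PAGE = 4_000      # assumed words per page (probably an overestimate)
TOKENS_PER_WORD = 1.3       # rough tokens-per-word ratio
TRAINING_TOKENS = 15e12     # ~15 trillion tokens reportedly used to train Llama 3

tokens_per_week = PAGES_PER_DAY * WORDS_PER_PAGE * TOKENS_PER_WORD * 7
years_to_match = TRAINING_TOKENS / tokens_per_week / 52

print(f"Tokens per week: {tokens_per_week:,.0f}")            # about 1,820,000
print(f"Years to reach 15 trillion: {years_to_match:,.0f}")  # about 158,500
```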

Licensing may be possible for some AI training, but licensing at the scale required to train frontier LLMs is not a realistic foundation for American industrial policy; it is a fantasy.

Nevertheless, existing deals with major media companies illustrate something important: AI developers are willing to pay for efficient access to high-quality datasets otherwise locked behind paywalls or machine-readable restrictions. Such agreements suggest that licensing has a niche but crucial role—not as a substitute for broad exceptions like fair use, but rather as a complementary source of premium training data. This dynamic becomes particularly valuable in AI-powered search scenarios, where language models frequently generate outputs closely resembling original copyrighted content, pushing the boundaries between acceptable use and potential infringement.

The U.S. Government must be ready to act

If, contrary to precedent and sound policy in my view, American courts rule that training AI models on copyrighted works is not permissible as fair use, the U.S. government should act. Specifically, the government would need to introduce legislation to reinstate the principle that training AI models typically falls under fair use or create a specific statutory exemption. I see no way this could be done through agency rulemaking or executive action. Legislative intervention would be necessary to safeguard America’s competitive edge against innovation-friendly jurisdictions like Japan, Singapore, Israel, and, in this context, even the European Union.

To maintain U.S. leadership in artificial intelligence, the AI Action Plan should explicitly affirm the importance of broad copyright exceptions—particularly fair use for nonexpressive activities like AI model training.

London Marathon!

On April 27, 2025, I will be running the London Marathon to raise money for research and treatment of pancreatic cancer.

The future is what we make it

My sister Rebecca was diagnosed with pancreatic cancer in 2018. She was as brave as anyone could be, but her battle was short-lived; Becky passed away just weeks after her diagnosis. Her story is typical: pancreatic cancer is almost always fatal because it is diagnosed too late.

But we can change this story.

I am asking you to help me raise money for Pancreatic Cancer UK which is pioneering efforts to develop earlier detection methods to give others the fighting chance that Becky deserved.

There are several different ways you can contribute:

(1) donate directly to Pancreatic Cancer UK at https://2025tcslondonmarathon.enthuse.com/pf/matthew-sag (immediate impact)

(2) send me money via Venmo (@Matthew-Sag) that I will then aggregate, convert into GBP, and donate in your name (minimizes foreign transaction fees)

(3) donate to a different pancreatic cancer charity in your country of choice, send me the details, and I’ll match that with a donation to Pancreatic Cancer UK (local impact)

Every contributor gets to add a song to my London Marathon Spotify Playlist!

If you donate for this cause, you can let me know what song I should add to my London Marathon Spotify Playlist. I will set it to shuffle during the race and try to remember who suggested which song.

Please join me 

Please join me on this journey: donate in Rebecca’s memory and bring us closer to a future where no more lives are stolen by this devastating disease.

Follow my progress

To see how my training is going, follow me on Strava, check out my Google spreadsheet (https://docs.google.com/spreadsheets/d/16hV2e-IxXbo01uM7_X5tGyOz6ONPtSZGJZdjKUeVH0k/edit?usp=sharing), or get narrative updates here on this page (https://2025tcslondonmarathon.enthuse.com/pf/matthew-sag).

Playlist so far …

Queen, Don’t Stop Me Now (my pick)

The Weeknd, Blinding Lights (my pick)

Monty Python, Always Look on the Bright Side of Life (Matt & Mindy Lawrence)

The Clash, London Calling (Spencer Waller)

Bruce Springsteen, Born to Run (Richard Fields)

Olivia Newton-John, Xanadu (Jo Groube)

The Beatles, The Long and Winding Road (Jan & Andy Sag)

Third Annual Legal Scholars Roundtable on Artificial Intelligence 2024

Call For Papers

Roundtable

Emory Law is proud to host the third annual Legal Scholars Roundtable on Artificial Intelligence. The Roundtable will take place on April 11-12, 2024 at Emory University in Atlanta, Georgia. The Legal Scholars Roundtable on Artificial Intelligence (AI) is designed to be a forum for the discussion of current legal scholarship on AI, covering a range of methodologies, topics, perspectives, and legal intersections.

Format  
Participation at the Roundtable will be limited and invitation-only. Participants are expected to read all the papers in advance and be prepared to offer substantive comments. We will try to accommodate a limited number of Zoom-based participants, but in person attendance is strongly preferred.

Applications to present, comment, or participate
We invite applications to participate, to comment, and/or to present from academics working on any topic relating to legal issues in AI. To request to present, you need to submit a substantially complete draft paper. Microsoft Word format is strongly preferred for these purposes, but you may also submit a PDF version for broader distribution. The deadline for submission is February 23, 2024, and decisions on participation will be made shortly thereafter, ideally by March 4, 2024. If selected, final manuscripts are due April 1, 2024, to permit all participants an opportunity to read the papers prior to the conference.

To apply to participate, comment, or present, please fill out the Google form: (https://forms.gle/Ubv2maLWfMK5tbPs8).

What to expect from the Legal Scholars Roundtable on Artificial Intelligence
The Legal Scholars Roundtable on Artificial Intelligence is a forum for the discussion of current legal scholarship on AI, spanning a range of methodologies, topics, perspectives, and legal intersections. Authors who present at the Roundtable will be selected from a competitive application process, and commentators are assigned based on their expertise. Participants will have an opportunity to provide direct feedback in paper sessions and will have access to draft papers but will be asked not to post papers publicly or share without author permission. Robust sessions involve energetic feedback from other paper authors, commentators, and participants. Our goal is to ensure all authors have the full participation of all workshop participants in each author’s session.

Essential logistics
The Roundtable will be held in person on the Emory campus in Atlanta, Georgia. The conference will begin on Thursday morning and run until 1PM on Friday. You can expect to be at the Atlanta airport by 1:45 PM, in time for a 2:30 PM flight or later on Friday. We will pay for your reasonable (economy) travel and accommodation expenses within the U.S. At the roundtable you will be well fed and caffeinated.

Organizers
Matthew Sag, Professor of Law in Artificial Intelligence, Machine Learning, and Data Science at Emory University Law School (msag@emory.edu)
Charlotte Tschider, Associate Professor at Loyola Law Chicago (ctschider@luc.edu)

Emory Law’s Commitment to AI
Emory University recognizes that artificial intelligence (AI) is a transformative technology that is already reshaping almost every aspect of our lives. Through its AI.Humanity initiative, Emory is building capacity in key areas of AI research and policy, including health care, medical research, business, law, and the humanities.

Second Annual Legal Scholars Roundtable on Artificial Intelligence 2023

Call For Papers

Emory Law is proud to host the second annual Legal Scholars Roundtable on Artificial Intelligence. The Roundtable will take place on March 30-31, 2023 at Emory University in Atlanta, Georgia.  

The Legal Scholars Roundtable on Artificial Intelligence (AI) is designed to be a forum for the discussion of current legal scholarship on AI, covering a range of methodologies, topics, perspectives, and legal intersections.  

Format 

Between eight and ten papers will be chosen for discussion at the Roundtable, with each paper allocated about an hour in total. Each paper will be introduced briefly by a designated commentator (5-10 minutes), with authors allowed an even briefer chance to respond (0-4 minutes), before general discussion and feedback from participants.

Participation at the Roundtable will be limited and invitation-only. Participants are expected to read all the papers in advance and be prepared to offer substantive comments.  

Topics 

We invite applications to participate, to comment, and/or to present from academics working on any topic relating to legal issues in AI.  

Applications to present, comment, or participate 

Submissions to present can be either a long abstract or a draft paper; the latter is preferred. Microsoft Word format is preferred.

The deadline for submission is February 10, 2023, and decisions on participation will be made shortly thereafter, ideally by February 17, 2023. If selected, full papers are due March 1, 2023, to permit all participants an opportunity to read the papers prior to the conference. Final submitted papers must be in substantially complete form.

If you would like to make an early submission and request an early decision (because you need to plan for the semester), please do so.  

To apply to participate, comment, or present, please fill out the Google form: (https://forms.gle/7d71U5XUzp57pC7M8).

What to expect from the Legal Scholars Roundtable on Artificial Intelligence  

The Legal Scholars Roundtable on Artificial Intelligence is a forum for the discussion of current legal scholarship on AI, spanning a range of methodologies, topics, perspectives, and legal intersections. Authors who present at the Roundtable will be selected from a competitive application process, and commentators are assigned based on their expertise.  

Participants will have an opportunity to provide direct feedback in paper sessions and will have access to draft papers but will be asked not to post papers publicly or share without author permission. Robust sessions involve energetic feedback from other paper authors, commentators, and participants. Our goal is to ensure all authors have the full participation of all workshop participants in each author’s session.

Essential logistics 

The Roundtable will be held in person on the Emory campus in Atlanta, Georgia. The conference will begin on Thursday morning and run until 1:00 PM on Friday. You can expect to be at the Atlanta airport by 1:30 PM, in time for a 2:10 PM flight or later on Friday.

Organizers 

Matthew Sag, Professor of Law in Artificial Intelligence, Machine Learning, and Data Science at Emory University Law School (msag@emory.edu) 

Charlotte Tschider, Assistant Professor at Loyola Law Chicago (guest co-convenor)  

Emory Law’s Commitment to AI 

Emory University recognizes that artificial intelligence (AI) is a transformative technology that is already reshaping almost every aspect of our lives. Through its AI.Humanity initiative, Emory is building capacity in key areas of AI research and policy, including health care, medical research, business, law, and the humanities.  

Emory Law is aggressively recruiting experts in law and AI who will impact policy and regulatory debates, advise researchers on pathways for ethical and legal AI development, and train the next generation of lawyers.  

Emory Law has long had deep expertise in IP with patent law experts Prof. Margo Bagley and Prof. Tim Holbrook, and in Law & Technology generally thanks to Professor of Practice Nicole Morris, a recognized leader at the intersection of innovation, entrepreneurship and intellectual property. Professor Matthew Sag joined Emory Law in July 2022 as the school’s first hire under the AI.Humanity initiative. Sag is an internationally recognized expert on copyright law and empirical legal studies. He is particularly well known for his pathbreaking work on the legality of using copyrighted works as inputs in machine learning processes, a vital issue in AI. Emory Law’s second AI.Humanity hire, Associate Professor Ifeoma Ajunwa will join Emory Law in the 2023 academic year. Ajunwa’s research interests are at the intersection of law and technology with a particular focus on the ethical governance of workplace technologies.  Ajunwa’s forthcoming book, “The Quantified Worker,” examines the role of technology in the workplace and its effects on management practices as moderated by employment law. Emory Law expects to hire two additional AI researchers this year who will add to our expertise in the legal and policy implications of algorithmic decision-making and in data privacy law.  

As part of its commitment to leadership in the field of law and AI, Emory Law is now the permanent home of the Legal Scholars Roundtable on Artificial Intelligence, convened by Prof. Matthew Sag. 

I have moved to Emory University School of Law

Posts on this website are infrequent these days. But I thought it was worth mentioning that I have moved to Atlanta to take a position on the amazing Emory Law faculty. I was hired as a Professor of Law in Artificial Intelligence, Machine Learning, and Data Science as part of Emory’s bold new AI.Humanity initiative.

You can read the Emory announcement here: https://law.emory.edu/news-and-events/releases/2022/04/sag_joins_emory_law.html

Legal Scholars Roundtable on Artificial Intelligence: Call for Papers

Loyola University of Chicago is proud to present the first annual Legal Scholars Roundtable on Artificial Intelligence. The Legal Scholars Roundtable on Artificial Intelligence will take place online on March 18, 2022.

The Legal Scholars Roundtable on Artificial Intelligence is designed to be a forum for the discussion of current legal scholarship on AI, covering a range of methodologies, topics, perspectives, and legal intersections.

Between four and eight papers will be chosen for discussion for this inaugural roundtable, with each paper allocated up to an hour for discussion. Each paper will be introduced briefly by a designated commentator (3-8 minutes), with authors allowed an even briefer chance to respond (0-4 minutes), before general discussion.  

The Roundtable will be held Friday, March 18, 2022, beginning at 10:00 AM Central Time (to accommodate participants on the West Coast) and running until 5:00 PM Central Time.

Participation at the Roundtable will be limited and invitation-only and participants are expected to have read the papers of other participants in advance and be prepared to offer substantive comments.

We invite applications to participate, to comment, and/or to present from academics working on any topic relating to legal issues in artificial intelligence including:

    • Competition/Antitrust
    • Consumer protection/regulatory law
    • Contract law
    • Corporations law
    • Criminal justice
    • Cybersecurity
    • Data privacy
    • Discrimination
    • Health law
    • Intellectual property
    • Tort law

To present, submissions must be substantially complete drafts in Microsoft Word format. The deadline for submission is Friday, February 11, 2022, and decisions on participation will be made shortly thereafter, ideally by February 18, 2022.

We anticipate this Legal Scholars Roundtable on Artificial Intelligence will bring together a diverse intellectual community, and we plan to sustain that community with a series of in-person and online conferences in the coming years. We invite you to be part of this inaugural event!

To apply to participate, comment, or present, please fill out the Google form (https://forms.gle/yhXANrTAWHcJciHk9). Those wishing to present should also email their papers to msag@luc.edu. A subject line of “Legal Scholars AI 2022” would be helpful.

The Roundtable will be convened by Loyola Chicago Professors Matthew Sag and Charlotte Tschider. Matthew is a leading expert on the copyright implications of text data mining in the machine learning and AI context. Charlotte’s scholarship focuses on the implications of information privacy, cybersecurity, and artificial intelligence for the global health care industry. For further information about the Roundtable, please email either: Matthew Sag (msag@luc.edu) or Charlotte Tschider (ctschider@luc.edu).

#LegalScholarsAI

#LegalScholarsAI2022

NEH grant awarded to build legal literacies for text data mining

I am thrilled to share the news that the National Endowment for the Humanities (NEH) has awarded a $165,000 grant to a team of legal experts, librarians, and scholars who will help humanities researchers and staff navigate complex legal questions in cutting-edge digital research. The team is led by UC Berkeley, but involves several other leading universities, including Loyola Law Chicago.

The NEH has agreed to support an Institute for Advanced Topics in the Digital Humanities to help key stakeholders learn to better navigate legal issues in text data mining. Thanks to the NEH’s $165,000 grant, a national team (identified below) from more than a dozen institutions and organizations will run a summer institute to teach humanities researchers, librarians, and research staff how to confidently navigate the major legal issues that arise in text data mining research.

Our institute is aptly called Building Legal Literacies for Text Data Mining (Building LLTDM), and will run from June 23-26, 2020 in Berkeley, California.

Rachael Samberg of UC Berkeley Library’s Office of Scholarly Communication Services was our fearless leader in the grant proposal; her amazing leadership and dedication can’t be overstated! More details on the grant can be found in Rachael Samberg’s post. But to give you some idea of the significance of this grant, here are a few comments from team members:

Building LLTDM team member Matthew Sag, a law professor at Loyola University Chicago School of Law and leading expert on copyright issues in the digital humanities, said he is “excited to have the chance to help the next generation of text data mining researchers open up new horizons in knowledge discovery. We have learned so much in the past ten years working on HathiTrust [a text-minable digital library] and related issues. I’m looking forward to sharing that knowledge and learning from others in the text data mining community.” 

Team member Brandon Butler, a copyright lawyer and library policy expert at the University of Virginia, said, “In my experience there’s a lot of interest in these research methods among graduate students and early-career scholars, a population that may not feel empowered to engage in ‘risky’ research. I’ve also seen that digital humanities practitioners have a strong commitment to equity, and they are working to build technical literacies outside the walls of elite institutions. Building legal literacies helps ease the burden of uncertainty and smooth the way toward wider, more equitable engagement with these research methods.”

Kyle K. Courtney of Harvard University serves as Copyright Advisor at Harvard Library’s Office for Scholarly Communication, and is also a Building LLTDM team member. Courtney added, “We are seeing more and more questions from scholars of all disciplines around these text data mining issues. The wealth of full-text online materials and new research tools provide scholars the opportunity to analyze large sets of data, but they also bring new challenges having to do with the use and sharing not only of the data but also of the technological tools researchers develop to study them. I am excited to join the Building LLTDM team and help clarify these issues and empower humanities scholars and librarians working in this field.”

Megan Senseney, Head of the Office of Digital Innovation and Stewardship at the University of Arizona Libraries, reflected on the opportunities for ongoing library engagement that extends beyond the initial institute. Senseney said, “Establishing a shared understanding of the legal landscape for TDM is vital to supporting research in the digital humanities and developing a new suite of library services in digital scholarship. I’m honored to work and learn alongside a team of legal experts, librarians, and researchers to create this institute, and I look forward to integrating these materials into instruction and outreach initiatives at our respective universities.”

Team Members

  • Rachael G. Samberg (University of California, Berkeley) (Project Director)
  • Scott Althaus (University of Illinois, Urbana-Champaign)
  • David Bamman (University of California, Berkeley)
  • Sara Benson (University of Illinois, Urbana-Champaign)
  • Brandon Butler (University of Virginia)
  • Beth Cate (Indiana University, Bloomington)
  • Kyle K. Courtney (Harvard University)
  • Maria Gould (California Digital Library)
  • Cody Hennesy (University of Minnesota, Twin Cities)
  • Eleanor Koehl (University of Michigan)
  • Thomas Padilla (University of Nevada, Las Vegas; OCLC Research)
  • Stacy Reardon (University of California, Berkeley)
  • Matthew Sag (Loyola University Chicago)
  • Brianna Schofield (Authors Alliance)
  • Megan Senseney (University of Arizona)
  • Glen Worthey (Stanford University)