Drafting Law School AI Policies

There are a lot of poorly thought-through AI policies out there.

Law schools are realizing that they need student conduct policies that address generative AI. But after reviewing many of their policies (and some undergrad policies as well), I feel they often miss the mark. Here are five problems that crop up again and again.

First, many conflate using AI with plagiarism.

Plagiarism, properly defined, is the unacknowledged appropriation of another’s words or ideas. Violations of prohibitions on AI use, by contrast, are often better conceptualized as breaches of disclosure obligations, misrepresentation, or general academic integrity violations. While AI misuse can sometimes constitute plagiarism, it does not necessarily do so. Rules that lump these activities together are too blunt. There are sometimes sound reasons to prohibit both, but they should not be conflated. Tarring a wide set of AI uses with the brush of plagiarism is unlikely to win acceptance from students, who will reasonably see such policies as overreach.

Second, definitions are a muddle.

Many policies leave key operative terms—such as “compose,” “proofread,” “substantially edit,” or “small part”—undefined. Absent bright-line rules or illustrative examples, students and faculty are left to infer the policy’s scope, producing inconsistent enforcement and potential due process concerns.

Sweeping prohibitions on “AI use” may unintentionally extend to widely accepted tools, including spellcheckers, grammar correction software, and dictation systems. Such breadth is rarely the drafters’ intent and risks chilling legitimate academic practice. Blanket prohibitions, especially without accommodation mechanisms, may disproportionately disadvantage non-native speakers and students with disabilities who rely on technological assistance, even as comparable human support (e.g., writing centers) remains permissible. If that kind of restriction is intended, it should be express.

Third, some schools are leaning on unreliable technology to police AI use.

Recommendations to use AI detectors or plagiarism software to identify AI-generated work are problematic given their poor reliability. Without cautionary limits, such tools risk false positives and undermine due process.

It is important to understand three key limitations here:

(1) Anti-plagiarism software does not detect novel generative AI outputs;

(2) AI detectors are not reliable in the way anti-plagiarism software is reliable;

(3) AI detectors generate a large percentage of false positives. They are especially prone to do so in cases involving neurodivergent authorship or use of standard proofreading programs such as Grammarly.

Honestly, you would be better off tossing a coin; at least then you would have a realistic assessment of how far to trust the answer.
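To make the false-positive problem concrete, here is a quick back-of-the-envelope calculation. All the numbers (class size, rate of AI use, detector accuracy) are hypothetical, chosen only to illustrate the base-rate effect, not drawn from any real detector’s published statistics:

```python
# Hypothetical base-rate sketch: why a detector with a "low" false-positive
# rate still produces many wrongful flags when most students did not use AI.
# All figures below are assumed for illustration only.

n_students = 200
ai_users = 20                  # assume 10% of the class actually used AI
honest = n_students - ai_users

# Assume the detector catches 80% of AI text
# and falsely flags 5% of human-written text.
correct_flags = ai_users * 80 // 100   # students rightly flagged
wrongful_flags = honest * 5 // 100     # honest students falsely accused

print(correct_flags, wrongful_flags)   # 16 rightful flags, 9 wrongful ones
```

Even under these charitable assumptions, more than a third of all flagged students would be innocent, and the lower the true rate of AI use, the worse that ratio becomes.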

Fourth, few schools offer clear ways for students to disclose their use of AI.

Standardized disclosure mechanisms would enhance transparency and promote consistent expectations across courses and instructors.

Fifth, the policies themselves are often inconsistent.

One policy I read takes a categorical approach, prohibiting AI use, but in a later part of the document it suggests allowing AI “for parts of assignments” and asks instructors to clarify expectations. What?

A template for a better Law School AI policy

So, what should your AI policy look like? It should be clear, specific, comprehensive, and custom-tailored for each course you teach. You can do that with the template I suggest below, just by changing the “mays” to “may nots.”

I’m sure this is not perfect, but I think it’s a useful place to begin. Your use of this template is not plagiarism; I am posting it here because I think you should copy it.

Generic Law School Syllabus AI Use Policy

(1) The use of generative AI in this course is restricted but not entirely prohibited. The restrictions serve multiple, sometimes overlapping, purposes: preserving pedagogical integrity, preserving the integrity of assessment, and helping you avoid plagiarism, misrepresentation, and shoddy work. These restrictions are tailored to this course, so you need to review them carefully.

(2) Key Prohibitions:

(a) In this course you are prohibited from presenting text generated by generative AI as your own in any assessable work product. This means that you may not copy-paste more than 8 consecutive words from any source without specific attribution (superficial changes designed to evade the substance of this rule will be disregarded); you may not present specific insights and ideas from external sources without specific attribution to an appropriate source. In addition, you may not include factual information or citations from generative AI that you have not verified. Work containing obvious AI “hallucinations” of citations or quotations will merit a failing grade.

(b) In addition, you may not use generative AI to develop insights and strategies for specific assigned class activities or assessable work product without specific authorization from your professor. For example, in that context you may not use generative AI

  • to review legal documents (real and simulated) for potential issues where learning to spot relevant issues is part of the skillset being taught;
  • to suggest negotiation strategies for a simulated deal where learning to develop negotiation strategies is part of the skillset being taught;
  • to practice role-playing as opposing counsel for such a simulated deal or negotiation; and
  • to identify ethical issues in a fact pattern where identifying such issues is part of the skillset being taught.

(c) You may not use generative AI to assist with answering questions presented in class in real time: if you are on-call that does not mean ChatGPT is on-call.

(3) You may use generative AI for research and source discovery provided you do so responsibly and in compliance with (2) above. Examples of acceptable uses include asking a generative AI tool for caselaw, statutes, and regulations relating to a particular topic, or asking it to review a draft of your work product and suggest additional sources or authorities.

(4) You may use generative AI to improve your work product, provided you do so responsibly and in compliance with (2) above. For example, you may use generative AI for brainstorming/ideation for essay topics, or to suggest a more logical structure for a paper; you may use generative AI to identify weaknesses in argument, counter-arguments you may have overlooked, and otherwise critically evaluate your written work. Likewise, you may use generative AI to improve your understanding of complex legal doctrines, including by asking for different types of explanations thereof, but again, provided you do so responsibly and in compliance with (2) above.

(5) You may use generative AI for detailed assistance with drafting, editing and style, provided you do so responsibly and in compliance with (2) above and with an appropriate disclosure. For example, you may draft a passage and then ask generative AI to rewrite it in a particular style (law review, client email, opening argument), or to maintain a particular style but reduce the word count; you may draft a passage in a language other than English and then ask generative AI for an English translation; you may use generative AI to suggest more effective transitions and topic sentences, introductions and conclusions; you may use generative AI for suggestions as to how to more effectively integrate quotations into your main text.

The disclosure for the editorial assistance described above should be in the following form: “Approximately [10-25 | 25-50]% of this [essay] was redrafted with the assistance of generative AI (list all); however, all of the ideas and analysis are either my own or are appropriately cited.”

(6) You may use generative AI to generate images and charts in assessable work product with specific disclosure, such as a visible note in the caption or figure description: “Chart produced with [name of tool] based on [general description of prompt or underlying data]”.

(7) You may use spellcheck and dictation software without any disclosure.

(8) You may use generative AI to support your learning and comprehension of course materials, provided you do so responsibly and in compliance with (2) above. For example, you may use generative AI as a tutor or a study partner, or to create flashcards, hypotheticals, explanations, quiz questions, etc.; you may use generative AI to summarize and outline course materials; you may use generative AI to suggest answers to non-assessable problem questions, or to evaluate your answers to non-assessable problem questions.

(9) Permitted uses are not necessarily recommended. Direct engagement with primary sources and your own analysis will yield the deepest learning and the most reliable work product. AI may serve as a useful complement—helping to clarify, organize, or refine ideas—but it should be employed thoughtfully and never as a substitute for the skills this course is designed to develop.

For term papers, you need a bit more

I suggest the following additional instructions.

Write in your own voice:

To avoid the impression that your work was written by a chatbot or is just a superficial rephrasing of a few original sources, you must ensure that it reflects your own original analysis, voice, and understanding. Submissions that exhibit unusually advanced legal knowledge, overly polished or professional tone, highly structured policy-style formatting, or extensive use of comparative law without appropriate scaffolding may raise concerns about authorship. Likewise, papers that rely heavily on secondary authority without clear personal engagement can suggest inappropriate use of generative AI or outside assistance.

A good way to demonstrate the originality of your contribution is to explore a narrowly defined, non-obvious topic, rather than a broad or generalized theme arising from the course. A greater level of specificity usually indicates that a student has chosen a unique angle shaped by personal interest or experience.

Research, sources, and citation practices:

Good research and appropriate citation practices go hand in hand.

For most law research papers, you should prioritize primary sources and academic sources. However, for many topics in this course, you will be discussing recent trends and developments, so it will often be appropriate to cite journalistic reports and even blog posts as well. Here are some guidelines for citing propositions relating to Law, Opinion, Key arguments, Facts as summarized by someone else, and Specific facts.

(1) Law: If you are making an assertion about what the law is, you should generally cite case law, statutes, or academic treatises.

(2) Opinion: If you are discussing academic commentary or opinion, cite the relevant source directly.

(3) Key arguments: If you are making an academic argument that already exists in the literature, you should identify who made that argument first. What if you can’t say for sure? If the argument is central to your thesis, put in the effort to be sure! If it is not, sometimes it will suffice to note others who have made the same point in a form such as “For arguments that …, see, for example, …”

(4) Facts as summarized by someone else: If you are referencing facts that have been summarized in academic commentary, you have some discretion as to whether to cite the academic source or go directly to primary sources for the underlying facts. Government reports and think tank publications are also useful for consolidated discussions of facts, as well as insightful commentary and analysis. In general, citing primary sources is preferable unless you are relying on an author’s summary or synthesis of multiple sources.

(5) Specific facts: Background information often comes from blogs, news articles, magazine articles, or even Wikipedia. That is fine. When using these as secondary sources, ask yourself: Is this the most direct source? Is this a reliable source? Whenever possible, prioritize more direct, reliable and authoritative sources to ensure accuracy and credibility. For example, do not cite to a blog post that summarizes an article in the NY Times, if you can read the underlying article and cite it directly.

Caution: AI summaries and dialogs with chatbots are not a reliable source of any external fact. Obviously, you can cite a ChatGPT session for a proposition like, “ChatGPT (version 4o) often recommends Kyoto when asked to suggest a random city.” But you can’t use ChatGPT as authority for the proposition that Kyoto was Japan’s capital from 794 to 1868.

Concluding thoughts

“Law schools are uniquely positioned to model thoughtful, principled engagement with new technology. A well-crafted AI policy can uphold academic integrity without stifling innovation or disadvantaging students. The goal is not to ban the future, but to teach students how to use it responsibly.”

Or that’s what ChatGPT said when I asked for suggestions on how to conclude this post. I use LLMs in lots of different ways, and this post benefited from long discussions with ChatGPT and with my Emory Law colleagues, but this post does not reflect the views of Emory Law, or ChatGPT for that matter.