Today, December 11, 2025, OpenAI and Disney announced a partnership that essentially signals a marriage between generative AI and legacy media. Although some kind of deal was inevitable, the range and scope of this one are striking. Disney is sinking $1 billion into OpenAI for an equity stake and warrants, while simultaneously inking a three-year licensing deal.
The immediate result? OpenAI’s Sora and ChatGPT will legally ingest over 200 marquee characters from the Disney, Marvel, Pixar, and Star Wars vaults. We’ll see AI-generated Disney content on Disney+, and Disney employees will get enterprise-grade access to OpenAI’s tools. Notably, actor likenesses are off the table—a nod to the sensitivities of the recent labor strikes—but the direction of travel is clear. For more reporting, see The Verge.
AI companies and copyright industries are beginning to understand, and become reconciled to, the fact that neither side is going to score an absolute victory on the fair use question for AI training. AI training that results in a model that learns from, but does not reproduce, the training data looks very likely to be upheld as fair use. Two recent cases held as much on summary judgment, and this aligns with a line of “nonexpressive use” precedents that predate generative AI.
However, it’s becoming increasingly clear that it’s hard to train generative AI models to be really useful without some degree of memorization of the training data along the way. This is particularly problematic when it comes to copyrightable characters, because copyright protects characters more abstractly than most things. This is the well-known Snoopy problem (a term I coined in 2023).
Faced with this increasingly clear reality, it makes sense for consumer-facing AI companies and entertainment giants like Disney to think about licensing arrangements.
This deal signals a retreat from the fair use absolutism of early AI development. OpenAI and Disney have effectively priced the risk of memorization. Instead of spending the next decade in discovery arguing over pixel similarities, they are moving to a licensing regime. Disney gets paid and retains control; OpenAI gets legal certainty and the ability to serve the entertainment industry without looking over its shoulder.
Capital Crunch?
With competitors like Anthropic eyeing public listings, OpenAI’s decision to take strategic capital from a corporate giant like Disney may be telling. It suggests we are hitting a saturation point for traditional venture capital at the scale these foundation models require. It also hints that OpenAI sees more value in “smart money” than in the volatility of the public markets. Disney isn’t just a piggy bank; it’s a hedge. By entangling itself with the world’s premier IP holder, OpenAI makes itself indispensable to the very industry that threatened to sue it out of existence. Or at least that’s the theory; whether it pans out that way remains to be seen.
The End of the Scaling Era?
Finally, this move also adds to the “Data Scarcity” thesis. The era of simply scraping the open web to make models smarter (2017–2025) might be over. The low-hanging fruit of the public internet has been picked, processed, recycled into synthetic data, and processed again, every which way you can imagine. To get better, and to stay ahead of open source rivals, companies like OpenAI are going to need access to data that no one else has. Google has YouTube; OpenAI now has the Magic Kingdom.
The Bottom Line
This is the template for the future. We are moving away from total war between AI and Content, toward a negotiated partition of the world. The tech companies provide the engine; the media giants provide the fuel. And for now, at least, both sides seem to think that’s a better outcome than leaving it up to a judge.
I wrote this blog post the morning the deal was announced, because it fits surprisingly well with a law review article I am writing, “The Snoopy Solution: How Fair Use and Licensing for Generative AI Can Coexist,” based on a talk I gave at Yale last month.
Judge Stein’s Order Denying OpenAI’s Motion to Dismiss in Authors Guild v. OpenAI, Inc., No. 25-md-3143 (SHS) (OTW) (S.D.N.Y. Oct. 27, 2025)
A new ruling in Authors Guild v. OpenAI has major implications for copyright law, well beyond artificial intelligence. On October 27, 2025, Judge Sidney Stein of the Southern District of New York denied OpenAI’s motion to dismiss claims that ChatGPT outputs infringed the rights of authors such as George R.R. Martin and David Baldacci. The opinion suggests that short summaries of popular works of fiction are very likely infringing (unless fair use comes to the rescue).
This is a fundamental assault on the idea-expression distinction as applied to works of fiction. It places thousands of Wikipedia entries in the copyright crosshairs and suggests that any kind of summary or analysis of a work of fiction is presumptively infringing.
A white walker in a desolate field reading Wikipedia (an AI Image by Gemini)
Copyright and derivative works
In Penguin Random House LLC v. Colting, the Southern District of New York found that defendant’s “The Kinderguide” series, which condensed classic works of literature into children’s books, infringed the copyrights in the original works despite being marketed as educational tools for parents to introduce literature to young children.
Every year, I ask students in my copyright class why the children’s versions of classic novels in Colting were found to be infringing but a Wikipedia summary of the plots of those same books probably wouldn’t be. A recent ruling in the consolidated copyright cases against OpenAI means I might have to reconsider.
The ruling
On October 27, 2025, Judge Stein of the Southern District of New York denied OpenAI’s motion to dismiss the output-based copyright infringement claims brought by a class of authors including David Baldacci, George R.R. Martin, and others.
OpenAI had argued, reasonably enough, that the authors’ complaint failed to plausibly allege substantial similarity between any of their works and any of ChatGPT’s outputs. It is standard practice in copyright litigation to attach a copy of the plaintiff’s work and the allegedly infringing work, but the court held that “the outputs plaintiffs submitted along with their opposition to OpenAI’s motion were incorporated into the Consolidated Class Action Complaint by reference” and that it was enough that their Complaint repeatedly made “clear, definite and substantial references” to the outputs. Losing that civil procedure skirmish was probably a bad sign for OpenAI—a bit like the menacing prologue in A Game of Thrones, you sense that Copyright Winter is Coming.
Judge Stein then went on to evaluate one of the more detailed ChatGPT-generated summaries relating to A Game of Thrones, the 694-page novel by George R. R. Martin that eventually became the famous HBO series of the same name. Even though this was only a motion to dismiss, where the cards are stacked against the defendant, I was surprised by how easily the judge could conclude that:
“A more discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work, including because the summary conveys the overall tone and feel of the original work by parroting the plot, characters, and themes of the original.”
The judge described the ChatGPT summaries as:
“most certainly attempts at abridgment or condensation of some of the central copyrightable elements of the original works such as setting, plot, and characters”
He saw them as:
“conceptually similar to—although admittedly less detailed than—the plot summaries in Twin Peaks and in Penguin Random House LLC v. Colting, where the district court found that works that summarized in detail the plot, characters, and themes of original works were substantially similar to the original works.” (emphasis added).
To say that the less-than-580-word GPT summary of A Game of Thrones is “less detailed” than the 128-page Welcome to Twin Peaks Guide in the Twin Peaks case, or the various children’s books based on famous works of literature in the Colting case, is a bit of an understatement.
The Wikipedia comparison
To see why the latest OpenAI ruling is so surprising, it helps to compare the ChatGPT summary of A Game of Thrones to the equivalent Wikipedia plot summary. I read them both so you don’t have to.
The ChatGPT summary of A Game of Thrones is about 580 words long and captures the essential narrative arc of the novel. It covers all three major storylines: the political intrigue in King’s Landing culminating in Ned Stark’s execution (spoiler alert), Jon Snow’s journey with the Night’s Watch at the Wall, and Daenerys Targaryen’s transformation from fearful bride (more on this shortly) to dragon mother across the Narrow Sea. In this regard, it is very much like the 800-word Wikipedia plot summary. Each summary presents the central conflict between the Starks and Lannisters, the revelation of Cersei and Jaime’s incestuous relationship, and the key plot points that set the larger series in motion.
I could say more about their similarities, but I’m concerned that if I explored the summaries in any greater detail, the Authors Guild might think that I am also infringing George R. R. Martin’s copyright, so I’ll move on to the minor differences.
The key difference between the Wikipedia summary and the GPT summary is structural. The Wikipedia summary takes a geographic approach, dividing the narrative into three distinct sections based on location: “In the Seven Kingdoms,” “On the Wall,” and “Across the Narrow Sea.” This structure mirrors the way the novel follows different characters in different locations, to the point where you begin to wonder whether these characters will ever meet. In contrast, the GPT summary follows a more analytical structure, beginning with contextual information about the setting and the series as a whole, then proceeding through sections that follow a roughly chronological progression through the major plot points.
There are some minor differences. The Wikipedia summary provides more granular plot details and clearer causal chains between events. It explains, for instance, how Catelyn’s arrest of Tyrion leads to Tywin’s retaliatory raids on the Riverlands, which in turn necessitates Robb’s strategic alliance with House Frey to secure a crucial bridge crossing. The Wikipedia summary also includes more secondary characters and subplots, such as Tyrion’s recruitment of Bronn as his champion in trial by combat, and Jon’s protection of Samwell Tarly.
The Wikipedia summary probably assumes a greater familiarity with the fantasy genre, whereas the GPT summary might be more helpful to the uninitiated. The GPT summary explains the significance of the long summer and impending winter and explicitly sets out the novel’s major themes.
In broad strokes, however, there is very little daylight between these two summaries. They are remarkably similar in what they include and in what they leave out. Most notably, both summaries sanitize Daenerys’s storyline by omitting the sexual violence that is fundamental to her character arc. This is particularly striking because sexual violence is central to Martin’s narrative in so many places and to the narrative arc of several of the main characters.
If GPT is substantially similar, so is Wikipedia
I don’t see how the ChatGPT summary could infringe the copyright in George R. R. Martin’s novel if the Wikipedia summary doesn’t. That both might infringe is a chilling prospect indeed, but I don’t think that either one is infringing.
It’s absolutely true that you can infringe the copyright in a novel by merely borrowing some of the key characters, plot points and settings, and spinning out a sequel or a prequel. In copyright, we call this a derivative work. But just because sequels and children’s versions of novels are often infringing doesn’t mean that a dry and concise analytical summary of a novel is infringing.
Why not? It’s actually the act of taking those key structural elements, the skeleton of the novel if you like, and adding new flesh to them to create a new fully realized work that makes an unauthorized sequel infringing.
What’s at stake
Judge Stein’s order doesn’t resolve the authors’ claims, not by a long shot. And he was careful to point out that he was only considering the plausibility of the infringement allegation and not any potential fair use defenses. Nonetheless, I think this is a troubling decision that sets the bar on substantial similarity far too low.
The fact that “[w]hen prompted, ChatGPT can generate accurate summaries of books authored by plaintiffs and generate outlines for potential sequels to plaintiffs’ books” falls well short of demonstrating that such outputs by themselves would be regarded by the ordinary observer as substantially similar to a fully realized novel.
As of October 2025, Suno and Udio are two text-to-music AI platforms that let users create full songs—including lyrics, vocals, and artwork—simply by entering text prompts. Some of this music is unappealing, even to its creators (protagonists?), but music scene insiders have assured me that some of the music emanating from these platforms is good enough to provoke a wistful, “I wish I had written that.”
AI music is also becoming more popular. A recent article in The Economist (of all places) recounts the viral success of “Country Girls Make Do,” a raunchy parody country song generated by artificial intelligence under the pseudonym Beats By AI. The song apparently features on TikTok where users prank the unsuspecting by playing it under false pretenses.
This is more than a one-off. Acts such as Aventhis and The Velvet Sundown, also AI-based, have attracted hundreds of thousands of monthly listeners on Spotify. These tools allow for rapid and prolific production: Beats By AI reportedly releases a new song every day. This is not simply a case of streaming fraud where AI slop steals music plays from real artists by adopting confusing names—Spotify recently removed 75 million such tracks, citing “bad actors” flooding the platform with low-quality content. Some people, at least, like some AI music. The Economist reports a Luminate survey finding that one-third of Americans accept AI-written instrumentals, nearly 30% are fine with AI lyrics, and over a quarter do not mind AI vocals.
No music stands alone, but AI music arguably even less so
The appeal of these tracks lies partly in their echoes of established genres and tropes, with a dash of irony and experimentation thrown in. It remains to be seen whether this portends a consumer-driven revolution in content creation in which listeners generate their own entertainment rather than relying on record labels.
What does this mean for copyright law?
Although the Copyright Office would not regard the works of The Velvet Sundown or Beats By AI as copyrightable, Spotify seems happy to pay royalties for AI music, provided the works themselves (as opposed to the copying that fed the AI process that created the works) don’t infringe on other artists’ songs.
AI music may destabilize entrenched business models at the fringes, but it might also foster broader participation and new forms of cultural expression. Does AI pose the same threat to the economic and cultural standing of musicians as it does to stock photography and digital art? Or will AI-generated music remain a hybrid layer within popular culture that feeds off and refers back to mainstream music without replacing the central role of human creation? If so, perhaps at least some country girls will make do.
Generative AI poses a puzzle for copyright lawyers, and many others besides. How can a soulless mechanical process lead to the creation of new expression, seemingly out of nothing, or if not nothing, very little?
This essay will help you understand where the apparent creativity in generative AI outputs comes from, why many AI works are not copyrightable, and why the outputs of generative AI are mostly very different from the works those AIs were trained on.
Who is the author of Skater Beagle?
The image below was created by one LLM (Google Gemini) using a long prompt written by another LLM (Anthropic’s Claude) following the instruction “draft a prompt for an arresting image of a beagle on skateboard.”
AI generated “arresting image of a beagle on skateboard.” From a low angle, a joyful beagle with ears flying expertly rides a skateboard down a steep urban hill during a cinematic, “golden hour” sunset. A city skyline is backlit by the setting sun.
If I took this photo in real life, I would be recognized as the author. Likewise, if I painted it as a picture. But because the image was created by a process that involved very little direct human contribution, it is uncopyrightable. For many people, this seems odd. How can an image that looks creative not be recognized as copyrightable, just because it was created with AI rather than an iPhone camera or a set of water-based paints? After all, artists use tools to make art all the time.
No copyright for the AI
The first question to address is whether Google’s image generation model is the author of Skater Beagle. The answer is no, for many reasons, but let’s focus on the copyright issues, because they are the most interesting.
The AI can’t get copyright protection because the AI itself is not creative in any of the ways we generally understand that term (at least if you are a copyright lawyer): it lacks any desire or intention to express. In Burrow-Giles Lithographic Co. v. Sarony (1884) the U.S. Supreme Court recognized that a photograph could be copyrighted, but only because the photographer’s creative choices made the image an “original intellectual conception[] of the author” rather than a mere mechanical capture. LLMs are impressive, but they don’t have any intentions separate from the math that makes them predict one thing and not another. LLMs don’t have an original intellectual conception that they are trying to express.
No copyright for the simple prompt engineer
If not the AI, then maybe the person who writes the prompts should be credited with the resulting expression? After all, isn’t choosing the right words in the prompt a creative act?
That doesn’t work either. Sure, choosing the right words in the prompt might be creative in some sense, but copyright law doesn’t protect creativity in the sense of “hey, that’s a good idea”—it protects creativity that manifests in original expression. This idea-expression distinction is one of the foundations of copyright law. Copyright attaches to the final expression, not the upstream idea or instruction that triggered it. Even if you think my idea to get one LLM to write a prompt for another LLM “for an arresting image of a beagle on skateboard” is creative, it’s really just a simple idea, and ideas are not copyrightable.
Surely, it must be one or the other?
But still, many would say, if Skater Beagle exhibits all the tell-tale signs of subjective creative authorship, that creativity must come from somewhere. So it’s either the AI or the person who wrote the prompt?
This line of thinking is half right: the generative AI is doing something important, it is creating something from nothing, but it’s not “creativity” in the relevant sense. If you want to think of all of the details of the skater-beagle picture as expression, that expression does not magically appear from the ether; it comes from the latent space implied by the training data as processed by the model during training. In some ways it’s fair to say it comes from the collective efforts of all of the authors of all of the works in the training data—but not in the sense of a simple remix or cut-and-paste job.
Not from nothing, but not a remix
Generative AI systems come in different kinds: GANs, diffusion models, multimodal large language models, and more. The common feature of all these systems is that they are trained on a large volume of prior works and, through a mathematical process, are able to produce new works, often with very limited additional human input. But that doesn’t mean Skater Beagle belongs to the millions (tens of millions? hundreds of millions?) of authors of the works in the training data. This beagle is not a simple remix or collage. Although generative AI models are data dependent, they don’t just remix the training data; they produce genuinely new outputs.
AI Creativity comes from latent space
Generative AI models learn an abstract model of the training data, a model that is in many ways more than the sum of its parts. When you prompt a generative AI model, you are not querying a database, you are navigating a latent space implied by the training data.
What do I mean by “navigating a latent space implied by the training data”? Let’s start with a simple analogy. When you fit a linear regression to a handful of data points you generate a line of best fit implied by the data as seen in the figure below. Think of the dots as the training data and the line as the model implied by the training data.
Illustration of fitting a line to scattered data. Two side-by-side scatter plots on a beige background. Left: Five orange data points scattered in an upward trend without a line. Right: The same points with a straight diagonal line drawn from bottom left to top right, representing a best-fit line. Both axes are labeled X and Y, ranging from 0 to 10.
The line illustrated above is simple; it is in fact an equation that you can use to answer the question, “if y is 6, what is x?” The point (6, 6) is not in the data, but it is implied by the data and the model we used to fit the data. When you plug y = 6 into the model, you are navigating to a point implied by the data that tells you x = 6, as seen in the figure below. That is what I mean by navigating the latent space.
Illustration of navigating to point implied by linear regression. A scatter plot with five orange data points, a green dashed diagonal line representing a trend, and red dashed lines intersecting at the point (6,6). Axes are labeled X and Y, ranging from 0 to 10, on a beige background.
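For readers who like to tinker, the fit-and-navigate step can be sketched in a few lines of Python with NumPy. The data points here are invented for illustration; they are not the exact points plotted in the figures.

```python
import numpy as np

# Five invented "training data" points with an upward trend
x = np.array([1.0, 3.0, 4.0, 7.0, 9.0])
y = np.array([1.0, 1.5, 2.5, 6.0, 9.5])

# Fit the line of best fit implied by the data: y = m*x + b
m, b = np.polyfit(x, y, deg=1)

# Navigate the "latent space": if y = 6, what x does the line imply?
# The point we land on is not in the data; it is implied by the model.
x_implied = (6.0 - b) / m
print(f"the fitted line implies x = {x_implied:.2f} when y = 6")
```

The answer is not stored anywhere in the five data points; it comes from the model those points imply, which is the essence of the analogy.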
But of course, if we used a different model, the data would imply a slightly different latent space, as illustrated in the figure below. Here the model is not linear but quadratic, and just changing that starting assumption gives us a different curve of best fit.
Illustration of fitting a different model to the data. A scatter plot with five orange data points on a textured blue-and-beige background. A green dashed curve rises steeply before leveling off, intersecting red dashed lines at the point (4,6). Axes are labeled X and Y, ranging from 0 to 10.
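Extending the earlier sketch (same invented data points, which are not the exact points in the figures), fitting a quadratic instead of a line gives a different answer to the very same question, which is the point of the analogy: change the model, and the implied latent space changes with it.

```python
import numpy as np

# The same five invented "training data" points
x = np.array([1.0, 3.0, 4.0, 7.0, 9.0])
y = np.array([1.0, 1.5, 2.5, 6.0, 9.5])

# Model 1: a line. Solve y = m*x + b for x when y = 6.
m, b = np.polyfit(x, y, deg=1)
x_linear = (6.0 - b) / m

# Model 2: a quadratic. Solve a*x^2 + b2*x + c = 6 for x
# (keep the larger real root, which lies in the plotted range).
a, b2, c = np.polyfit(x, y, deg=2)
roots = np.roots([a, b2, c - 6.0])
x_quad = max(r.real for r in roots if abs(r.imag) < 1e-9)

print(f"line implies x = {x_linear:.2f}; quadratic implies x = {x_quad:.2f}")
```

Same data, same question, two different implied points, because the starting assumption about the model's shape changed.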
The difference between the straight line and the curved line here is analogous to the difference between different LLMs. Obviously, generative AI models are much more complicated than a two-dimensional regression model. Generative AI models have thousands of dimensions, and so they construct a much richer latent space, but the analogy holds. Any number of dimensions above three is hard to conceptualize; don’t bother trying to imagine thousands of dimensions, your brain might melt.
Does Latent Space Solve the Creativity Puzzle?
Understanding latent space helps resolve the creativity puzzle. The image of Skater Beagle looks original because the model has generated a point in a vast space of possible images implied by its training data — not because a human author made free and creative choices about the details. The model navigates to a statistically plausible combination of features, but no person decides where the beagle’s ears should fly, how steep the hill should be, or what the sunset should look like. Understanding latent space helps explain why the output of a model can feel creative but still lacks the human authorship copyright law requires.
But wait, …
But in practice, it seems like almost any photo you send to the Copyright Office will be deemed creative enough to meet the requirements for registration. If I can get copyright for just pointing my iPhone at a beagle on a skateboard and pressing a button, why can’t I get copyright in an image of a beagle on a skateboard that I created using generative AI?
This seems inconsistent at first blush, but only because the question overlooks the difference between the “thin” copyright that attaches to photos based in reality and the “thick” copyright that typically attaches to illustrations drawn from imagination.
Small jumps versus big jumps
When you take a photo, you are making a copyrightable selection and arrangement from reality. You get no rights in the underlying reality, just a specific photographic representation of it. In most copyrightable photos there is only a small jump between idea and expression, and so the resulting copyright is limited to that jump. Taking a photo does not give you exclusive rights in the underlying ideas, subjects, locations, etc.
There are two critical differences between the typical iPhone snap and an image generated with AI.
The first difference is that there is a much more significant jump between idea and expression in the transition from text prompt to final image, compared to the jump from a real life scene to photo capturing the scene. The second difference is that in photography, a human still makes some minimal creative decisions (framing, timing, composition) that manifest in the look of the resulting image. The human makes the jump, even if it’s only a small jump. In AI generation, the algorithm fills in the details that transform the prompt into a specific visual expression. The AI makes the jump between your idea for a photo and the details of the photo itself.
There is no copyright in the Skater Beagle image Gemini made for me. The work of bridging the gap from abstract concept to concrete image was done entirely by algorithms trained on trillions of words and millions of photos. The details that we might think of as expression in the image didn’t come from nothing, but they didn’t come directly from any particular photo featuring low-angle action shots, beagles, dogs with ears flying, skateboard riders, steep hills, urban settings, “golden hour” sunsets, city skylines, etc. The details that we might think of as expression don’t reflect the free and creative choices of any human mind. They are details implied by a model trained on millions of photos, but those details don’t come from those photos either. They come from the universe of possibilities those photos imply; they come from latent space.
Skater Beagle is an extreme example
Generative AI lets us navigate a latent space implied by works too numerous to count so that we can create genuinely new digital artifacts. I began this essay with the promise that understanding this would shed light on how copyright applies to AI-generated works, but Skater Beagle is an extreme example drawn from one end of the continuum. Understanding why Skater Beagle is not a copy of beagles in the training data, but is also not my creative expression tells us that the Copyright Office is right to deny copyright to some generative AI creations. But it does not tell us at what point a user would cross the line from commissioning editor to guiding hand or creative mastermind. It’s hard to imagine crossing that line with a single text prompt, but it’s easy to see how you would leap over it in an iterative process as in A Single Piece of American Cheese. Iterative interactive use of generative AI will often be an act of authorship, so long as it is more than just choosing a winner in a beauty pageant of AI creations.
[This essay was adapted from Matthew Sag, Copyright Law in the Age of AI (2025)]
In a closely watched decision revising a previous summary judgment, Judge Stephanos Bibas, a Third Circuit judge sitting by designation, sided largely with Thomson Reuters in its copyright dispute against ROSS Intelligence. The ruling granted partial summary judgment on direct copyright infringement claims while dismissing ROSS’s argument that its use of Thomson Reuters’ content qualified as fair use.
With ROSS Intelligence now bankrupt and the technology at issue a decidedly niche application, attention is shifting to the broader implications for AI training and the use of copyrighted materials—particularly in the realm of generative AI. Earlier, Judge Bibas had refused to grant summary judgment on fair use, insisting the matter be put before a jury. However, upon further reflection, he reversed course, ultimately rejecting the defendant’s fair use defense outright.
Background
Thomson Reuters, the owner of Westlaw, accused the AI-driven legal research firm ROSS of copyright infringement, alleging that it had improperly used legal summaries—so-called Bulk Memos—derived from Westlaw’s editorial materials, particularly its headnotes, to train its technology. Thomson Reuters had refused to license its content to ROSS, a rival developing an AI-powered legal research tool requiring a database of legal questions and answers for training. To obtain the necessary data, ROSS partnered with LegalEase, which compiled and sold approximately 25,000 Bulk Memos—summaries created by lawyers referencing Westlaw headnotes. Whether the Bulk Memos involved verbatim copying or otherwise infringing copying was an issue in the case that ultimately went against ROSS. Upon discovering that ROSS had used content derived from these headnotes, Thomson Reuters filed a copyright infringement lawsuit. The summary judgment pertains only to a subset of the contested headnotes, leaving broader legal questions unresolved.
The court ruled against ROSS, determining that it had copied 2,243 headnotes and dismissing its various legal defenses, including claims of innocent infringement, copyright misuse, and the merger doctrine.
Ross’s use was not transformative
Judge Bibas ruled that ROSS’s use of Thomson Reuters’ material was commercial and non-transformative, a conclusion that weighed heavily in the publisher’s favor. According to the court, the use did not qualify as transformative because it lacked a distinct purpose or character from Thomson Reuters’ original work.
The court’s conclusion that Ross’s use was not transformative is puzzling, especially given its acknowledgment—while discussing the third fair use factor—that the output of Ross’s system did not replicate Westlaw’s copyrighted headnotes but rather produced uncopyrighted judicial opinions.
The court did distinguish two significant cases, Sega Enterprises Ltd. v. Accolade, Inc. and Sony Computer Entertainment, Inc. v. Connectix Corp., but failed to consider cases like iParadigms, HathiTrust, and Google Books. Even the way the court dealt with the reverse engineering cases is a bit suspect. The court sets them aside for two reasons: first, because those cases involved copying software code, and second, because such copying was “necessary for competitors to innovate.” To be sure, Oracle v. Google suggests that cases involving software may merit special treatment, but it is not clear why the software context should make a difference here. Judge Bibas’s invocation of necessity is undercooked as well. Whether an act of copying is “necessary” is inextricably tied to the level of generality at which you ask the question. In Oracle v. Google, Google’s replication of the Java APIs was essential for maintaining compatibility for existing Java programmers, but whether that compatibility was a necessity or a luxury is itself a question of framing. After all, other smartphones ran without making life easy for Java programmers.
Not generative AI, but why?
The judge took care to distinguish this case from generative AI, yet the distinction remains murky. The court stated: “Ross was using Thomson Reuters’s headnotes as AI data to create a legal research tool to compete with Westlaw. It is undisputed that Ross’s AI is not generative AI (AI that writes new content itself).” And later that “Because the AI landscape is changing rapidly, I note for readers that only non-generative AI is before me today.”
But what, exactly, sets this apart from generative AI? More broadly, how does this differ from other cases where nonexpressive uses have been deemed fair use? The opinion offers little guidance. It fails to engage with seemingly comparable precedents, such as plagiarism detection tools, library digitization for text analysis and digital humanities research, or the creation of a book search engine—cases where courts have found fair use.
The closest we get to an explanation of why Ross’s use of the Westlaw headnotes is different from the intermediate copying in iParadigms, HathiTrust, and Google Books is that Ross merely retrieves and presents judicial opinions in response to user queries. This process, the court observed, closely parallels Westlaw’s own practice of using headnotes and key numbers to identify relevant cases. Consequently, the court concluded that Ross’s use was not transformative, as it primarily served to facilitate the development of a competing legal research tool rather than to add new expression or meaning to the copied material.
Market effect
The court determined that ROSS’s actions impaired Thomson Reuters’ market for legal AI training data, and in its reasoning, the fourth fair use factor carried substantial weight. Without qualification, the opinion echoes Harper & Row’s assertion that the fourth factor “is undoubtedly the single most important element of fair use.” This is problematic: asserting the absolute primacy of the fourth factor is plainly in error in light of Campbell, as well as the Court’s more recent decisions in Google v. Oracle and Andy Warhol Foundation. The Court’s contemporary approach to fair use eschews rigid hierarchies among the statutory factors.
That said, the judge’s finding in relation to the fourth factor may not be entirely unreasonable in this case: Ross explicitly intended to compete with Westlaw by creating a viable market alternative. For the court, the key fact was that Ross “meant to compete with Westlaw by developing a market substitute.” The court added: “And it does not matter whether Thomson Reuters has used the data to train its own legal search tools; the effect on a potential market for AI training data is enough.”
Implications
One district court opinion that barely engages with the relevant caselaw will not change U.S. fair use law overnight, but it will certainly be welcome news for the plaintiffs in the more than 30 ongoing AI copyright cases currently being litigated.
I think what is really going on in this decision is that the judge has confused the first factor with the fourth factor. There is no obvious way to distinguish training on the question and answer memos to develop a model that directly links user questions to the relevant case law from cases involving search engines and plagiarism detection software. The real distinction, if there is one, is that ROSS used Westlaw’s product to create a directly competing product.
Looking at the case this way, the decision might actually be good for the generative AI defendants, in cases like NYT v OpenAI, because there isn’t the same direct competition.
* This is my first quick take on the decision just hours after it was handed down.
* Citation: Thomson Reuters Enter. Ctr. GmbH v. ROSS Intelligence Inc., No. 1:20-cv-613-SB (D. Del. Feb. 11, 2025)
TIM LEE (@binarybits) and JAMES GRIMMELMANN have written an insightful article on “Why The New York Times might win its copyright lawsuit against OpenAI” in Ars Technica and on Tim’s newsletter (https://www.understandingai.org/p/the-ai-community-needs-to-take-copyright).
Quite a few people emailed me asking for my thoughts, so here they are. This is a rough first take that began as a tweet before I realized it was too long.
Yes, we should take the NYT suit seriously
It’s hard to disagree with the bottom-line that copyright poses a significant challenge to copy-reliant AI, just as it has done to previous generations of copy-reliant technologies (reverse engineering, plagiarism detection, search engine indexing, text data mining for statistical analysis of literature, text data mining for book search).
One important insight offered by Tim and James is that building a useful technology that is consistent with some people’s rough sense of fairness, like MP3.com, is no guarantee of fair use. People loved Napster and probably would have loved MP3.com, but these services were essentially jukeboxes competing with record companies’ own distribution models for the exact same content. We could add ReDigi to this list, too. Unlike the copy-reliant technologies listed above, Napster, MP3.com, and ReDigi fell foul of copyright law because they made expressive uses of other people’s expressive works.
Tim and James make another important point, that academic researchers and Silicon Valley types might have got the wrong idea about copyright. Certainly, prior to November 2022 you almost never saw any mention of copyright in papers announcing new breakthroughs in text data mining, machine learning, or generative AI. This is why I wrote “Copyright Safety for Generative AI” (Houston Law Review 2023).
Tim and James’ third insight is that some conduct might be fair use at a small noncommercial scale but not at a large commercial scale. This is right sometimes, but in fact, a lot of fair use scales up quite nicely. 2 Live Crew sold millions of copies of their fair use parody of Roy Orbison’s “Oh, Pretty Woman,” and of course, the key non-expressive use precedents were all about different forms of text data mining at scale: iParadigms (commercial plagiarism detection), HathiTrust (text mining for statistical analysis of the literature, including machine learning), and Google Books (commercial book search).
In a nutshell, the non-expressive use argument is that a technical process that creates some effectively invisible copies along the way but ultimately produces only uncopyrightable facts, abstractions, associations, and styles should be fair use because it does not interfere with the author’s right to communicate her original expression to the public.
I also agree that this argument begins to unravel if generative AI models are in fact memorizing and delivering the underlying original expression from the training data. I don’t think we know enough about the facts to say whether individual examples of memorization are just an obscure bug or endemic problem.
The NYT v. OpenAI litigation will shed some light on this, but there is a lot of discovery still to come. My gut feeling is that the NYT’s superficially compelling examples of memorization are actually examples of GPT-4 working as an agent to retrieve information from the Internet. This is still a copyright problem, but it’s a very small, easily fixed copyright problem, not an existential threat to text data mining research, machine learning, and generative AI.
If the GPT series models are really memorizing and regurgitating vast swaths of NYT content, that is a problem for OpenAI. If pervasive memorization is unavoidable in LLMs, that would be a problem for the entire generative AI industry, but I very much doubt the premise. Avoiding memorization (or reducing to trivial levels) is a hard technical problem in LLMs, but not an impossible one.
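To make the technical side of that question concrete, here is a crude sketch (my own illustration, not anything drawn from the litigation or from OpenAI’s methods) of the kind of check one could run for verbatim memorization: compare model output against a source text and measure the longest run of words the two share.

```python
def longest_verbatim_overlap(generated: str, source: str) -> int:
    """Length in words of the longest run appearing verbatim in both texts.
    Long runs suggest regurgitation of training data; short ones are
    consistent with the model having learned only facts and style."""
    gen_words = generated.split()
    # Pad with spaces so substring tests respect word boundaries.
    src = " " + " ".join(source.split()) + " "
    best = 0
    for i in range(len(gen_words)):
        length = best + 1  # only runs longer than the current best matter
        while (i + length <= len(gen_words)
               and " " + " ".join(gen_words[i:i + length]) + " " in src):
            best = length
            length += 1
    return best

# A four-word verbatim run ("brown fox jumps over") is shared;
# the surrounding words differ.
print(longest_verbatim_overlap(
    "the quick brown fox jumps over the lazy dog",
    "a brown fox jumps over a log"))  # 4
```

Researchers measure memorization with more sophisticated metrics, and against the actual training corpus, but the intuition is the same: the legal exposure tracks long verbatim or near-verbatim spans, not the statistical patterns a model learns.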
Avoiding memorization in image models is more difficult because of the “Snoopy Problem.” Tim and James call this the “Italian plumber problem,” but I named it first and I like Snoopy better.
The Snoopy Problem is that the more abstractly a copyrighted work is protected, the more likely it is that a generative AI model will “copy” it. Text-to-image models are prone to produce potentially infringing works when the same text descriptions are paired with relatively simple images that vary only slightly.
Generative AI models are especially likely to generate images that would infringe on copyrightable characters because characters like Snoopy appear often enough in the training data that the models learn the consistent traits and attributes associated with those names. Deduplication won’t solve this problem because the output can still infringe without closely resembling any particular image from the training data. Some people think this is really a problem with copyright being too loose with characters and morphing into trademark law. Maybe, but I don’t see that changing.
How serious is the Snoopy Problem? Tim and James frame the problem as though they innocently requested a combination of [Nationality] + [Occupation] + “from a video game” and just happened to stumble upon repeated images of the world’s most famous Italian plumber, Mario from Mario Kart.
But of course, a random assortment of “Japanese software developers,” “German fashion designers,” “Australian novelists,” “Kenyan cyclists,” “Turkish archaeologists,” and a “New Zealand plumber” doesn’t reveal any such problem. The problem is specific to Mario because he dominates representations of Italian plumbers from video games in the training data.
The Snoopy Problem presents a genuine difficulty for video, image, and multimodal generative AI, but it’s far from an existential threat. This is partly because the class of potential plaintiffs is significantly smaller: there are a lot fewer owners of visual copyrightable characters than there are plain old copyright owners. And it is partly because the problem can be addressed in training, by monitoring prompts, or by filtering outputs.
Tim and James’s final point of concern is that the prospect of licensing markets for training data will undermine the case for fair use. To the extent that companies building AI models rely on the argument that they are simply scraping training data from the “open Internet,” that argument becomes more persuasive when these companies are careful to avoid scraping content from sites where they are not welcome.
Respecting existing robots.txt signals and helping to develop more effective ones in the future will facilitate robust licensing markets for entities like the New York Times and the Associated Press.
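Mechanically, honoring those signals is not hard. As a minimal sketch using Python’s standard library (the robots.txt content and URLs here are invented for illustration; GPTBot is the user-agent string OpenAI has published for its crawler), a site can stay open to search indexing while opting out of AI training crawlers:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: block the AI training crawler, allow everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler checks before fetching.
print(rp.can_fetch("GPTBot", "https://example.com/story"))     # blocked
print(rp.can_fetch("Googlebot", "https://example.com/story"))  # allowed
```

Nothing in robots.txt is legally self-executing, of course; the point is that the technical infrastructure for signaling “not welcome” already exists and is trivial to respect.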
I don’t think that OpenAI will need to sign 100 million licensing deals before training its next model. Courts have already considered and rejected the circular argument that copyright owners must be given the right to charge for non-expressive uses to avoid the harm of not being able to charge for non-expressive uses. This specific argument was raised by the Authors Guild in HathiTrust and Google Books and squarely rejected in both.
Tim and James temper their note of caution with a note of realism: judges will be reluctant to shut down an innovative and useful service with tens of millions of users. We saw a similar dynamic when the US Supreme Court held that time shifting using videocassette recorders was fair use.
But there is another element of realism to add. If the US courts reject the idea that non-expressive uses should be fair use, most AI companies will simply move their scraping and training operations overseas to places like Japan, Israel, Singapore, and even the European Union. As long as the models don’t memorize the training data, they can then be hosted in the US without fear of copyright liability.
Tim and James are two of the smartest, most insightful people writing about copyright and AI at the moment. The AI community should take them seriously, and it should take copyright seriously, but it should not see Snoopy (or the Italian Plumber) as an existential threat.
PS: Updated to correct typos helpfully identified by ChatGPT.
I had the great honor of testifying to the US Senate Judiciary Subcommittee on Intellectual Property in relation to Artificial Intelligence Copyright on Wednesday, July 12th, 2023.
In my testimony I explained that although we are still a long way from the science fiction version of artificial general intelligence that thinks, feels, and refuses to “open the pod bay doors”, recent advances in machine learning AI raise significant issues for copyright law.
I explained why copyright law does not, and should not, recognize computer systems as authors, and why training generative AI on copyrighted works is usually fair use because it falls into the category of non-expressive use.
For more on copyright and generative AI, read Matthew Sag, Copyright Safety for Generative AI (Houston Law Review, Forthcoming) (https://ssrn.com/abstract=4438593)
This morning I presented Lessons for Empirical Studies of Copyright Litigation … A Case Study of Copyright Injunctions at CREATe@10 – Copyright Evidence: Synthesis and Futures, University of Glasgow, October 17, 2022.
The presentation is based on Matthew Sag and Pamela Samuelson, Discovering eBay’s Impact on Copyright Injunctions Through Empirical Evidence forthcoming in the William & Mary Law Review 2023 ( https://ssrn.com/abstract=3898460)
In 2018 Jake Haskell and I published an article called “Defense Against the Dark Arts of Copyright Trolling” in the Iowa Law Review. The article focused on BitTorrent related litigation that accounted for roughly half of all copyright cases filed in the United States at the time. As we described in the article, in the typical BitTorrent case,
“the plaintiff’s claims of infringement rely on a poorly substantiated form pleading and are targeted indiscriminately at non-infringers as well as infringers. This practice is a subset of the broader problem of opportunistic litigation, but it persists due to certain unique features of copyright law and the technical complexity of Internet technology. The plaintiffs bringing these cases target hundreds or thousands of defendants nationwide and seek quick settlements priced just low enough that it is less expensive for the defendant to pay rather than to defend the claim, regardless of the claim’s merits.”
Given my interest in this topic, I get a lot of emails and phone calls asking about another high volume copyright plaintiff’s lawyer, Higbee & Associates.
I am writing this post so that people have something to go on without waiting for a response from me (which can often take a while, sorry).
Is Higbee & Associates a copyright troll?
Some people call Higbee & Associates (or the clients they represent) copyright trolls. Certainly, they seem more interested in monetizing infringement than simply stopping it. After all, they could use DMCA takedowns in most of these cases and it would be just as effective.
Fair point, but even if they are looking primarily to the rewards of the courthouse rather than the marketplace, they would no doubt respond that litigation is required to make people understand that photography is not free for the taking. The performing rights organization ASCAP files a lot of lawsuits for exactly this reason.
So, in terms of motive, the copyright troll label might not be a great fit. What about methods?
Higbee & Associates are a little different from the copyright trolls Jake and I discussed in Defense Against the Dark Arts of Copyright Trolling. As far as I know, they don’t make a habit of going after obvious non-infringers, although they don’t seem to recognize many potential fair use arguments either. Nor do they appear to rely on dodgy technology or bogus experts to make their case, a feature that is endemic in BitTorrent litigation.
However, Higbee does seem to send out a lot of demand letters without much underlying depth. These letters often fail to provide a copyright registration. They often claim to represent a copyright owner who is not the author without evidencing any assignment of rights. You don’t need a registration to make a demand, but you absolutely need one to file a claim in federal court and to get statutory damages, so the omission seems a bit odd. Not connecting the dots between the person who took the photo and the client they say they represent is also a bit odd.
Moreover, the copyright troll label certainly fits with the sense of being ambushed that many defendants experience. I hear from a lot of these recipients. Receiving a letter from Higbee & Associates feels like an ambush because so many people don’t really understand how copyright works. It also feels like an ambush because the settlement amounts Higbee & Associates demand in a typical letter don’t seem to reflect the value of the underlying work.
Instead of demanding some multiple of the standard license fee for the work in question, Higbee will demand a settlement amount based on what they could get in court under copyright’s rather imprecise statutory damages rules. This makes their oft-noted failure to provide proof of registration even more interesting.
Assuming the work was registered at the relevant time, the prevailing plaintiff in copyright litigation can get statutory damages in the range of $750 to $150,000 per work infringed, regardless of the amount of actual damage. This is a pretty terrifying prospect for most accused infringers. But it gets worse. The real kicker is that if you fight the infringement accusation and lose, you risk just adding to your pain because if they are the prevailing party, the plaintiff has a good chance of getting their attorneys fees as well as statutory damages!
So, what to do?
Step one: figure out whether you have a good story to tell on the merits
You might have a case on the merits. Here are some examples:
you paid for a license to use the photo (or you thought you did);
you made fair use of the photo by using it as the foundation for commentary, parody or criticism (if you made changes to the photo that reinforce this transformative purpose, the merits of your fair use defense will be even clearer);
the party Higbee & Associates represents does not actually own the photo;
the photo was not registered with the U.S. Copyright Office before you started using it;
you didn’t post the photo, one of your users did. This gets complicated. You might be covered by the DMCA, but only if you jump through the right hoops, including registering an agent with the Copyright Office every three years. If you are not covered by the DMCA, you still might not be responsible for infringing acts by your users; it depends on a number of issues too detailed to summarize here.
Arguments on the merits that won’t help:
you didn’t post the photo, one of your employees did — sorry, you are responsible for your employees in a case like this.
you didn’t know the photo was copyrighted — this doesn’t help as much as you might think.
you thought that photos on the Internet were in the public domain — they aren’t.
you were not making a profit on your website — this doesn’t help as much as you might think.
Step two: ask for more information
Request a copy of the copyright registration, the deposit material that accompanied the application, and documents sufficient to show that Higbee is authorized by the copyright owner to act as its agent.
Explain that any settlement you agree to will have to contain a warranty that Higbee is the duly authorized agent of the copyright owner, that their client owns the copyright asserted, and that such copyright is valid. If they won’t do this, why not?
Step three: If you realize now that you might have been infringing the photographer’s copyright
Take down the photo and audit the rest of the images on your website.
If the work was unregistered, do what your conscience tells you is right. The reality is that it is not worthwhile for them to take the case to court unless they can show actual damages of more than a few hundred dollars.
If the work was registered and they actually represent the copyright owner, make a reasonable settlement offer.
What’s a reasonable offer? Based on the cases I have seen, offers probably start at around $1,000 and go up to $1,250, but your individual facts may vary.
If the plaintiff won’t settle, don’t contest every point in the litigation. Instead, try to keep everyone’s costs as low as possible: make an “offer of judgment” and hope that you get a reasonable judge who can see that there is no virtue in awarding more than the $750 minimum in statutory damages. If you make this strategy clear to them, they should agree to a reasonable offer and move on to their next target.
Do you need a lawyer?
Probably, yes.
You could try to settle (or tell them to take a hike) by yourself, but without a lawyer representing you, it’s hard to know how to respond to the arguments that Higbee will throw back.
If you need a referral to a lawyer with experience in these matters, I can try to provide one. I don’t handle these cases myself. You should also know that because I am not your lawyer, any emails you send me are not going to be protected by attorney client privilege.