An Open Letter to Chicago’s Department of Business Affairs & Consumer Protection (BACP)

Dear Sirs,

I write to express my profound dismay that Chicago is considering regulating Uber out of existence. Uber is a middleman that connects limo drivers to customers in a way that is convenient, flexible, and safe. The Uber rating system keeps limo drivers on their best behavior: this benefits riders, and it benefits the wider community because safe driving saves lives!

Chicago taxis are a disgrace to our great city. I realize that the city tries to monitor drivers, but it does not have the resources to pursue anything but the gravest complaints. I walk around the city every day, either with my dog or to and from my office. Almost every day I see taxis driving unsafely — running lights, not yielding to pedestrians before turning, straddling lanes, changing lanes without indicating, stopping abruptly, and so on.

The mission of the BACP is to ensure a fair and vibrant marketplace for both businesses and consumers. Your mission is not simply to protect incumbent taxi companies from competition. If Uber is a threat, it is a threat to raise standards! One of the great things about Uber is that it empowers the passenger to monitor the driver’s performance. Drivers know this and, in my experience, they lift their game accordingly. Uber is efficient. Uber is good for drivers. Uber empowers consumers. Uber saves lives. Please don’t make the mistake of protecting the status quo at the expense of consumers and competition.

Please Remove the No Measured Rates Provision.

Sincerely,

Matthew Sag
(in my personal capacity)

Associate Professor, Loyola University Chicago School of Law
Associate Director for Intellectual Property of the Institute for Consumer Antitrust Studies
Download my research at http://ssrn.com/author=461043
Follow my tweets at http://twitter.com/matthewsag
My website is www.matthewsag.com

HathiTrust and the Future of Orphan Works

The U.S. Copyright Office is taking another look at the problem of orphan works under U.S. copyright law.

As the Copyright Office notice explains, the Office is “interested in what has changed in the legal and business environments during the past few years that might be relevant to a resolution of the problem and what additional legislative, regulatory, or voluntary solutions deserve deliberation.” Comments are due by 5:00 p.m. EST on January 4, 2013. Reply comments are due by 5:00 p.m. EST on February 4, 2013.

Assuming it is not reversed by the Second Circuit, does the HathiTrust win on October 10, 2012 take some of the urgency out of the orphan works issue? After all, digitization for non-expressive use such as text mining and building a search engine has now been confirmed as fair use. In addition, digitization in the service of expanding access for the print-disabled is also now clearly fair use.

Or does the HathiTrust win simply set the stage for addressing general-purpose expressive access to orphan works? The district court in HathiTrust did not reach the merits of the copyright claims with respect to the universities’ Orphan Works Project and gave very little signal as to how it would decide such an issue.

HathiTrust Wins on Fair Use, and just about everything else

Landmark Fair Use Win

Yesterday, District Judge Harold Baer, Jr., handed down his decision in Authors Guild v. HathiTrust, a case that spins out of the long-running Google Books dispute. The decision is a landmark win for the HathiTrust, the University defendants, people with print disabilities, Google, the Digital Humanities and, I would argue, for humanity in general.

Essential Background

The HathiTrust is a digital repository of millions of scanned university library books that became available to various universities by virtue of the Google Books project. About three-quarters of the books are still in copyright. In 2011 HathiTrust announced plans to embark on an innovative orphan works program (OWP), but dropped (or at least shelved) the plan soon after in light of criticism of its implementation. Spurred into action by the OWP, in September 2011 the Authors Guild filed a copyright lawsuit against HathiTrust, five universities, and multiple university officials.

The Authors Guild suit alleged that library digitization for any purpose amounts to copyright infringement. The purposes specifically under attack in this case were (i) preservation; (ii) enabling non-expressive uses such as word searches; and (iii) facilitating access by persons who are blind or visually impaired.

There is a key fact in this case that media reports will probably get wrong. This is not about scanning books to make extra copies for the public at large. As the Court explained, “No actual text from the book is revealed except to print-disabled library patrons at [University of Michigan].” Authors Guild v. HathiTrust, p. 16. This case was about library digitization for three specific purposes: preservation, access for the print-disabled, and non-expressive uses such as text searching and computational analysis.
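To make the idea of a non-expressive use concrete, here is a minimal Python sketch (entirely my own illustration, not a description of how the HathiTrust Digital Library actually works) of a search index built from scanned page text. It answers only the question “which pages contain this word?” and never returns the expressive text itself.

```python
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of page numbers on which it appears.

    `pages` is a list of strings, one per scanned page. Only word-to-page
    facts are kept; searchers never see the underlying text.
    """
    index = defaultdict(set)
    for page_number, text in enumerate(pages, start=1):
        for word in text.lower().split():
            index[word.strip('.,;:"()')].add(page_number)
    return index

def search(index, term):
    """Return the page numbers containing `term`: metadata, not expression."""
    return sorted(index.get(term.lower(), set()))

# Hypothetical two-page "book" used purely for illustration.
pages = [
    "The United States is a nation of readers.",
    "The United States are bound together by law.",
]
idx = build_index(pages)
print(search(idx, "readers"))  # [1]  (a location, not a snippet of the book)
```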

The Score Card

Here is a quick and dirty summary of the key copyright issues:

  • Digitization to provide access for the print-disabled held to be transformative use and, on balance, fair use.
  • Digitization to provide access for print-disabled students held to be (i) an obligation of universities under the ADA, (ii) fair use under section 107 of the Copyright Act, and (iii) enabled by section 121 of the Copyright Act.
  • Section 108 of the Copyright Act was held to expand the rights of libraries, not to limit the scope of their fair use rights in any way, shape, or form. Given that the text says “Nothing in this section . . . in any way affects the right of fair use as provided by section 107,” any ruling to the contrary would have been pretty shocking.
  • Digitization to create a search index held to be a transformative use, and, on balance, fair use.
  • Alleged security risks created by library digitization — dismissed as speculative and unproven. The judge noted the strong evidence to the contrary. It is still an open question whether the risk of a subsequent illegal act by a third party could ever render an initially lawful copy not fair use. The whole notion strikes me as rather odd.
  • The market effect of library digitization — the court found there was none to speak of in this case. The court rejected the CCC’s magic toll-booth arguments — i.e., some wild assertions about future licensing revenue that it dismissed as “conjecture.”
  • The court also noted that a copyright holder cannot preempt a transformative market merely by offering to license it.
  • The market effect of enabling print-disabled access to library books — the court found there was no market for this under-served group, nor was one likely to develop.

Did the Authors Guild win anything?
Not really, but two issues could have been even worse.

  • The court held that the issue of the Orphan Works Program was not ripe for adjudication. This was inevitable in my opinion, but the judge could have added unfavorable dicta indicating that the AG had no case here either. Wisely, the judge said only what needed to be said.
  • On the issue of library digitization for the purpose of preservation, the court found that the argument that preservation on its own is transformative “is not strong.”

The Digital Humanities

The court appeared to accept the arguments in the Digital Humanities amicus brief, written by Matthew Jockers, Jason Schultz and myself with the assistance of many others. The brief extended arguments I made in Orphan Works as Grist for the Data Mill, 27 Berkeley Technology Law Journal (forthcoming), and Copyright and Copy-Reliant Technology, 103 Northwestern University Law Review 1607–1682 (2009).

Following Second Circuit precedent, the court explained that

“a transformative use may be one that actually changes the original work. However, a transformative use can also be one that serves an entirely different purpose.”

The court concluded that

“The use to which the works in the HDL are put is transformative because the copies serve an entirely different purpose than the original works: the purpose is superior search capabilities rather than actual access to copyrighted material. The search capabilities of the HDL have already given rise to new methods of academic inquiry such as text mining.”

The court even cites an illustration from our brief!

“Mass digitization allows new areas of non-expressive computational and statistical research, … One example of text mining is research that compares the frequency with which authors used “is” to refer to the United States rather than “are” over time. See Digital Humanities Amicus Br. 7 (“[I]t was only in the latter half of the Nineteenth Century that the conception of the United States as a single, indivisible entity was reflected in the way a majority of writers referred to the nation.”).”

[Figure: Google Ngram visualization comparing the frequency of “The United States is” to “The United States are”]

You can reconstruct the figure on Google Ngram yourself!
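If you want to plot something similar offline, the sketch below is one way to do it. It assumes, hypothetically, that you have exported yearly relative frequencies for the two phrases from the Google Books Ngram Viewer into a file called ngram_counts.csv with columns year, united_states_is, and united_states_are; the file name and column names are my own invention for the example.

```python
import csv
import matplotlib.pyplot as plt

# Hypothetical input: yearly relative frequencies for the two phrases,
# e.g. exported by hand from the Google Books Ngram Viewer.
years, is_freq, are_freq = [], [], []
with open("ngram_counts.csv", newline="") as f:
    for row in csv.DictReader(f):
        years.append(int(row["year"]))
        is_freq.append(float(row["united_states_is"]))
        are_freq.append(float(row["united_states_are"]))

plt.plot(years, is_freq, label='"The United States is"')
plt.plot(years, are_freq, label='"The United States are"')
plt.xlabel("Year")
plt.ylabel("Relative frequency")
plt.title("Singular vs. plural references to the United States")
plt.legend()
plt.show()
```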

The court also cites our brief for the proposition that the use of metadata and text mining “could actually enhance the market for the underlying work, by causing researchers to revisit the original work and reexamine it in more detail.”

Non-expressive use is fair use

The court did exactly what the amicus briefs urged it to do. As Matthew Jockers, Jason Schultz, and I argued in Nature last week (Digital Archives: Don’t Let Copyright Block Data Mining, 490 Nature 29–30 (October 4, 2012)):

“It is time for the US courts to recognize explicitly that, in the digital age, copying books for non-expressive purposes is not infringement.”

Courts have already applied this logic in internet search engine cases and in a case involving plagiarism detection software. As we hoped, Judge Baer’s ruling demonstrates that digitization for text mining and other forms of computational analysis is, unequivocally, fair use.

“Plaintiffs assert that the decisions in Perfect 10 and Arriba Soft are distinguishable because in those cases the works were already available on the internet, … I fail to see why that is a difference that makes a difference.”

This was not a close case

“Although I recognize that the facts here may on some levels be without precedent, I am convinced that they fall safely within the protection of fair use such that there is no genuine issue of material fact. I cannot imagine a definition of fair use that would not encompass the transformative uses made by Defendants’ MDP and would require that I terminate this invaluable contribution to the progress of science and cultivation of the arts that at the same time effectuates the ideals espoused by the ADA.”


A significant win for the National Federation for the Blind

My focus in this case has always been on the technological side; that is my academic interest. However, the most important issue in this case is not about search engines, the digital humanities, or non-expressive use; it is about reading, humanity, and expressive use. I am of course referring to those aspects of the decision relating to fair use and persons with disabilities.

“[m]aking a copy of a copyrighted work for the convenience of a blind person is expressly identified by the House Committee Report as an example of a fair use, with no suggestion that anything more than a purpose to entertain or to inform need motivate the copying.”

As Kenny Crews summarizes:

“The opinion provides a strong opinion about fair use as applied to serving persons with disabilities, especially when an educational institution is mandated to serve needs under the Americans With Disabilities Act.  The court goes further and resolves a long-time quandary that arose under Section 121 of the Copyright Act.  That statute permits an “authorized entity” to make formats of certain works available to persons who are visually impaired.  An “authorized entity” is one that has a “primary mission” to serve those needs.  Libraries and universities have many functions, so is that service a “primary mission”?  The court said yes.”



Google Book Search: Digital Humanities still needs answers

Google has settled with the publishers, but not the Authors Guild. This is good news for the Digital Humanities because it means that we may still get a substantive ruling on the big fair use question underlying the entire litigation.

Human life is short; none of us can hope to read more than a smattering of the literary record. Fortunately, massive digitization efforts like those undertaken by Google allow scholars to apply large-N computerized methods to millions of works. Computational and statistical analysis of literature will be a big part of humanities research for years to come. However, legal actions like those of the Authors Guild could bar scholars from studying as much as two-thirds of the literary record.

In a comment published in Nature today [paywall] [Nature Vol. 490, pages 29–30 (04 October 2012) doi:10.1038/490029a], Matthew Jockers (an English professor), Jason Schultz (a law professor) and I (also a law professor) explain why the Association for Computers and the Humanities and a large group of scholars chose to file an amicus curiae brief on behalf of the digital humanities in the Authors Guild v. Google and Authors Guild v. HathiTrust cases.

In the brief we explain why U.S. courts should recognize that copying books for non-expressive purposes is not infringement.

My view is that the settlement between Google and the publishers makes such a ruling more likely because it provides further evidence that the ability to make non-expressive uses of copyrighted books works hand in hand with the commercialization of expressive uses, which is what copyright law is all about.

For more on this topic, see https://matthewsag.com/projects/google-book-copyright-the-digital-humanities/


Google Book, Settled and Unsettled.

According to Reuters, Google and the Association of American Publishers (AAP) have reached a settlement in the long-running Google Book Search Litigation. Details remain sketchy. The settlement does not affect Google’s current litigation with the Authors Guild.

Nature has just published a comment piece by Matthew Jockers, Jason Schultz, and myself explaining why humanities scholars filed amicus briefs in the Authors Guild v. Google and Authors Guild v. HathiTrust lawsuits. These suits are still very much alive, and it is not clear that the Authors Guild has the same incentives to settle as the AAP did.

Additional Links:

  • Joint Press Release
  • Techdirt comment that this is exactly what Google offered 7 years ago. “Basically, this settlement is AAP admitting that the entire lawsuit was a waste of time and money.”
  • James Grimmelmann’s summary: “the settlement does not change the situation on the ground in any significant way.”
  • Andrew Albanese of Publishers Weekly quotes AAP president Tom Allen saying “[we worked] out an arrangement that doesn’t resolve the legal issues. We agree to disagree on those, but as a practical matter, it does resolve our differences with Google.”


The origins of fair use

I have just added a page to this website devoted to the history of fair use. As I note in my article The Pre-History of Fair Use, 76 Brooklyn Law Review 1371–1412 (2011), fair use did not begin with early American cases such as Folsom v. Marsh in 1841, as many accounts assume. The fair use doctrine began over a century earlier, when English courts were considering issues of republishing and abridgment — the remix culture of the 1700s.

My main points are:

  • Copyright has always involved some balancing between authors’ rights and users’ rights. Fair use is part of the legal tradition of every country that traces its copyright law back to the Statute of Anne.
  • Fair use did not take away from authors’ rights; it made it possible for the courts to take a purposive reading of the copyright act that actually expanded authors’ rights.

Global Research Network on Copyright Flexibilities in National Legal Reform Meeting in DC

I am in DC today at the Global Research Network on Copyright Flexibilities in National Legal Reform Meeting.

Copyright reform is under active discussion at the national level in numerous countries. The goal of the Global Research Network on Copyright Flexibilities in National Legal Reform is to produce draft language for a flexible limitation and exception that could be included in national legislation. We expect to offer this language, which may include more than one model provision, to legislators and civil society advocates in countries contemplating copyright reform. Additionally, we aim to develop an online “tool kit” to assist these deliberations.

Brands, Competition, and the Law

On October 19, the Institute for Consumer Antitrust Studies is co-hosting a conference on Brands, Competition, and the Law along with University College London.  This is the follow-up to a very successful program on the same theme in London in December 2011.  A book with selected papers and comments from these conferences will be forthcoming.

We have assembled an all-star lineup of economists, marketing and branding professionals, as well as antitrust and IP lawyers and professors to try to reach a common understanding of the meaning and impact of brands in the marketplace and the appropriate legal regime. The full details and registration information for the conference are available at http://www.luc.edu/law/academics/special/center/antitrust/brands_competition_law.html.

The speakers include: Deven Desai, Kirsten Edwards-Warren, Phil Evans, Warren Grimes, Greg Gundlach, James Langenfeld, Ioannis Lianos, Deborah Majoras, Mark McKenna, John D. Mittelstaedt, John Noble, Barak Orbach, Joan Phillips, Matthew Sag, Eliot Schreiber, and Spencer Weber Waller.

Our Robot Overlords

There is a great story today on io9.com illustrating just why automatic copyright filtering can never be a complete solution to online copyright issues. In short,

Dumb robots, programmed to kill any broadcast containing copyrighted material, had destroyed the only live broadcast of the Hugo Awards.

Apparently, a licensed clip from Dr. Who (which would have been fair use even if it had not been licensed) triggered the filtering software and exterminated the webcast. Companies like Ustream are of course free to implement whatever dumb software they like, but if filtering becomes the norm we will all be subject to prior restraint by mindless automatons. I, for one, do not welcome our new robot overlords.
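To see why such filters misfire, consider a toy Python sketch (my own illustration, not Ustream’s actual system, and with invented names throughout). A fingerprint matcher can only answer whether content resembles a registered work; there is no input through which a license, or a fair use, could ever register.

```python
# A toy content filter: the only question it can ask is "does this segment
# match a registered work?" Whether the clip is licensed, or a fair use,
# is simply not information the filter receives.
REGISTERED_FINGERPRINTS = {"dr_who_clip"}  # hypothetical reference database


def fingerprint(segment: str) -> str:
    """Stand-in for a real audio/video fingerprinting algorithm."""
    return "dr_who_clip" if "doctor who" in segment.lower() else "no_match"


def should_block(segment: str) -> bool:
    # Matching is the filter's entire world view: licensed and fair uses
    # look exactly the same to it as infringing ones.
    return fingerprint(segment) in REGISTERED_FINGERPRINTS


# A lawful broadcast gets blocked anyway, because the filter cannot see the license.
print(should_block("Hugo Awards ceremony showing a licensed Doctor Who clip"))  # True
```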