Trove and the Australian National Library’s risk management approach to orphan works

On Tuesday I posted The Authors Guild, orphan works and civil rights? (Authors Guild v Hathitrust pt. 3), in which I addressed the arguments made on behalf of the plaintiffs on appeal in Authors Guild v. Hathitrust. The Authors Guild takes the rather extreme position that:

“Any iteration of the OWP [orphan works program] under which copyrighted works are made available for public view and download violates the Copyright Act.” (Authors Guild Ap. Br. page 13; see generally pages 13-14).

Their appeal brief poses the question:

“Is it ever lawful to take an entire copyright-protected book and make it widely available for display and download without permission?” (Authors Guild Ap. Br. page 13).

I believe that the answer to that question is YES. On Tuesday I gave an example of an individual orphan work made accessible on the Civil Rights Movement Veterans Website. Today I want to extend that discussion to an entire orphan works program.

Trove, The Australian National Library and Orphan Works

Trove is the Australian National Library’s primary vehicle to assist users to access digital content held by collecting institutions across Australia. Trove is used by tens of thousands of Australians every day.

In July 2008, Trove opened up Australian newspaper articles published from the 1800s to 1955 to full-text searching.

Trove goes beyond 1955 by agreement with newspaper publishers, but for anything prior to 1955 the NLA and the libraries in its network work on the assumption that there is no requirement to obtain permission. (See e.g., Selection Policy (http://www.nla.gov.au/content/selection-policy) “The newspapers must not have copyright restrictions i.e. anything before 1955 is suitable”).

An implicit orphan works policy

In selecting 1955 as the cut-off date, the NLA has adopted what they would call a sensible risk management policy and I would call an orphan works policy. Under Australian copyright law (Australian Copyright Act 1968) the date on which the copyright in a literary work expires depends on the date of publication and the date of the death of the author.

  • If a literary work was published in the lifetime of the author, and that author died before January 1, 1957, the work is out of copyright.
  • Any literary work published in the lifetime of an author who died on or after January 1, 1957 and before 2005, will be out of copyright 50 years after that author’s death.
  • If a work was first published anonymously and the identity of the author cannot be ascertained on reasonable inquiry, the period of copyright protection is measured from the year of publication and not the year of the author’s death. (See Section 34 of the Australian Copyright Act 1968).
  • The law relating to photographs in Australia is a little easier: any photograph taken before 1955 is in the public domain. (See Section 33 of the Australian Copyright Act 1968).
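The duration rules in the list above can be condensed into a small decision function. This is a rough illustrative sketch of the rules as stated for literary works published in the author's lifetime; the function name is my own, and the final branch for post-2004 deaths (the life-plus-70 term introduced by later amendments) is an addition not covered by the list above.

```python
def au_literary_copyright_expired(death_year, as_of=2013):
    """Sketch of the Australian duration rules described above, for a
    literary work published in its author's lifetime. Illustration only;
    the actual Act has many more edge cases."""
    if death_year < 1957:
        # Author died before 1 January 1957: the work is out of copyright.
        return True
    if death_year < 2005:
        # Copyright lasts 50 years after the author's death.
        return as_of > death_year + 50
    # Deaths from 2005 onward fall under the later life-plus-70 term
    # (an assumption added here, not part of the list above).
    return as_of > death_year + 70

# The example from the text: published 1953, author died 1973 --
# copyright runs until 2023, so the work is still protected in 2013.
print(au_literary_copyright_expired(1973))  # False (still in copyright)
print(au_literary_copyright_expired(1950))  # True (public domain)
```

This also makes the 1955 cut-off legible as risk management: an author whose work appeared before 1955 is much more likely to have died before 1957, but the function shows why the date is only a proxy, not a guarantee.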

Newspapers contain works by many different authors. For each individual article in a newspaper, the period of copyright protection is measured from the death of the author, even if the author assigned the copyright to the publisher.

What does all this mean for library digitization?

In 2013 the odds are pretty good that anything published in, say, 1950, is in the public domain in Australia. But if a work was published in 1953 and the author died in 1973, then the copyright would not expire until 2023.

Before he retired in 2011, Warwick Cathro was the Assistant Director-General, Resource Sharing and Innovation at the NLA. Warwick was a pioneer in the delivery of innovative network services to the Australian library community and is considered the founder of Trove. I spoke to Warwick about the NLA’s approach to newspaper digitization and he said:

“The NLA thus took a “risk management” approach to copyright issues in its newspaper digitization program.

We did this because of the manifest public benefit in digitising this content. We never attempted to clear copyright in individual articles; how could we ever do this for tens of millions of articles?

To my knowledge, in the five years since this content has been made available online, not one copyright owner has objected. If any were to do so the NLA would discuss the purpose of its digitization program and seek permission to include the creator’s work in the newspaper database. If this could not be negotiated the NLA would take down the item or article in question.”

Of course, it is much easier to get your lawyers to sign off on this kind of sensible risk management approach that respects the wishes of authors and maximizes public access to knowledge in a jurisdiction without statutory damages.

Orphan works projects are not just a stalking horse for Silicon Valley internet companies, nor are they simply the whimsical playthings of obscure institutions happy to work in legal grey areas. Making orphan works available to the public should be one of the core missions of American libraries. Libraries could pursue this mission more easily if statutory damages were abolished and pragmatic risk management prevailed over the misinformed notion that the purpose of copyright is to prevent unauthorized use solely for the sake of prevention.

The Authors Guild, orphan works and civil rights? (Authors Guild v Hathitrust pt. 3)

Introduction and Necessary Disclaimer 

This is one of a series of posts concerning the Authors Guild v. Hathitrust case; specifically, these posts take the form of commentary on the Authors Guild Appeal Brief (February 25, 2013). Although I am one of the authors of the Digital Humanities and Law Scholars Amicus Brief, the views expressed on this site are purely my own. My comments on the Authors Guild’s Appeal Brief will not be comprehensive; rather, my aim is to review the aspects of the brief that I found interesting.

Today’s topic …

What is the Authors Guild really saying about orphan works?

In some ways, the Authors Guild is the victim of its own success. The Authors Guild was quick to discover some defects in the way that the University of Michigan was determining orphan works status when the project was first announced in 2011. Exposure of those issues led to the suspension of that project before a single work was distributed to the public as an orphan work. The orphan works project might come back in some form at some stage, but at the moment there is no way for the court to know what kind of orphan works project it was being asked to rule on or whom it would affect.

In its appeal brief, the Guild responds to this predicament by arguing that the orphan works part of its case is ripe for adjudication because the details simply don’t matter – any orphan works project would be unlawful! See e.g.

“Any iteration of the OWP under which copyrighted works are made available for public view and download violates the Copyright Act. The pure legal question that was presented to the District Court is the same as it will always be: Is it ever lawful to take an entire copyright-protected book and make it widely available for display and download without permission?” (Authors Guild Ap. Br. page 13; see generally pages 13-14).

And later

“Plainly, existing copyright law does not permit the copying and distribution of the entirety of copyright-protected works to tens of thousands of users, irrespective of whether it might be difficult to locate the rights-holder.” (Authors Guild Ap. Br. page 17)

I don’t know how the defendants will respond to this argument and it is not an issue that fits within the scope of the Digital Humanities Amicus brief. Rather than diving into the legal arguments as to when and why the display of orphan works would be fair use, I thought it might be illuminating to consider an example.

Orphan works example: the Civil Rights Movement Veterans Website

On April 12, 2012, I attended the opening session of the Berkeley Law School’s “Orphan works and Mass Digitization” conference. The topic of the first panel was “Who wants to make use of orphan works and why.” In the course of that panel, Bruce Hartford, the webmaster of the Civil Rights Movement Veterans Website, told a story so fascinating that it is worth setting out in full.

The Civil Rights Movement Veterans Website recounts the history of the civil rights movement:

“This website is created by Veterans of the Southern Freedom Movement (1951-1968). It is where we tell it like it was, the way we lived it, the way we saw it, the way we still see it. With a few minor exceptions, everything on this site was written, created, or spoken by Movement activists who were direct participants in the events they chronicle.” (http://www.crmvet.org)

Much of the material on the Civil Rights Movement Veterans website is used with permission or requires no permission because it is in the public domain. However, according to Hartford, that still leaves a significant proportion of material that he would classify as orphan works. When Hartford uses the term orphan works he means (i) material that was originally copyrighted by an organization which no longer exists and made no provision for its copyrights upon dissolution; (ii) material where the copyright owner cannot be found; or (iii) material where the identity of the copyright owner was always unknown.

The photo below is of James Forman (October 4, 1928 – January 10, 2005), an American Civil Rights leader active in the Student Nonviolent Coordinating Committee.

[Photograph of James Forman]

As Hartford described it:

“The camera was smuggled into the jail, given to an unknown prisoner who clicked the button and took the picture. Under copyright law, as I am told, the copyright to the picture is owned by the unknown prisoner who pressed the button on the camera, who then gave it back to whoever smuggled the camera into the prison, to smuggle it out of the prison.

 

Now I know this is off topic, but I am just going to say, some of us are a little annoyed about this stupid rule that the person who presses the button totally owns the rights and those of us who are risking our lives to do whatever it was that they were taking the picture of have no say so in whatever happens to that and they can make lots of money on it and we can look and weep.”

Take another look

Take another look at the photo of James Forman, consider what it means to the Civil Rights Movement Veterans Website and ask yourself, can it really be true, as the Authors Guild states in its brief, that “[p]lainly, existing copyright law does not permit the copying and distribution of the entirety of copyright-protected works to tens of thousands of users, irrespective of whether it might be difficult to locate the rights-holder.” (Authors Guild Ap. Br. page 17)?

Not everything is the same as everything else – Authors Guild v Hathitrust (pt. 2)

Introduction and Necessary Disclaimer 

This is one of a series of posts concerning the Authors Guild v. Hathitrust case; specifically, these posts take the form of commentary on the Authors Guild Appeal Brief (February 25, 2013). Although I am one of the authors of the Digital Humanities and Law Scholars Amicus Brief, the views expressed on this site are purely my own. My comments on the Authors Guild’s Appeal Brief will not be comprehensive; rather, my aim is to review the aspects of the brief that I found interesting.

Today’s topic …

Not everything is the same as everything else 

Legal argument is the art of analogizing and distinguishing, drawing out the implications of things already decided in ways that suggest a favorable outcome for matters still in dispute. Thus, in copyright cases it is quite common to read that x (new thing) is the same as/totally different from y (old thing). The Authors Guild’s brief engages in quite a bit of this kind of argument, but mostly without saying so explicitly. In particular, their brief contains three examples of false equivalence that simply don’t add up.

  1. The Authors Guild implicitly suggests that the defendants’ orphan works project is the same as the Authors Guild’s own proposal to deal with orphan works in the Google Book Search Settlement. It isn’t.
  2. The Authors Guild argues that the defendants’ orphan works project is a substitute for orphan works legislation. It isn’t.
  3. The Authors Guild brief proceeds as though library digitization were the same as library photocopying. It isn’t.

The Universities’ Orphan Works Project v. the Google Book Search Settlement

Most of the Authors Guild’s ink is spilt on the universities’ proposed orphan works project (OWP). The idea behind the defendants’ OWP appears to be that out-of-print books published in the U.S. between 1923 and 1963 should be made available for educational use if the rights holders cannot reasonably be located. The University of Michigan proposed a method to automate the identification of orphan works for this purpose in 2011. However, the exact nature of this particular project is yet to be determined because, after the Authors Guild filed suit against HathiTrust et al., the University of Michigan announced that the OWP would be temporarily suspended. The University of Michigan candidly admitted that the procedures used to identify orphan works had allowed some works to make their way onto the Orphan Works Lists in error.

The Authors Guild Appeal Brief contains the implicit suggestion that the defendants’ OWP is the same as the audacious exploitation of orphan works that the Authors Guild itself proposed under its Settlement Agreement with Google.

It is true that, as noted at page 10 of the Guild’s Appeal Brief, “a mechanism to help resolve the orphan works issue was one of the key aspects of the attempted settlement of the Google Books case”.

It is also undeniable that Judge Chin commented “the establishment of a mechanism for exploiting unclaimed books is a matter better suited for Congress than this Court”. (Authors Guild v. Google, Inc., 770 F. Supp. 2d 666 (S.D.N.Y. 2011))

But Judge Chin was evaluating the fairness of the private settlement between Google and the Authors Guild; he was not commenting on the question of whether the display of any orphan works under any circumstances could be fair use, nor was he reviewing anything remotely like the libraries’ much more limited orphan works program.

The Authors Guild proceeds as though the modest orphan works program announced by the university defendants is the same in substance as the universal bookstore rejected by Judge Chin in 2011. (See e.g., Authors Guild, page 10 “Unhappy with Judge Chin’s decision, [University of Michigan] decided to take the law into its own hands by unilaterally initiating its own program.”) This strikes me as false equivalence.

Under the default settings of the now defunct settlement (proposed 2008, amended 2009, rejected 2011), Google would have been allowed to display up to 20% of a non-fiction work to the entire world and to sell books through consumer purchases and institutional subscriptions. Funds from the sale of orphan works were to be held by a ‘book rights registry’ for safe keeping and eventual distribution to worthy causes. [Under the original Settlement Agreement, the revenues attributable to orphan or unclaimed works would have flowed in part to the ‘book rights registry’ and in part to registered authors and publishers.]

The details of the OWP that the defendants may or may not eventually undertake are unclear, but their public statements indicate that any such project would be grounded in non-commercial, limited, educational use. Moreover, whereas the settlement would have treated all books whose copyright owners failed to notify the registry of their interests as orphan works, the University of Michigan is working on a method to reliably identify a much smaller subset of true orphan works.

Whatever it turns out to be, the Universities’ orphan works project will not be the same as the Authors Guild’s own proposal to deal with orphan works in the Google Book Search Settlement.

The Universities’ Orphan Works Project v. Orphan Works Legislation

The Authors Guild Appeal Brief also conflates the universities’ OWP with various legislative solutions that have been proposed over the years in relation to the widely recognized orphan works problem. See for example Authors Guild Ap. Br. at page 15 “Despite clear indications by courts and the Copyright Office that the treatment of orphan works should be left to Congress, the Libraries insist that the OWP is legal.” (There is another example on page 10).

Does it really make sense that Congress’ failure to comprehensively or partially legislate a solution to the problem of orphan works means that the use of orphan works is never allowed under any circumstances, no matter how limited or irrespective of the reason? Congress could act to make out of print works universally available under terms similar to the Authors Guild’s proposal in the Google Book Search settlement, but so what? The mere fact that Congress could in theory set out a system that is broader than the limited scope for orphan works display that would be viable as fair use does not mean that there is no fair use.

Whatever it turns out to be, there is no basis to think that the university defendants’ orphan works project is a substitute for orphan works legislation.

Library Digitization v. Library Photocopying

If you proceed from the assumption that all unauthorized uses of a book are piracy, then it makes sense that every new technology is just a new version of the photocopier. The Authors Guild Appeal Brief can certainly be read as adopting the latter view.

The brief argues that “[t]he mechanical conversion of printed books into digital form is not transformative because it does not add any ‘new information, new aesthetics, [or] new insights and understandings,’ to the books.” (citing Pierre Leval, Toward a Fair Use Standard, 103 Harv. L. Rev. 1105, 1111 (1990).) True, there is solid authority that photocopying and cable retransmission are not per se transformative (i.e., without looking at the reasons), but to suggest that library digitization offers no new insights is unsustainable.

Library digitization raises several different issues depending on the purpose behind that digitization and the uses that are subsequently made of the digitized texts. Library digitization could be motivated by any or all of the following:

  1. to preserve existing volumes
  2. to facilitate text-mining, data analysis and digital searching of the contents of books
  3. to facilitate access to electronic versions of books

The legal issues relating to each of these genres must be considered separately, but the Authors Guild’s brief muddles them all together. Digitization does look a bit like other forms of copying if the motivating purpose is access to or display of expressive works (i.e., #3 above). However, the argument in favor of a limited, non-commercial and education-focused orphan works project turns not on transformative use, but on other considerations such as the lack of market harm [See Jennifer M. Urban, How Fair Use Can Help Solve the Orphan Works Problem (June 18, 2012)].

Likewise, the argument in favor of library digitization to facilitate disabled access is much broader than the details of the underlying technology. Whether we use the label transformative or not, this is clearly a favored purpose under the first fair use factor. The provision of equal access to copyrighted information for print-disabled individuals is mandated by the Americans with Disabilities Act (ADA). The HathiTrust provides print-disabled individuals with access to millions of items within library collections, whereas in the past they merely had access to a few thousand at best. “Making a copy of a copyrighted work for the convenience of a blind person is expressly identified by the House Committee Report as an example of a fair use, with no suggestion that anything more than a purpose to entertain or to inform need motivate the copying.” (Sony Corp. of Am. v. Universal City Studios, Inc, 464 U.S. 417, 455 n.40 (1984)).

The claim that library digitization is just like photocopying and does not offer any new insights crumbles completely when one considers the non-expressive uses such digitization makes possible. Library digitization makes it possible to extract meta-data from books and to create a useful search engine. Search indexing, text-mining and other computational uses of text could not be more different from mere photocopying; the “new information” and “new aesthetics” they offer include:

  • Text-based searching
  • Research on the structure of language
  • Research on the use of language.

The database as a whole serves a different purpose than each of the constituent works that have been scanned and indexed. The individual works provide content to readers; they convey the authors’ original expression. The database as a whole provides a means of searching for and identifying books, or of analyzing the language within books.
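The contrast can be made concrete with a toy inverted index. The titles and sentences below are invented stand-ins for full book texts, and real book-search systems are far more elaborate, but the structural point survives even at this scale: building the index requires processing every word of every book, while a query returns only the titles of matching books, never the expression itself.

```python
from collections import defaultdict

# Invented stand-ins for full book texts.
books = {
    "Book A": "The whale swam beneath the ship at dawn",
    "Book B": "The ship sailed out of the harbor at dusk",
}

# Building the index requires copying and processing every word...
index = defaultdict(set)
for title, text in books.items():
    for word in text.lower().split():
        index[word].add(title)

# ...but a query returns only which books contain the term; none of
# the authors' expression is reproduced in the result.
print(sorted(index["ship"]))   # ['Book A', 'Book B']
print(sorted(index["whale"]))  # ['Book A']
```

The reader of the search result learns where to find a book, not what the book says, which is precisely the nonexpressive character of the use.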

Labels like transformative use and nonexpressive use can be helpful in grouping like cases together, but they can also be distracting. The issue of fair use is directly tied to a purposive reading of the Copyright Act and the purpose of copyright is clearly articulated in the U.S. Constitution—“[t]o promote the Progress of Science and useful Arts. . . .”  As the Supreme Court stated in Campbell, the “central purpose” of the fair use investigation is to see, “whether the new work merely supersedes the objects of the original creation, or instead adds something new, with a further purpose or different character, altering the first with new expression, meaning, or message…”

The plaintiffs argue that library digitization is utterly untransformative, but in fact, digitization enabling book search and text-mining clearly leads to “new information, new aesthetics, new insights and understandings.”

For example, as we explained in the Digital Humanities Amicus Brief:

“Google’s “Ngram” tool provides another example of a nonexpressive use enabled by mass digitization—this time easily visualized. Figure 1, below, is an Ngram-generated chart that compares the frequency with which authors of texts in the Google Book Search database refer to the United States as a single entity (“is”) as opposed to a collection of individual states (“are”).

[Figure 1: Ngram chart comparing the frequency of “the United States is” and “the United States are” over time]

As the chart illustrates, it was only in the latter half of the Nineteenth Century that the conception of the United States as a single, indivisible entity was reflected in the way a majority of writers referred to the nation.  This is a trend with obvious political and historical significance, of interest to a wide range of scholars and even to the public at large.  But this type of comparison is meaningful only to the extent that it uses as raw data a digitized archive of significant size and scope. To be absolutely clear, 1) the data used to produce this visualization can only be collected by digitizing the entire contents of the relevant books, and 2) not a single sentence of the underlying books has been reproduced in the finished product. In other words, this type of nonexpressive use only adds to our collective knowledge and understanding, without in any way replacing, damaging the value of, or interfering with the market for, the original works.”
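The kind of tabulation behind such a chart can be sketched in a few lines. The mini-corpora below are invented stand-ins for texts from two eras; the actual Ngram tool derives its counts from the full digitized Google Books corpus, which is exactly why the underlying books must be scanned in their entirety.

```python
import re

# Invented mini-corpora standing in for digitized texts from two eras.
corpus_1850 = ("The United States are divided on the question. "
               "The United States are young still.")
corpus_1900 = ("The United States is a world power. "
               "The United States is one nation.")

def usage_counts(text):
    """Count occurrences of 'United States is' vs 'United States are'."""
    return {verb: len(re.findall(r"\bUnited States %s\b" % verb, text))
            for verb in ("is", "are")}

print(usage_counts(corpus_1850))  # {'is': 0, 'are': 2}
print(usage_counts(corpus_1900))  # {'is': 2, 'are': 0}
```

Note that the output is pure metadata about word frequency: no sentence from either corpus appears in the result, mirroring the point made in the amicus brief.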

Library digitization is not the same as library photocopying.

The digital humanities is alive and well in South Bend, Indiana

I will be at Notre Dame on Friday, April 12, to give a lunchtime talk to the Working Group on Computational Methods in the Humanities and Sciences on copyright, text analysis, and the legal issues involved in digital humanities research. I’ll be speaking at an event organized by Assistant Professor Matthew Wilkens, who works on contemporary fiction, literary theory, digital humanities, and social studies of science.

Copyright law is based on a set of rules developed in the 18th Century to regulate the printing press. Today’s copyright law still carries with it the legacy of print-era assumptions that have been profoundly disturbed by the digital economy. My talk will focus on the impact of successive waves of technology on copyright law and explain why the non-expressive use of copyrighted works by copy-reliant technologies presents a profoundly new issue for copyright law.

My interest in the digital humanities grew out of earlier work on Internet search engines and plagiarism detection software. Text mining software and other copy-reliant technologies do not read, understand, or enjoy copyrighted works, nor do they deliver these works directly to the public.  They do, however, necessarily copy them in order to process them as grist for the mill, raw materials that feed various algorithms and indices.

Logistical details on the talk are available here and here.

 

The Imaginary Conflict Between Fair Use and International Copyright Law

Introduction

Fair use opponents and skeptics often question whether an open standard that relies on judicial application is compatible with international treaty obligations. In my view, outside the European Union, there is no merit in that contention. This post is my first attempt to clarify why there is no conflict between fair use and international copyright law.

Copyright is international

Since 1886, national copyright laws have been regulated by international agreements. In the era of globalization it is not surprising that copyright law is now covered by a multitude of overlapping international, regional and bilateral agreements.

The foundational international agreement concerning copyright law is the Berne Convention for the Protection of Literary and Artistic Works of 1886. The Berne Convention has been revised many times and is now supplemented by a host of international trade agreements including the TRIPs Agreement (Agreement on Trade-Related Aspects of Intellectual Property Rights) (part of the WTO framework), the WIPO Copyright Treaty and various regional (e.g., NAFTA) and bilateral free trade agreements (e.g., the Australia-US FTA). These agreements provide for mutual recognition of copyrights and establish minimum standards for copyright protection.

Questioning Fair Use

Fair use opponents and skeptics often question whether an open standard that relies on judicial application is compatible with the so-called “three step test” for limitations and exceptions contained in Berne and many subsequent agreements.

The Berne Convention first came into being in 1886, but prior to 1967 the rights guaranteed by the convention were expressed quite specifically – one article dealt with works “published in the newspapers or periodicals”, another dealt with “the exclusive right of authorizing the reproduction and public representation of their works by cinematography.” (Article 9 and Article 14). In 1967 the members of the Berne Convention adopted a very broadly expressed reproduction right in Article 9(1), which states simply:

“Authors of literary and artistic works protected by this Convention shall have the exclusive right of authorizing the reproduction of these works, in any manner or form.”

As a counterweight to this omnibus reproduction right, the new Article 9(2) provided:

“It shall be a matter for legislation in the countries of the Union to permit the reproduction of such works in certain special cases, provided that such reproduction does not conflict with a normal exploitation of the work and does not unreasonably prejudice the legitimate interests of the author.”

Those who doubt the legitimacy of fair use under the Berne Convention argue that fair use is too broad and uncertain and is thus not properly confined to “certain special cases” as Article 9(2) requires. In my view, this argument falls somewhere along the continuum between misguided and mischievous. Fair use is fundamentally consistent with the three step test as a matter of international law; moreover, so long as the U.S. retains the fair use doctrine, no other country could seriously be challenged for its adoption of fair use as a matter of international politics. This second proposition is so obvious that I will confine the remainder of my observations to the more interesting legal question.

The three step test is a broadly applicable standard. Consider:

   1. Drafting History

The drafting history of Berne Article 9(2) reveals that it was not intended as a rigid prohibition on limitations and exceptions to copyright, but rather as an abstract open formula capable of encompassing a wide range of exceptions. (See generally, Martin Senftleben, Copyright, Limitations, and the Three-Step Test: Analysis of the Three-Step Test in International and EC Copyright Law, Kluwer Law International, 2004.) Martin Senftleben has made a detailed study of the history of the three step test, beginning with its adoption in 1967. Senftleben’s research shows that an early draft of 9(2) used the language “in certain particular cases where the reproduction is not contrary to the legitimate interests of the author.” This text was modified at the suggestion of the United Kingdom delegation to the now familiar language — “in certain special cases where the reproduction does not unreasonably prejudice the legitimate interests of the authors.”

Although fair dealing in the U.K. by the late 1960s was arguably narrower than fair use in the U.S., it was nonetheless an abstract standard (applied to particular circumstances) requiring judicial application and development. It is inconceivable that the U.K. intended to abandon fair dealing when it suggested that limitations and exceptions be limited to “certain special cases”. Read as a single sentence, Article 9(2) is a general statement that does as much to enable limitations and exceptions as it does to confine them. As is so often the case in international agreements, this generality was necessary in order to reconcile the many different types of exceptions various nations had already adopted.

   2. Subsequent International Agreements

Since the Berne Convention, several different versions of the three step test have been incorporated into international agreements such as TRIPs, the WIPO Copyright treaty, NAFTA and U.S. Free Trade Agreements with Australia, Bahrain, Chile, Jordan, Morocco, Singapore, and South Korea.

In some ways this proliferation seems to confuse the question of how the standard should be interpreted, but in other ways it is also clarifying. Admittedly, variations on the exact text of the three step test are confusing. Article 13 of the TRIPs Agreement of 1994 (part of the World Trade Organization framework) contains a version of the three step test which tracks the language of Berne, except that rather than “permit[ting] reproduction in certain special cases”, TRIPs instructs member nations to “confine” any limitations and exceptions to “certain special cases”. TRIPs also changes Berne’s reference to “the author” to “the right holder”. These changes are repeated in Article 10 of the WIPO Copyright Treaty of 1996. TRIPs also contains a slightly different three step test for trademark law and another for patent law, but the drafting history offers no explanation as to the motivation or significance of these subtle differences. (See, Annette Kur, Oceans, Islands, and Inland Water – How Much Room for Exceptions and Limitations under the Three-step Test?)

If there was ever any doubt as to the flexibility of the three step test framework, that doubt should have been dispelled by the Agreed Statement Concerning Article 10 of the WIPO Copyright Treaty. That statement reads in part:

It is understood that the provisions of Article 10 permit Contracting Parties to carry forward and appropriately extend into the digital environment limitations and exceptions in their national laws which have been considered acceptable under the Berne Convention. Similarly, these provisions should be understood to permit Contracting Parties to devise new exceptions and limitations that are appropriate in the digital network environment.

   3. State practice

The initial three step test was adopted before the U.S. joined the Berne Convention (this only happened in 1989). The U.S. made a number of changes to its law to fit into the Berne framework, principally concerning notice and registration requirements. The fair use doctrine does not appear to have been considered as an obstacle to Berne compliance at the time and it is hard to imagine that the U.S. would have agreed to the convention if it believed that such a central aspect of its copyright law was not Berne compatible. It is even harder to imagine that the U.S. would actively promote the incorporation of a three step test into TRIPs in 1994, the WCT in 1996 and FTAs with Australia, Bahrain, Chile, Jordan, Morocco, Singapore, and South Korea if the fair use doctrine really presented a fundamental conflict.

How should the three step test be applied?

The three-step test is an open-ended norm. There is nothing in the history or text of Berne, TRIPs or the WIPO Copyright Treaty that indicates any intention to abandon common law adjudication for limitations and exceptions. Even the one WTO panel decision that is frequently cited for a restrictive view of the words “certain special cases” also notes: “However, there is no need to identify explicitly each and every possible situation to which the exception could apply, provided that the scope of the exception is known and particularised. This guarantees a sufficient degree of legal certainty.” (Panel Rep. of 15 June 2000, United States-Article 110 (5) of the US Copyright Act, WT/DS160/R. at 6.108)

In a complex world, rules are not the only path to certainty.

In a fair use system, the contours of limitations and exceptions to exclusive rights are developed through adjudication – as the jurisprudence matures, the degree of legal certainty increases. Rule-based limitations and exceptions are quite vulnerable to technological rigidity and their application can hinge on arcane debates over taxonomy – these features can make rules perennially uncertain.

What does it mean to be special?

Like the fair use doctrine itself, the three step test is about harm and justification. The three step test allows members to create their own limitations and exceptions to copyright so long as the limitation does not conflict with a normal exploitation of the work and does not unreasonably prejudice the legitimate interests of the author (Berne Article 9(2)). These requirements are preceded in Berne by the words “in certain special cases”, but these words are as much descriptive as they are limiting. Copyright exceptions that do not conflict with a normal exploitation of the work and do not unreasonably prejudice the legitimate interests of the author are not the norm; they are “special cases.”

Consider the implications of reprinting a novel. Ordinarily, copyright law gives the novelist the exclusive right to reproduce her own work, and this exclusive right is the means by which profit is obtained – she can sell that right, or sell copies of the work – this potential profit is the incentive that urges her to write the novel in the first place. Usually, a law that allows others to copy the novel without permission directly challenges the novelist’s control over her work and her ability to monetize her creation. But there are cases where copying should be allowed because it does not conflict with a normal exploitation of the work and does not unreasonably prejudice the legitimate interests of the author – in the classic printing-press paradigm of copyright law these cases are unusual, some would even say “special”. In the printing press paradigm, justified exceptions allowing wholesale reproduction would be rare. In the digital age, justified exceptions to the reproduction right are not quite as rare, but they are still “special” in the relevant sense. Many private uses, many uses where access has already been paid for, de minimis uses, fractional uses for socially beneficial purposes, and non-expressive uses should all readily be seen as special.

Recasting 107 in the language of Berne

To understand the consistency of fair use with the three step test, consider how the former doctrine can be reformulated in the language of the latter. There is nothing magical or sacrosanct about the particular language that the U.S. used to try to codify fair use in 1976. A Berne compliant fair use provision could just as well read:

“… the fair use of a copyrighted work in certain special cases, such as news reporting, quotation, commentary, criticism and analysis, scholarly research, is not infringement. In determining whether a use is a fair use a court shall consider: (1) whether the use conflicts with a normal exploitation of the work and thus tends to substitute for the work of the copyright owner; (2) whether the use is likely to prejudice the legitimate copyright interests of the copyright owner to a degree that is unreasonable in light of the purpose and nature of the use; and (3) whether the extent of the use is reasonable in light of the purpose and nature of the use.”

Recommendation

Countries like Australia that are currently considering the adoption of their own fair use doctrine should not be deterred by an imaginary conflict between fair use and international copyright law.

The Lost Tradition of Fair Use in English and Colonial Copyright Law, Comments on Ariel Katz, Fair Use 2.0

Ariel Katz, Fair Use 2.0: The Rebirth Of Fair Dealing In Canada (Draft, Jan. 24, 2013)

In previous work I have highlighted the English origins of the modern fair use doctrine in abridgement cases from 1710 to 1841 (“The Pre-History of Fair Use” (2011) 76:4 Brooklyn Law Review 1371). The question that I failed to address is: if fair use was part of the English copyright law tradition, why do England and her former colonies now adhere to a much narrower concept of fair dealing? Ariel Katz’s new paper gives us some answers to this question.

Conventional wisdom holds that in Commonwealth jurisdictions like England, Australia, New Zealand and Canada fair dealing cannot apply beyond the explicitly enumerated purposes. In the U.S. by contrast, the statutory purposes are just illustrations. Thus we are left with (in Katz’s words) an “omnipresent flexible fair use regime in the United States, and a seemingly rigid and restrictive fair dealing tradition in the Commonwealth countries.”

Katz’s bold claim is that the conventional wisdom is wrong!

“…the history of fair use and fair dealing and shows that … the enactment of the Imperial Copyright Act of 1911 [was] not designed to cause any major alteration in the common law of fair dealing, and the explicit recognition of five enumerated purposes in the (then) newly-enacted fair dealing provision was not intended to limit the principle of fair dealing exclusively to those five purposes.” (page 3)

Katz makes a strong argument that University of London Press, Ltd. v. University Tutorial Press, Ltd. [1916] 2 Ch 601, the first reported case on the newly enacted English “fair dealing” provision of the 1911 Copyright Act, may have been misread over the years. But I think that the strongest parts of the paper are his treatment of the legislative history of the 1911 Act and its contemporary reception.

The legislative history of the Imperial Copyright Act of 1911

The 1911 Act provided that: “Any fair dealing with any work for the purposes of private study, research, criticism, review, or newspaper summary” shall not constitute an infringement of copyright. Katz argues that there is no indication in the legislative history that the 1911 Act was meant to curtail fair use or freeze it in time. Having studied the introduction of the bill in Parliament and the House of Lords, he notes (at page 26) that

“If the Bill contemplated major reform with respect to fair dealing, it would have been expected that such change would be mentioned, but it was not. Nor did Viscount Haldane, who introduced the Bill to the Lords, mention any contemplated change with respect to fair dealing.”

Quite the contrary, Viscount Haldane in the House of Lords stated:

“All we propose to do is to declare that for the future the principle of fair dealing which the Courts have established is to be the law of the Code. … The principle of fair dealing is a principle which the Courts have applied with the greatest care. … All that is done here is to make a plain declaration of what the law is and to put all copyright works under the same wording.”

Why codify fair dealing if no change to fair use was intended?

Katz notes (page 25) that

“a simple explanation [for the codification of fair dealing] might be that since the 1911 Act was mainly a project of consolidation of different acts and codification of different common law rules, it seemed prudent not to leave fair use without any statutory basis. … Another explanation … if the Act only recognized the expansion of the copyright but remained silent about limitations to those expanded rights, court might have interpreted that as a signal that Parliament had decided to abolish fair use.”

Katz argues that the fair dealing provisions in the 1911 Act may even have been an attempt to expand fair use.

“… it is possible that the purpose of specifying the five categories was not only to remove any doubts that fair dealing applied to those already recognized in the case law, but also to ensure that it applied to those who lacked solid grounding in the case law. In particular, the addition of ‘newspaper summary’ and ‘private study’, categories that had no direct precedent in the case law, can support this explanation.” (pages 25-26).

Reaction of Treatise Writers

Katz also does a wonderful job of surveying the contemporary reaction of copyright treatise authors to the 1911 Act. He summarizes (at page 30):

“… if by enacting the fair dealing provision Parliament had intended to modify the existing doctrine of fair use by confining it to five enumerated categories exclusively, most of the contemporaneous commentators failed to notice that intention.”

Some examples lifted from Katz’s paper:

J.M. Easton, Copinger on Copyright, 5th edition.

“[a]ny fair dealing, with, any work for the purposes of private study, research, criticism, review, or newspaper summary is also expressly permitted by the Act.”

“fair dealing for other purposes has always been … permitted and, presumably, it was not intended to cut down the rights of fair user previously enjoyed under the old law.”

JB Richardson, The Law of Copyright (London: Jordan & Son, 1913)

“The passing of The Copyright Act, 1911, has completely recast the Law of Copyright, at any rate those parts which depend primarily on Statute Law, such as the term of protection and ownership of copyright. Only those parts of the law which are practically judge-made—such as the questions as to infringement by a new work other than an exact copy—have remained to any great extent unaltered, and even they are not untouched.”

LCF Oldfield, The Law of Copyright (London: Butterworth & Co., 1912)

“[w]hat is fair dealing with a work depends upon the circumstances of each particular case”

How did the restrictive view of the 1911 Act come to dominate?

In terms of copyright law treatises, Katz’s research indicates that “[t]he view that Parliament had intended to restrict fair dealing to the five enumerated purposes began appearing later. … In 1927, the sixth edition of Copinger was published. This edition was no longer authored by Easton, but penned by F. E. Skone James and published by a different publisher.” (page 30)

Katz argues that the University of London Press case of 1916, which is treated as confirming the narrow scope of fair dealing, has long been misunderstood. If he is correct, what deserves further exploration is why such a misunderstanding should have taken such firm hold of copyright law in England, Australia, New Zealand … and until recently, Canada.


Embracing the Digital Economy – the 2013 Australian Digital Alliance Copyright Forum

The Australian Digital Alliance will hold its 2013 annual copyright forum, ‘Embracing the Digital Economy: creative copyright for a creative nation’. The forum will consider how Australia’s copyright framework fits in with the ‘digital world’. This is a timely contribution, given that the Australian Law Reform Commission is in the middle of an Inquiry into Australia’s copyright framework to determine whether existing exceptions are adequate and appropriate in the digital environment.

Australia’s copyright framework has not kept pace with technology or society. The digital age has profoundly changed the nature of creation and distribution. The divide between ‘producers’ and ‘consumers’ has been blurred in some cases and eliminated altogether in others. The production and distribution of creative works is more democratic and more chaotic than ever before.

The ADA forum will explore a variety of new technologies, business models, and education and cultural services being provided online, and how they fit into our existing copyright framework.

I am honored to be presenting one of the keynote addresses at this important event. The other keynote will be delivered by New Zealand internet law expert and District Court Judge David Harvey. I plan to discuss the results of my empirical study of fair use litigation, Predicting Fair Use.

This year’s forum takes place at the National Portrait Gallery in Canberra on Friday 1 March 2013 from 8:30am – 4:30pm, with pre-forum drinks on the evening of Thursday 28 February at the National Library of Australia.

More info is available at http://digital.org.au/content/2013-australian-digital-alliance-copyright-forum

2012 Global Congress on Intellectual Property and the Public Interest #gcongress

The Global Congress on Intellectual Property and the Public Interest is part of an attempt to define a positive agenda for policy reform in IP by building a global network of scholars and advocates.

I attended the Congress as part of the “Global Network on Copyright Limitations and Exceptions,” an international group of copyright law experts that is drafting a set of exceptions and limitations to copyright law that (ideally) any country could use to modernize its law, comply with international obligations such as the Berne Convention and TRIPs, and take advantage of the under-utilized flexibility those agreements allow. Our objective is

“to promote discussion of employing ‘open-ended’ limitations in national copyright legislation” and to support “the development of binding international agreements providing for mandatory minimum limitations and exceptions.”

Our view is that balancing mechanisms and user rights are an integral part of copyright policy, not an afterthought.

We were also extremely privileged to have a whole day of private meetings with Marcos Souza, Head of Copyright, Brazil Ministry of Culture. In these meetings, members of the L&E Network discussed proposals to amend Brazil’s copyright law with Brazilian copyright scholars and officials.

  • Model Flexible Use Clause, Version 4.0 (PDF)
  • Introduction to Text (PDF)
  • Appendix I: Presumptively Lawful Purposes (PDF)
  • Appendix II: Examples of Flexible Limitations and Exceptions from Existing and Proposed Laws (PDF)
  • Appendix III: Responding to Frequently Asked Questions About Flexible Use Provisions (PDF)

Sean Flynn, Michael Carroll, Peter Jaszi and Meredith Jacobs from American University have done a fantastic job coordinating this.

Other L&E Network members include: Ahmed Abdel Latif, Alberto Cerda Silva, Allan Souza, Andrew Rens, Bruno Lewicki, Carlos Affonso, Carolina Botero, Caroline Ncube, Denis Barbosa, Gwen Hinze, Hong Xue, Jennifer Urban, Jonathan Band, Leon Felipe Sanchez Ambia, Matthew Sag, Niva Elkin-Koren, Oliver Metzger, Pedro Mizukami, Pedro Paranagua, and Pranesh Prakash.

These people are all awesome!

The Authors Guild Does Not Speak for Academic Authors

Academic authors are being asked to stand by and watch as the Authors Guild litigates against their wishes and interests, but supposedly on their behalf.

This hubris is not exactly unprecedented. The plaintiffs in Hansberry v. Lee, 311 U.S. 32 (1940), sought to enforce a racially restrictive covenant on behalf of a broad class of landowners, including African-Americans who would be harmed by enforcement and whites who simply objected. Like the landowners in Hansberry, many academic authors disagree with the Authors Guild’s crusade against book digitization. The Supreme Court did not allow the plaintiffs to hijack the class in Hansberry; hopefully the Second Circuit will not allow the Authors Guild to do so in Authors Guild v. Google.

Pamela Samuelson and David Hansen (both of the University of California, Berkeley – School of Law) have filed a very important amicus brief on behalf of over 150 academic authors* in the Second Circuit Court of Appeals in Authors Guild v. Google. (Available on ssrn)

The brief in support of defendant-appellant Google argues that class certification should have been denied by the District Court because the named plaintiffs don’t represent the interests of academic authors who comprise a large proportion of the class.

The Authors Guild cloaks its lawsuit in the mantle of authorship, yet in reality it represents only a small fraction of the class it has constructed. Most of the books that Google scanned from major research library collections were written by academics.

The basic problem is that the three individual plaintiffs who claim to be class representatives are not academics and do not share the commitment to broad access to knowledge that predominates among academics.

The plaintiffs’ request for an injunction to stop Google from making the Book Search corpus available would be harmful to academic author interests. The only way for the interests of academic authors to be vindicated in this litigation, given the positions that the plaintiffs have taken thus far, is for Google to prevail on its fair use defense and for the named plaintiffs to lose.

As we explained in the Digital Humanities Amicus Brief in the district court, “[m]ass digitization, like that employed by Google, is a key enabler of socially valuable computational and statistical research (often called “data mining” or “text mining”),”  which allows researchers to discover and use the non-copyrightable facts and ideas that are contained within the collection of copyrighted works themselves.

The Authors Guild are bad representatives of the interests of academic authors because

  1. Academic authors would generally prefer their books be findable using Google Book Search.
  2. If the Authors Guild wins, academic authors will be deprived of a valuable resource, in the form of the Google Book Search Engine and the HathiTrust Digital Library.
  3. If the Authors Guild wins, text mining — the most basic tool of the Digital Humanities — will have been declared to be prima facie illegal.
* I was one of the signatories.


HathiTrust and the Future of Orphan Works

The U.S. Copyright Office is taking another look at the problem of orphan works under U.S. copyright law.

As the Copyright Office notice explains, the Office is “interested in what has changed in the legal and business environments during the past few years that might be relevant to a resolution of the problem and what additional legislative, regulatory, or voluntary solutions deserve deliberation.” Comments are due by 5:00 p.m. EST on January 4, 2013. Reply comments are due by 5:00 p.m. EST on February 4, 2013.

Assuming it is not reversed by the Second Circuit, does the HathiTrust win on October 10, 2012 take some of the urgency out of the orphan works issue? After all, digitization for non-expressive use such as text mining and building a search engine has now been confirmed as fair use. In addition, digitization in the service of expanding access for the print-disabled is also now clearly fair use.

Or does the HathiTrust win simply set the stage for addressing general purpose expressive access to orphan works? The district court in HathiTrust did not reach the merits of the copyright claims with respect to the universities’ Orphan Works Project and gave very little signal as to how it would decide such an issue.