Brexit Institute News

European Copyright Policy and the Digital Economy: The Fundamental Rights Dimension of Article 17 CDSM and the Challenge for the Member States

Kevin O’Sullivan (DCU)

In 2019, the European Union adopted its Directive on Copyright and Related Rights in the Digital Single Market (CDSM), aimed at updating the European copyright acquis for the opportunities and challenges of the digital economy. Controversial from the outset, article 17 of the CDSM in particular has stirred a hornet’s nest because of the changes it brings about to European policy for protecting copyright online.

In short, article 17 appears to mandate for the first time the deployment of upload filters on the platforms of what the CDSM terms ‘online content sharing providers’ – that is, large websites that make user-uploaded content available online, such as YouTube. These filters scan uploads and can block, in real time, any upload the filtering algorithm deems to include protected content.
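
To make the mechanics concrete, the sketch below shows, in deliberately simplified and purely illustrative form, the basic logic of such a filter. Real systems rely on perceptual fingerprinting rather than the exact hashing used here, and all names and data are invented.

import hashlib

# Reference fingerprints supplied by right holders (the 'relevant and
# necessary information' contemplated by the directive). Hypothetical data.
PROTECTED_FINGERPRINTS = {
    hashlib.sha256(b"<bytes of a protected work>").hexdigest(),
}

def fingerprint(upload: bytes) -> str:
    """Reduce an upload to a comparable fingerprint (here, a plain hash)."""
    return hashlib.sha256(upload).hexdigest()

def screen_upload(upload: bytes) -> str:
    """Decide, before publication, whether an upload may go live."""
    if fingerprint(upload) in PROTECTED_FINGERPRINTS:
        return "BLOCK"    # matched a right holder's reference work
    return "PUBLISH"      # no match: the upload goes live immediately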

Reaction has been fierce. Critics argue that article 17 upends two decades of European policy in this area and threatens internet users’ rights, such as freedom of expression; for these reasons, its compatibility with the European legal order has been seen as questionable.

Reflecting these concerns, in 2019, Poland launched legal proceedings under article 263 TFEU seeking annulment of article 17 in part or in full. On the 26th of April 2022, the Court of Justice delivered its ruling, saving article 17 from annulment and pushing the complicated question of implementation back to the member states. Before turning to the scope of the Polish challenge and the CJEU ruling, it is important to get a sense of why article 17 is so significant – and controversial. 

The Significance of Article 17 CDSM

Ever since the first file sharing network emerged in the late 1990s, policymakers on both sides of the Atlantic have struggled to design policies that afford adequate protection for copyright holders against online piracy. The challenge is to design policies that allow redress for existing piracy while deterring future infringements in an environment where the online sharing community of internet users numbers in the millions, globally.

When it comes to online enforcement, two avenues are available. The first is to target internet users directly in a bid to sanction some and deter others. In vogue in some member states circa 2008 – most notably France under the Loi HADOPI – ‘user level’ enforcement of this type was (and remains) unpopular with European policymakers. Following a backlash grounded in internet users’ fundamental rights, attention shifted to the second avenue for enforcement measures – online intermediaries.

Broadly, these intermediaries are companies or websites that facilitate online piracy to varying degrees: for example, internet service providers, piracy-facilitating indexing websites such as The Pirate Bay and – the target of the CDSM – popular content sharing websites.

All communications relevant to online piracy at some point run through the infrastructure of these companies. ‘Choke points’ within that infrastructure can be used to deploy technical measures to enforce online copyright, i.e. filtering and blocking. The result is that this non-user level enforcement typically allows for control mechanisms that largely side-step questions of internet users’ fundamental rights. 

Over the course of the last decade, Europe has led the way in requiring these intermediaries to do more to support right holders – from the bottom up, and now, with the CDSM, from the top down. From the bottom up, national courts and the Court of Justice have been supportive of greater obligations for such intermediaries, shifting the understanding of their role from one of passive non-liability to one of accountability.

Built into early legal frameworks from the start, the principle of accountability has only recently grown legs, due both to bottom-up litigation such as Europe’s ‘blocking injunction’ and to changed realities in the digital economy. In short, a policy forbearance that sought to insulate the generative potential of online intermediaries is increasingly untenable in an era in which these companies have grown into not only corporate but also social and cultural behemoths facilitating widespread copyright piracy.

Underpinned by this principle of accountability, the CDSM represents a watershed moment, not only for European intellectual property but more broadly in setting a new precedent for regulating key stakeholders and moderating certain content within the digital economy. This watershed moment rests on what the CDSM is designed to do in protecting online copyright.

Under article 17, what are now termed ‘online content sharing providers’ no longer enjoy the traditional protections of article 14 of the eCommerce Directive and may be required to take a far more active role in policing their services for online piracy.

One of the key rationales for the CDSM is said to be the ‘value gap’ between the advertising and other revenues that online content sharing providers generate from user generated content and the lack of commensurate remuneration for right holders. Keen to redress this imbalance, European policymakers have targeted article 14 of the eCommerce Directive and the early, minimal approach to regulating such providers under the ‘notice and takedown’ regime.

Under this regime, the onus was on right holders to monitor the services of online content sharing providers and notify them if they detected their content hosted on the website in question. Once notified, providers could avoid liability by acting ‘expeditiously’ to take that content down.

In practice, placing the onus on right holders to monitor the services of third party platforms proved less efficient than it might have been had the onus rested on the providers themselves. Moreover, even where a substantial number of notices were generated, removal required human intervention – a monumental task, given the scale at which providers obliged to act ‘expeditiously’ operate.

Over time, article 14 came to act as a useful shield for intermediaries, with the understanding of ‘expeditiously’ dovetailing with the difficult realities of physically removing masses of pirated content.

However, as these companies grew, so did the amount of pirated content made available, prompting copyright holders to lobby European policymakers to alter the status quo and remedy the ‘value gap’ in their favour. Two factors made this possible. The first was the advent of ‘automated content recognition’ software tools, such as Audible Magic, that can automatically filter and block protected content. The second was YouTube’s deployment – on a voluntary basis – of its equivalent Content ID software.

That a market leader voluntarily embraced such technology strengthened the hand of the entertainment industry in lobbying for greater enforcement obligations to be imposed on these sharing providers. The result was article 17 CDSM, aimed – as Advocate General Saugmandsgaard Øe observes – at forcing the remaining market actors to deploy equivalent technology, the net goal being to shift the policing of these services onto the providers themselves.

The Polish Challenge

The overriding concern for critics of article 17 is the potential scope of this policing function and the impact on internet users’ fundamental rights, in particular, the right to freedom of expression. This stems from the relative vagueness of article 17 itself. 

Under article 17, content providers are required, in the first instance, to seek authorisation from right holders to make their content available online, preferably by way of a licensing agreement that also covers the use(s) made by internet users on the platform.

Authorisation is important because article 17 establishes that such providers breach the ‘making available’ right (article 3, InfoSoc Directive 2001) when they give the public access to user generated content containing such works. Where, despite their ‘best efforts’, such providers are unable to obtain authorisation, they are required to take steps under article 17(4)(b) and (c) to avoid liability.

Under 17(4)(b) they are required to ensure the unavailability of specific works or other protected content for which right holders have provided the relevant and necessary information and to do so ‘in accordance with high industry standards of professional diligence’. 

Under 17(4)(c) content providers are required to implement a nuanced version of the notice and takedown regime – now the ‘notice and stay down regime’. Once a sufficiently substantiated notice is received, content providers must disable access to, or remove, the content from their platform – in line with article 14 of the eCommerce Directive – but significantly, they must ensure this content stays down, i.e. future re-uploads must be prevented. 
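
The shift is easier to see side by side. The following sketch – again purely illustrative, with invented names and exact hashing standing in for real fingerprinting – contrasts the old takedown step with the new stay-down obligation.

import hashlib

hosted_content: dict[str, bytes] = {}  # content currently live, keyed by ID
stay_down: set[str] = set()            # fingerprints that must not reappear

def handle_notice(content_id: str) -> None:
    """On a sufficiently substantiated notice: take down, then remember."""
    work = hosted_content.pop(content_id)            # takedown (article 14 style)
    stay_down.add(hashlib.sha256(work).hexdigest())  # stay down (article 17(4)(c))

def may_publish_reupload(upload: bytes) -> bool:
    """Future re-uploads of noticed content must be refused."""
    return hashlib.sha256(upload).hexdigest() not in stay_down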

Article 17(4)(b) and (c) form the crux of the Polish challenge. Neither makes express reference to automated content recognition software, referring instead to ‘best efforts’. Nevertheless, the concern with both provisions – but particularly article 17(4)(b) – is that the CDSM mandates a broadly based ex-ante system of preventative monitoring, with few safeguards or controls to stop content providers from over-blocking masses of uploads and impacting internet users’ fundamental rights.

The Polish challenge was therefore simple but powerful: article 17(4)(b) and (c) mandate automated preventative monitoring of internet user communications and violate internet users’ right to freedom of expression under article 11 of the European Charter of Fundamental Rights (EUCFR). Article 17(4)(b) and (c) should therefore be annulled or, where they cannot be severed from article 17, it should be annulled in full.

The Court of Justice Ruling

Rejecting the Polish challenge, the Court of Justice saved article 17 from annulment, but the ruling has done little to resolve the complexity surrounding how article 17 will work in practice.

Confirming the views of scholars and agreeing with Advocate General Saugmandsgaard Øe, the Court took the view that ‘best efforts’, read in conjunction with ‘in accordance with high industry standards of professional diligence’, could only lead to the conclusion that article 17 mandates the use of automated content recognition software.

Reiterating the importance of protecting the means of disseminating communications as well as their content, and the unique role of content sharing providers in supporting the ability to access and impart information under the right to freedom of expression, the Court recognised that article 17(4)(b) and (c) engage article 11 of the EUCFR.

In this regard, such filtering technology acts as a prior review of the dissemination of communications, and inherent in this process is a risk that lawful communications will be blocked. This is all the more acute because the process in question is automated and, in line with its decision in Schrems, the Court stressed the need for safeguards to provide guarantees against abuse.

The question then was whether, in recognising that article 17(4)(b) and (c) engage article 11 of the EUCFR, there was a sufficient justification for the interference under article 52(1) of the EUCFR in line with the norms for determining the necessity and proportionality of any limitation of a right therein. 

The first issue to be determined was whether the scope of the preventative monitoring obligation was sufficiently prescribed by law. Drawing on the case law of the European Court of Human Rights (ECtHR), the Court observed that prior restraints are possible for rights protected under article 11 EUCFR – as is the case under article 10 of the European Convention on Human Rights – but require a particularly tight legal framework (ECtHR: Yildirim).

Although article 17(4)(b) and (c) are prescriptive only insofar as ‘best efforts’ must be employed by content sharing providers, the Court – again drawing on the ECtHR – confirmed the view that the terms of a limitation on a fundamental right may be cast in open terms so as to keep pace with changing circumstances (ECtHR: Delfi).

Moreover, the ‘best efforts’ phrasing also conforms to the Court of Justice’s own case law surrounding intermediary obligations – in particular, that intermediaries are free to determine how they comply with obligations under the broader copyright acquis relative to their resources (Telekabel).

The next issue was similarly clear cut for the Court – that article 17 is necessary and appropriate in line with the requirements of article 52(1) EUCFR. To this end, article 17 is aimed at facilitating the protection of intellectual property and achieving a fairer marketplace for right holders. In view of the fact that alternatives, i.e. traditional enforcement measures that do not rely on automated content recognition, are not as effective, article 17 satisfies article 52(1) as regards necessity. The limitation of internet users’ rights under article 11 EUCFR is therefore justified under article 52(1) EUCFR as a necessary limitation to protect the rights and freedoms of others.

The determinative question then was whether the limitation under article 17(4)(b) and (c) engaged the ‘essence’ of article 11 EUCFR. If so, then the limitation would fall foul of article 52(1) EUCFR on this basis – even if it can be deemed appropriate and necessary. 

Whether the essence of a right is engaged comes down to a proportionality evaluation of the limitation in question relative to that right. In this context, the right to freedom of expression is shaped not only by the right to receive and impart information generally, but also by the limitations and exceptions to copyright under article 5 of the InfoSoc Directive 2001.

As the Court confirmed, if an automated software solution cannot adequately distinguish between unlawful content and lawful uses of copyrighted material within a communication, then, by default, such a solution would not comply with article 11 EUCFR, read in conjunction with article 52(1) – applying the seminal 2011 case Scarlet v Sabam.

Nevertheless, focussing on the safeguards present in article 17, the Court was of the view that the essence of the right to freedom of expression was not undone by article 17(4)(b) and (c). In this regard, the Court focussed on article 17(7) through to article 17(9). 

Article 17(7) mandates that, in cooperating with each other, right holders and content providers are not to block lawful communications, i.e. those that make use of content in line with the limitations and exceptions under article 5 of the InfoSoc Directive. According to the Court, what is important here is that article 17(7) does not set the threshold at ‘best efforts’ but rather prescribes a specific result.

In turn, a narrow interpretation of filtering under article 17(4) is preferred. Right holders and content sharing providers alike are therefore on notice that blocking lawful communications of this type will engage article 11 EUCFR, read in conjunction with article 5 of the InfoSoc Directive, and fall outside the liability exemption of article 17 CDSM.

Article 17(8) provides that no general monitoring should occur in order to satisfy article 17. General monitoring is prohibited under article 15 of the eCommerce Directive and is considered a cornerstone of internet governance in the European Union.

In line with the prohibition on general monitoring – i.e. the indiscriminate scanning of all communications, with blocking triggered whenever a data match occurs – the Court ruled that no filtering can prevent the upload of content that requires an independent assessment of whether it is lawful or not.
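
One way to picture the resulting filtering posture – on the assumption, and it is only an assumption, that match confidence can be scored at all – is a three-way rule: block only manifest matches, refer doubtful cases for independent assessment, and publish the rest. The thresholds below are invented for illustration.

def screen(match_score: float) -> str:
    """match_score: similarity between an upload and a reference work (0 to 1)."""
    MANIFEST = 0.95   # hypothetical cut-off for manifestly infringing content
    POSSIBLE = 0.50   # hypothetical cut-off for a possible, perhaps lawful, use

    if match_score >= MANIFEST:
        return "BLOCK"             # manifest match: ex-ante blocking permitted
    if match_score >= POSSIBLE:
        return "REFER_FOR_REVIEW"  # lawfulness needs independent (human) assessment
    return "PUBLISH"               # no meaningful match: content goes live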

In turn, with the onus on right holders to provide relevant and necessary information to this end, the Court was of the view that the blocking of content by sharing providers absent this information was unlikely, serving as a further safeguard.  

Moreover, article 17(9) requires member states to ensure that content sharing providers put in place an effective and expeditious complaint and redress mechanism, to ensure that internet users have access to out-of-court redress mechanisms that can provide impartial redress in the event of disputes, and to ensure that traditional judicial remedies also remain available to internet users.

Taken together, on these bases the Court was of the view that the European legislator included sufficient safeguards to protect the essence of the right to freedom of expression under article 11 EUCFR, satisfying article 52(1) EUCFR. Moreover, these safeguards were considered to be further enhanced by the requirements for stakeholder dialogue under article 17(10) and the requirement under article 17(8) that content providers make available information on their practices to give effect to article 17(4)(b) and (c).

Implementing Article 17 CDSM – Back to the Member States

Although the Court saved article 17 from annulment, the complex task of implementing the provision has simply been kicked back to the member states. While the Court is correct that the article contains safeguards against abuse, it is another matter entirely how these safeguards should be implemented in practice.

Indeed, while article 17(7) and 17(9) are clear that no implementation of article 17(4)(b) and (c) should prejudice the legitimate uses of works that internet users have under European law, it is difficult to see how this can be adequately implemented under a provision that mandates a software solution. 

In this regard, the software in question will be required to make a value judgement – in line with article 17(4)(b) in particular – that a use is either lawful or not, on an ex-ante basis. The difficulty of this task is compounded by two factors. The first is the Court’s recognition that the limitations and exceptions under article 5 of the InfoSoc Directive 2001 should now be considered internet user fundamental rights in this context, i.e. read in conjunction with article 11 EUCFR.

The second is that, while certain uses are now mandatory in member states as a result of article 17(7) CDSM – namely quotation, criticism, review, caricature, parody and pastiche – there remains fragmentation across the single market as to the implementation of the full set of limitations and exceptions under article 5 of the InfoSoc Directive 2001. Moreover, in some member states certain works are entirely in the public domain, while in others they remain within the scope of protection.

What the CDSM sets up, then, is potentially 27 different regimes as to what constitutes lawful use, and a need for algorithms sophisticated enough to make that judgement call beyond simple data matching, which would otherwise amount to general monitoring.

From a fundamental rights perspective, the CJEU’s ruling is to be welcomed for opting for a restrictive approach to giving effect to article 17, such that only minimal filtering will be permitted in line with article 17(7) and 17(9) CDSM, i.e. only manifestly infringing content may be blocked.

Nevertheless, when it comes to implementing article 17, the task facing the member states remains complex. In the first instance, the parameters of lawful targeted filtering in line with the CJEU’s ruling need to be established – in a regulatory environment where the technology lags behind the ambition.

In the second, the recognition that the complaint and redress mechanism and out-of-court settlement procedures are instrumental to the proportionality of article 17 invites questions as to oversight. The CDSM is silent in this regard, but it is apparent that, in recognising the rights-enhancing role of both redress apparatuses, a public oversight infrastructure will be required.

As a result of the Court’s ruling, article 17 will not lead to the sort of indiscriminate ex-ante upload filters that critics feared. Nevertheless, rather than ending the debate as to implementation, the Court has simply added a new dimension of complexity to the demands of article 17 vis-à-vis national implementation.

Indeed, the ‘cut and paste’ approach adopted by a number of member states – Ireland included – is now glaringly insufficient. Welcome to digital constitutionalism in practice!


Kevin O’Sullivan is an Assistant Professor in Private and Intellectual Property Law at the School of Law and Government at DCU.

The views expressed in this blog reflect the position of the author and not necessarily that of the Brexit Institute Blog.