
DMCA’s Takedown Procedure Is A Total Mess. Are Automated Takedowns To Blame?

As the Copyright Office seeks public comment on how to reform copyright law, DMCA takedown notices have been drawing particular attention. Many of the proposed fixes rely on automated takedowns, which critics say would actually make the problem worse.

_________________________________

By Mike Masnick of Techdirt

Both Congress and the Copyright Office continue to explore possible ways to reform copyright laws, and one area of interest to a lot of people is reforming the whole "notice and takedown" process in the DMCA. The legacy players have been pushing for a ridiculously stupid concept they're calling "notice and staydown" in which they argue that once there's a notice for a particular piece of content, a platform needs to proactively block any copies of that content from ever being uploaded again. This is dumb and dangerous for a variety of reasons, starting with the fact that it would place tremendous burdens on smaller players, while locking in the more dominant large platforms that can build or buy systems to handle this. But, even more importantly, copyright infringement is extremely context dependent. The same content may be infringing in one context, while protected fair use in another. But a notice and staydown process would completely wipe out the fair use possibilities, and potentially violate the First Amendment (remember, the Supreme Court itself has declared fair use to be the "safety valve" that allows copyright law to fit with the First Amendment).

On the flipside, plenty of folks on the platform side, as well as in the free speech and civil liberties communities, are quite reasonably worried about the widespread abuse of the DMCA takedown process to censor content for no legitimate reason at all. We've written tons of stories about DMCA takedown abuse for the purpose of censorship, and there remains no real punishment for filing false DMCA notices. The part of the law that covers false notices has been effectively neutered by the courts so far.

As we noted earlier this year, the Copyright Office is asking for public comment on the whole notice and takedown situation, and those comments are due on April 1st. I'm sure we'll see a bunch of submissions posted publicly around then, but one that has already come in is from ITIF, a DC-based think tank that pretends to be pro-innovation but has always been incredibly anti-internet on a number of issues. It was the think tank that more or less came up with the ideas that became SOPA. Its somewhat laughable submission claims that abuse of the DMCA takedown process is rare, but does so in a totally misleading way:

While some notable false negatives generate headlines, the occurrence of this type of error is actually quite rare. A study that analyzed the number of section 512 notices sent by the U.S. film industry during six months in 2013 found that of the 25 million notices these companies sent, the relevant online intermediary only received eight counter notices. A more recent review of the notices sent to Twitter shows a similarly low number of counter notices. From July to December 2015, Twitter received 35,000 notices, but only 121 counter notices. And during the prior six months, Twitter received 18,000 notices and only 27 counter notices.

But arguing that the only way to measure abuse is by counting counternotices is ridiculous. It suggests either a deliberate attempt to mislead by the authors of the ITIF submission, or complete ignorance of how the DMCA takedown and counternotice process actually works. In so many cases of false takedowns, people don't file counternotices. The process is made to appear complicated and dangerous. Many platforms make it clear (correctly) that filing a counternotice can lead to you being sued in federal court, where you may face statutory damages awards of up to $150,000 per work infringed. But the folks at ITIF are apparently so out of touch that they don't even realize that this might scare off the vast, vast, vast majority of people who are on the receiving end of bogus takedown notices. I personally know a bunch of people who have received such notices and have no interest in going down the counternotice path. The vast majority of abusive takedown stories that we get sent at Techdirt involve people who don't want to file a counternotice (and often, they hope that press coverage will fix the situation instead). To focus solely on counternotices as a metric for abusive or mistaken takedowns is ludicrous and, once again, should cement ITIF's status as an organization that has no credibility on this issue.

Even worse, the ITIF submission encourages more automated takedown efforts, saying that such systems are wonderful, and brushing off any concerns about false takedowns:

The best way to minimize the cost of sending and responding to so many notices of infringement is to use automated techniques. In particular, online service providers can use automated filtering systems that check content as it is uploaded to stop a user from reposting infringing content.

In short: automated notice-and-staydown. Incredibly, in support of this, they point to a paper from researchers Joe Karaganis and Jennifer Urban entitled The Rise of the Robo Notice, which actually takes issue with such automated takedowns. And, with a bit of perfect timing, Urban and Karaganis, along with Brianna Schofield, have just released a new paper on the effectiveness of notice and takedown under the DMCA that basically demonstrates how totally ignorant the folks at ITIF are in arguing for such takedowns (and in claiming that they rarely go wrong).

The research report is long and detailed, and covers a variety of different areas, but on the issue of automated takedowns, the authors dug deep into the numbers. The abusive and faulty use of automated takedowns is not minimal, as ITIF suggests, but massive and hugely problematic:

One in twenty-five of the takedown requests (4.2%) were fundamentally flawed because they targeted content that clearly did not match the identified infringed work. This extrapolates to approximately 4.5 million requests suffering from this problem across the entire six-month dataset.

Nearly a third of takedown requests (28.4%) had characteristics that raised clear questions about their validity, based solely on the facial review and comparisons we were able to conduct. Some had multiple potential issues. While these requests cannot be described as categorically invalid without further investigation, they suggest that a very substantial number of requests in the six-month dataset—approximately 30.1 million—would benefit from human review.

This “questionable” set included requests that raised questions about compliance with the statutory requirements (15.4%), potential fair use defenses (7.3%), and subject matter inappropriate for DMCA takedown (2.3%), along with a small handful of other issues.

In other words, problems with automated takedown systems are not a tiny issue. They're an epidemic that impacts millions upon millions of pieces of content, much of which may be protected speech that gets shut down due to abuse of the law. Later in the report, the authors present a graphical breakdown of the types of errors they found in going through the Lumen Database (formerly the Chilling Effects database).

Notice that these are not small numbers we're talking about, but millions of faulty notices. So much for ITIF's claims that it's a barely noticeable issue. As the report notes: "the notice and takedown process, as practiced in our cohort of notices, imposes a high burden on those mistaken targets."
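To put the report's percentages in context, here is a minimal back-of-the-envelope sketch in Python. The total six-month dataset size is not stated in the excerpts above, so it is inferred from the quoted figures (about 4.5 million flawed requests at 4.2%) and should be treated as approximate:

```python
# Rough check of the report's extrapolations from percentages to counts.
# The six-month dataset size is inferred from the quoted figures
# (4.5 million flawed requests at 4.2%), so it is approximate.

flawed_share = 0.042        # "one in twenty-five" fundamentally flawed
questionable_share = 0.284  # requests with questionable characteristics

flawed_count = 4.5e6        # approximate count quoted in the report
total_requests = flawed_count / flawed_share      # implied dataset size

questionable_count = questionable_share * total_requests

print(f"Implied six-month dataset: ~{total_requests / 1e6:.0f} million requests")
print(f"Questionable requests:     ~{questionable_count / 1e6:.1f} million")
# Prints roughly 107 million total requests and ~30 million questionable
# requests, in the same ballpark as the ~30.1 million figure quoted above
# (the small gap comes from rounding in the published numbers).
```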

The report then condemns the use of automated systems, completely contradicting ITIF's recommendations (which, amusingly, were based on a misrepresentation of the same authors' work):

The rise of mass notice sending via automated systems raises immediate questions of accuracy and due process. Human scrutiny of underlying claims necessarily decreases when by-hand infringement detection, noticing, and review are replaced by automated systems. Understanding how this may affect the accuracy of takedowns was a major question in our research.

We found reason to be concerned when human review is replaced with a high degree of automation. The automated notices we examined in Study 2 were, in the main, sent by sophisticated rightsholders (or their agents) with a strong knowledge of copyright law, yet nearly a third of the notices raised questions about their validity, and one in twenty-five apparently targeted the wrong material entirely.

They also note that the lack of any consequences for those who send bogus notices means there is no check on such notices, and that they're only likely to increase over time.

Furthermore, the authors note that the due process concerns raised by this system are very serious. In fact, they point out that (contrary to ITIF's silly interpretation) the lack of counternotices is a sign that the system is broken:

As a procedural matter, material that is targeted by a takedown request is often removed before the target is given the opportunity to respond; this was confirmed in interviews with OSPs and rightsholders. Yet all available evidence suggests that counter notices are simply not used. It is indicative of the problem that the most memorable uses of counter notices for our rightsholder respondents were a few bad-faith, bogus counter notices from overseas pirates. Given the high numbers of apparently unchallenged takedown mistakes that showed up in our quantitative studies, we would expect to see higher numbers of appropriate, good-faith counter notices if the process were working as intended.

Unfortunately, under current practice, there seems to be little chance of this changing. Study 1 OSPs described hesitating to encourage targeted users to send counter notices, even when it seemed appropriate, for fear of creating liability risk for targets and themselves. Unbalanced liability standards—fear of suit by copyright holders but not users—creates incentives for OSPs to take down material. Moreover, some of the main targets of large-scale requests—search services—have no service relationship with targets or any duty to inform them that links are being removed, making it highly unlikely that the target would know to send a counter notice. Further, as we discuss in recommendations, section 512 currently leaves unclear whether search engines are protected for putback like hosting entities, exacerbating the challenge. Overall, the counter-notice process’s procedural features make it difficult for OSPs to use it as intended.

The report also recommends some fairly minor modifications to the law, including a stronger "under penalty of perjury" requirement (right now it really only applies to the claim that you hold the rights to the copyright in question, rather than to the rest of the notice), allowing service providers to put content back up immediately after receipt of a counternotice, and (most importantly) giving some teeth to 512(f), the part of the law that covers bogus notices. The authors also suggest serious statutory damages reform, so that people issuing counternotices aren't scared off by that stupid $150,000 statutory damages number that always gets thrown around.

There's a lot more in the report, and it's well worth reading. Hopefully the Copyright Office (and Congress!) pay attention as they consider what to do about the notice and takedown process.
