
Federal Court tosses lawsuit claiming YouTube algorithms discriminate

This month, the tables turned in court, with YouTube itself accused of racial and sexual discrimination. A federal court was not convinced. Tim Cushing of Techdirt shares what happened…

Op-Ed by Tim Cushing of Techdirt

This is a bit of an oddity. We’ve seen lots of lawsuits against social media services filed by bigots who are upset their accounts have been banned or their content removed because, well, they’re loaded with bigotry. You know, “conservative views,” as they say.

This one goes the other direction. It claims YouTube’s moderation algorithm is the real bigot here, supposedly suppressing content uploaded by non-white users. The potential class action lawsuit was filed in 2020, alleging YouTube suppressed certain content based on the creators’ racial or sexual identity, as Cal Jeffrey reports for Techspot (which, unlike far too many major news outlets, embedded the decision in its article).

“Instead of ‘fixing’ the digital racism that pervades the filtering, restricting, and blocking of user content and access on YouTube, Defendants have decided to double down and continue their racist and identity-based practices because they are profitable,” the lawsuit stated.

Whew, if true. YouTube has taken a lot of heat over the years for a lot of things related to its content moderation and recommendation algorithms (ranging from showing kids stuff kids shouldn’t see to the previously mentioned “censorship” of “conservatives”), but rarely has it been accused of being racist or bigoted. (That’s Facebook’s territory.)

While it’s not the first lawsuit of this particular kind we’ve covered here at Techdirt (that one was tossed in early July), this case certainly isn’t going to encourage more plaintiffs to make this sort of claim in court. (Well, one would hope, anyway…) While the plaintiffs do credibly allege something weird seems to be going on (at least in terms of the five plaintiffs), they fail to allege this handful of anecdotal observations is evidence of anything capable of sustaining a federal lawsuit.

From the decision [PDF]:

The plaintiffs in this proposed class action are African American and Hispanic content creators who allege that YouTube’s content-moderating algorithm discriminates against them based on their race. Specifically, they allege that their YouTube videos are restricted when similar videos posted by white users are not. This differential treatment, they believe, violates a promise by YouTube to apply its Community Guidelines (which govern what type of content is allowed on YouTube) “to everyone equally—regardless of the subject or the creator’s background, political viewpoint, position, or affiliation.” The plaintiffs thus bring a breach of contract claim against YouTube (and its parent company, Google). They also bring claims for breach of the implied covenant of good faith and fair dealing, unfair competition, accounting, conversion, and replevin.

YouTube’s motion to dismiss is granted. Although the plaintiffs have adequately alleged the existence of a contractual promise, they have not adequately alleged a breach of that promise. The general idea that YouTube’s algorithm could discriminate based on race is certainly plausible. But the allegations in this particular lawsuit do not come close to suggesting that the plaintiffs have experienced such discrimination.

As the court notes, the plaintiffs have been given several chances to salvage this suit. It was originally presented as a First Amendment lawsuit and was dismissed because YouTube is not a government entity. Amended complaints were submitted as the lawsuit ran its course, but none of them managed to surmount the lack of evidence the plaintiffs presented in support of their allegations.

Shifting the allegations to California contract law hasn’t made the lawsuit any more winnable. Plaintiffs must show courts there’s something plausible about their allegations, and what was presented in this case simply doesn’t cut it. Part of the problem is the sample size. And part of the problem is whatever the hell this is:

The plaintiffs rely primarily on a chart that purports to compare 32 of their restricted videos to 58 unrestricted videos posted by white users. To begin with, the plaintiffs have dug themselves into a bit of a hole by relying on such a small sample from the vast universe of videos on YouTube. The smaller the sample, the harder it is to infer anything other than random chance. But assuming a sample of this size could support a claim for race discrimination under the right circumstances, the chart provided by the plaintiffs is useless.

As a preliminary matter, 26 of the 58 comparator videos were posted by what the complaint describes as “Large Corporations.” The complaint alleges that “Large Corporation” is a proxy for whiteness. See Dkt. No. 144 at 26–31; Dkt. No. 144 at 4 (defining, without support or elaboration, users who “Defendants identify or classify as white” as “including large media, entertainment, or other internet information providers who are owned or controlled by white people, and for whom the majority of their viewership is historically identified as white”). The plaintiffs have offered no principled basis for their proposition that corporations can be treated as white for present purposes, nor have they plausibly alleged that YouTube actually identifies or classifies corporations as white.

Drilling down into the specifics doesn’t help the plaintiffs either.

In terms of content, many of the comparisons between the plaintiffs’ restricted videos and other users’ unrestricted videos are downright baffling. For example, in one restricted video, a plaintiff attributes his recent technical difficulties in posting videos on YouTube to conscious sabotage by the company, driven by animus against him and his ideas. The chart in the complaint compares this restricted video with a tutorial on how to contact YouTube Support. In another example, the chart compares a video where a plaintiff discusses the controversy surrounding Halle Bailey’s casting as the Little Mermaid with a video of a man playing—and playfully commenting on—a goofy, holiday-themed video game.

Other comparisons, while perhaps not as ridiculous as the previous examples, nonetheless hurt the plaintiffs. For instance, the chart compares plaintiff Osiris Ley’s “Donald Trump Makeup Tutorial” with tutorials posted by two white users likewise teaching viewers how to create Trump’s distinctive look. But there is at least one glaring difference between Ley’s video and the comparator videos, which dramatically undermines the inference that the differential treatment was based on the plaintiff’s race. About a minute and a half into her tutorial, Ley begins making references to the Ku Klux Klan and describing lighter makeup colors as white supremacy colors. Ley certainly appears to be joking around, likely in an effort to mock white supremacists, but this would readily explain the differential treatment by the algorithm.

And so it goes for other specific examples offered by the plaintiffs:

Only a scarce few of the plaintiffs’ comparisons are even arguably viable. For example, there is no obvious, race-neutral difference between Andrew Hepkins’s boxing videos and the comparator boxing videos. Both sets of videos depict various boxing matches with seemingly neutral voiceover commentary. The same goes for the comparisons based on Ley’s Halloween makeup tutorial. It is no mystery why Ley’s video is restricted—it depicts graphic and realistic makeup wounds. But it is not obvious why the equally graphic comparator videos are not also restricted. YouTube suggests the difference lies in the fact that one of the comparator videos contains a disclaimer that the images are fake, and the other features a model whose playful expressions reassure viewers that the gruesome eyeball dangling from her eye socket is fake.

But the content is sufficiently graphic to justify restricting impressionable children from viewing it. These videos are the closest the plaintiffs get to alleging differential treatment based on their race. But the complaint provides no context as to how the rest of these users’ videos are treated, and it would be a stretch to draw an inference of racial discrimination without such context. It may be that other similarly graphic makeup videos by Ley have not been restricted, while other such videos by the white comparator have been restricted. If so, this would suggest only that the algorithm does not always get it right. But YouTube’s promise is not that its algorithm is infallible. The promise is that it abstains from identity-based differential treatment.

And that’s part of the unsolvable problem. Content moderation at this scale can never be perfect. What these plaintiffs see as discrimination may be nothing more than imperfections in a massive system. A sample size of 32 videos, compared to the hundreds of hours of video uploaded to YouTube every minute, isn’t large enough to even be charitably viewed as a rounding error.
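
To put rough numbers on that sample-size point, here’s a back-of-the-envelope check. The counts below are invented purely for illustration and are not taken from the complaint or the decision; the sketch simply shows that with groups of 32 and 58 videos, even a restriction-rate gap of nearly two to one can’t be distinguished from plain chance.

```python
# Hypothetical illustration only: the counts are made up, not drawn from the
# case. Question: if restrictions were handed out at the same underlying rate
# to both groups, how often would a gap this large appear by chance alone?
# (One-sided Fisher exact test via the hypergeometric distribution,
# standard library only.)
from math import comb

restricted_a, total_a = 10, 32   # "plaintiff" group: ~31% restricted (made up)
restricted_b, total_b = 10, 58   # "comparator" group: ~17% restricted (made up)

n_total = total_a + total_b                  # 90 videos overall
n_restricted = restricted_a + restricted_b   # 20 restricted overall

# Probability that group A ends up with at least `restricted_a` of the
# restricted videos if restrictions were scattered at random across all 90.
p_value = sum(
    comb(n_restricted, k) * comb(n_total - n_restricted, total_a - k)
    for k in range(restricted_a, min(n_restricted, total_a) + 1)
) / comb(n_total, total_a)

print(f"one-sided p-value: {p_value:.3f}")
# Roughly 0.1: a near-2x difference in restriction rates between samples this
# small is still entirely consistent with random noise.
```

In other words, at this scale a handful of side-by-side comparisons can’t separate bias from noise, which is exactly the hole the court says the plaintiffs dug for themselves.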

Because of the multitude of problems with the lawsuit, the court doesn’t even reach the Section 230 immunity defense raised by YouTube. The final nail in the lawsuit’s coffin involves the timing of the accusations, some of which predate the YouTube Community Guidelines updates that specifically addressed non-discriminatory moderation efforts.

[T]hese alleged admissions [by YouTube executives] were made in 2017, four years before YouTube added its promise to the Community Guidelines. In machine-learning years, four years is an eternity. There is no basis for assuming that the algorithm in question today is materially similar to the algorithm in question in 2017. That’s not to say it has necessarily improved—for all we know, perhaps it has worsened. The point is that these allegations are so dated that their relevance is, at best, attenuated. 

Finally, these allegations do not directly concern any of the plaintiffs or their videos. They are background allegations that could help bolster an inference of race-based differential treatment if it were otherwise raised by the complaint. But, in the absence of specific factual content giving rise to the inference that the plaintiffs themselves have been discriminated against, there is no inference for these background allegations to reinforce.

As the court notes, there is the possibility YouTube’s algorithm behaves in a discriminatory manner, targeting non-white and/or non-heterosexual users. But an incredibly tiny subset of allegedly discriminatory actions that may demonstrate nothing more than a perception of bias is not enough to sustain allegations that YouTube routinely suppresses content created by users like these. 

Relying on a sampling this small is like looking out your window and deciding that because it’s raining outside of your house, it must be raining everywhere. And that’s the sort of thing courts aren’t willing to entertain as plausible allegations.
