
Are Big Tech platforms moderating the right content?

There is a line between harmful and helpful content that Big Tech platforms cannot always seem to find when it comes to moderation.

Op-ed by CORY DOCTOROW of the Electronic Frontier Foundation.

Pity the poor content moderator. Big Tech platforms expect their mods to correctly apply a set of rules to users in more than a hundred countries, in over a thousand languages. These users are clustered into literally millions of online communities, each with its own norms and taboos. 

What a task! Some groups will consider a word to be a slur, while others will use it as a term of affection. Actually, it’s even more confusing: some groups consider some words to be slurs when used by outsiders, but not by insiders, which means that a moderator has to understand which participants in a single group are considered insiders, and who is considered an outsider. 

Mods have to make this call in languages they speak imperfectly, or not at all, assisted by a machine translation of unknowable quality. 

Small wonder that trust and safety experts can’t agree when to remove content, when to label it, and when to leave it be. Moderation at scale is an impossible task. Moderators don’t just miss a torrent of vile abuse and awful scams; they also remove Black users’ discussions of racism for being racist, suspend users who report dangerous conspiracy-fodder for pushing conspiracies, punish scientists who debunk vaccine misinformation for spreading misinformation, block game designers’ ads because they contain the word “supplement,” and remove comments praising a cute cat as a “beautiful puss.”

Everyone hates the content moderation on Big Tech platforms. Everyone thinks they’re being censored by Big Tech. They’re right.

Every community has implicit and explicit rules about what kinds of speech are acceptable, and metes out punishments to people who violate those rules, ranging from banishment to shaming to compelling the speaker to silence. You’re not allowed to get into a shouting match at a funeral, you’re not allowed to use slurs when addressing your university professor, you’re not allowed to explicitly describe your sex-life to your work colleagues. Your family may prohibit swear-words at Christmas dinner or arguments about homework at the breakfast table. 

One of the things that defines a community is its speech norms. In the online world, moderators enforce those “house rules” by labeling or deleting rule-breaking speech, and by cautioning or removing users.

Doing this job well is hard even when the moderator is close to the community and understands its rules. It’s much harder when the moderator is a low-waged employee following company policy at a frenzied pace. Then it’s impossible to do well and consistently.

It’s no wonder that so many people, of so many different backgrounds and outlooks, are unhappy with Big Tech platforms’ moderation choices.

Which raises the question: why are they still using Big Tech platforms?

Big Tech platforms enjoy “network effects”: the more people join an online community, the more reasons there are for others to sign up. You join because you want to hang out with the people there and then others join because they want to hang out with you.

This network effect also creates a “switching cost” – that’s the price you pay for leaving a platform behind. Maybe you’ll lose the people who watch your videos, or the private forum for people struggling with the same health condition as you, or contact with your distant relations, half a world away.

These people are why so many of us put up with the many flaws of major social media platforms. It’s not that we value the glorious free speech of our harassers, nor that we want our views “fact-checked” or de-monetized by unaccountable third parties, nor that we want copyright filters banishing the videos we love, nor that we want juvenile sensationalism rammed into our eyeballs or controversial opinions buried at the bottom of an impossibly deep algorithmically sorted pile.

We tolerate all of that because the platforms have taken hostages: the people we love, the communities we care about, and the customers we rely upon. Breaking up with the platform means breaking up with those people. 

It doesn’t have to be this way. The internet was built on protocols, not platforms: the principle of running lots of different, interconnected services, each with its own “house rules” based on its own norms and goals. These services could connect to one another, but they could also block one another, allowing communities to isolate themselves from adversaries who wished to harm or disrupt their fellowship.

In fact, there are millions of people energetically trying to create an internet that looks that way. The fediverse is a collection of free/open software projects designed to replace centralized services like Facebook with decentralized alternatives that work in much the same way, but delegate control to the communities they serve. Groups of friends, co-ops, startups, nonprofits and others can host their own Mastodon or Diaspora instances and connect to as many of the other servers as will connect with them, based on their preferences and needs.
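To make that concrete, here is a minimal sketch of the federation model the fediverse is built on. The class, methods, and domain names are all invented for illustration; they are not drawn from Mastodon’s or Diaspora’s actual code:

```python
# Illustrative sketch of a per-instance federation policy. The class and
# method names are invented for this example; real fediverse software
# (Mastodon, Diaspora) implements far richer versions of these ideas.

from dataclasses import dataclass, field


@dataclass
class Instance:
    """One community's server, with its own house rules."""
    domain: str
    blocked_domains: set = field(default_factory=set)  # servers this community refuses to federate with
    banned_words: set = field(default_factory=set)      # content this community's mods won't carry

    def block_server(self, domain):
        """Defederate: stop exchanging posts with another server entirely."""
        self.blocked_domains.add(domain)

    def accepts_post(self, origin_domain, text):
        """Apply this community's norms to an incoming post from another server."""
        if origin_domain in self.blocked_domains:
            return False
        return not any(word in text.lower() for word in self.banned_words)


# A community sets its own rules...
knitting = Instance("knitting.example", banned_words={"casino spam"})

# ...and can isolate itself from servers that host harassment.
knitting.block_server("harassment-haven.example")

print(knitting.accepts_post("gamers.example", "Love this sweater pattern!"))   # True
print(knitting.accepts_post("gamers.example", "CASINO SPAM, click here"))      # False
print(knitting.accepts_post("harassment-haven.example", "anything at all"))    # False
```

The point of the sketch is simply that every instance carries its own blocklist and its own norms; there is no central trust and safety department deciding for everyone at once.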

The fediverse is amazing, but it’s not growing the way many of us hoped. Even though millions of people claim to hate the moderation policies and privacy abuses of Facebook, they’re not running for the exits. Could it be that they secretly like life on Facebook?


That’s one theory. 

Another theory, one that requires much less of an imaginative leap, is that while people hate Facebook, they love the people they would have to leave behind even more.

Which raises an obvious possibility: what if we made it possible for people to leave Facebook without being cut off from their friends?

Enter “interoperability.”

Interoperability is the act of plugging something new into an existing product or service. Interop is why you can send email from a Gmail account to an Outlook account. It’s why you can load any website on any browser. It’s why you can open Microsoft Word files with Apple Pages. It’s why you can use an iPhone connected to Verizon to call an Android user on T-Mobile.

Interoperability is also why you can switch between these services. Throw away your PC and buy a Mac? No problem, Pages will open all the Word documents you created back when you were a Microsoft customer. Switch from Android to iPhone, or T-Mobile to Verizon? You can still call your friends and they can still call you – and they won’t even know that anything’s changed unless you tell them. 

Proposals in the US (the ACCESS Act) and the EU (the Digital Markets Act) aim to force the largest platforms to allow interoperability with their services. Though the laws differ in their specifics, in broad strokes they would both require platforms like Facebook (which claims it is now called “Meta”) to let startups, co-ops, nonprofits, and personal sites connect to it so that Facebook users can leave the service without leaving behind their friends.

Under these proposals, you could leave Facebook and set up or join a small service. That service would set its own moderation policies but also interoperate with Facebook. You could send messages to users and groups on Facebook, and those messages would also be shared with people on other small services who were members of the same groups as you.

This moves moderation choices closer to users and further from Facebook. If the mods on your service allow speech that’s blocked on Facebook, you and the others on your service will see it, though Facebook’s moderators may block it and users there won’t see it.

Likewise, if there’s some speech Facebook allows that you and your community don’t want to see, the mods on your service can block it, either by removing messages or blocking users from communicating with your server. 
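Here is a hedged sketch of how that division of labor might look. The services, rules, and function names below are hypothetical; neither the ACCESS Act nor the Digital Markets Act prescribes any particular code, and no real platform API is being described:

```python
# Hypothetical sketch: one shared group, members on different services,
# each service applying its own moderation rules to the same messages.

class Service:
    def __init__(self, name, allows):
        self.name = name        # e.g. "facebook" or a small co-op server
        self.allows = allows    # this service's own moderation rule: (author, text) -> bool
        self.timeline = []      # what this service's members actually see

    def deliver(self, author, text):
        # Each service's own mods decide what its members see.
        if self.allows(author, text):
            self.timeline.append(f"{author}: {text}")


def post_to_group(author, text, member_services):
    """Relay one message to every service whose members belong to the group."""
    for service in member_services:
        service.deliver(author, text)


# A big platform's rules and a small community's rules, side by side (hypothetical):
facebook = Service("facebook", allows=lambda author, text: "forbidden-on-fb" not in text)
coop = Service("knitters-coop", allows=lambda author, text: author != "known-harasser")

post_to_group("alice", "Meeting moved to 7pm", [facebook, coop])    # everyone sees it
post_to_group("alice", "a forbidden-on-fb meme", [facebook, coop])  # only the co-op sees it
post_to_group("known-harasser", "hello", [facebook, coop])          # only Facebook sees it

print(facebook.timeline)  # ['alice: Meeting moved to 7pm', 'known-harasser: hello']
print(coop.timeline)      # ['alice: Meeting moved to 7pm', 'alice: a forbidden-on-fb meme']
```

The same message flows everywhere it’s welcome, but the decision about what’s welcome lives with each community’s own mods rather than with a single company.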

Some people want to fix Big Tech platforms: get them to moderate better and more transparently. We get it. There’s lots of room for improvement there. We even helped draft a roadmap for improving moderation: the Santa Clara Principles.

But fixing Big Tech platforms is a strategy that only works when it works. When it fails, it fails really badly. If all the conversations you need to be a part of are on a platform that won’t fix itself and you’re being harmed by undermoderation or overmoderation, you’re stuck.

There’s a better way. Interoperability puts communities in charge of their own norms, without having to convince a huge “trust and safety” department of a tech company – possibly a company in a different country, where no one speaks your language or understands your context – that they’ve missed some contextual nuance in their choices about what to leave up and what to delete. 

Frank Pasquale’s Tech Platforms and the Knowledge Problem describes two different approaches to tech regulation: “Hamiltonians” and “Jeffersonians” (the paper was published in 2018, and these were extremely zeitgeisty labels!).

Hamiltonians favor “improving the regulation of leading firms rather than breaking them up,” while Jeffersonians argue that the “very concentration (of power, patents, and profits) in megafirms” is itself a problem, making them both unaccountable and dangerous.

That’s where we land. We think that technology users shouldn’t have to wait for Big Tech platform owners to have a moment of enlightenment that leads to their moral reform, and we understand that the road to external regulation is long and rocky, thanks to the oligopolistic power of cash-swollen, too-big-to-fail tech giants.

We are impatient. Too many people have already been harmed by Big Tech platforms’ bad moderation calls. By all means, let’s try to make the platforms better, but let’s also make them less important, by giving people technological self-determination. We all deserve to belong to online communities that get to make their own calls on what’s acceptable and what’s not.
