How Should Facebook (and Twitter, and YouTube, and…) Decide What Speech To Allow?

Everywhere you turned in 2018, Facebook, Twitter, and other social media platforms were in the news for policing speech in ways that either delighted or infuriated users. YouTube refused to host certain sorts of videos altogether and "demonetized" others (meaning the channels couldn't run ads and earn revenue). Patreon, a service that allows people to pay creators directly, recently deplatformed Sargon of Akkad, a controversial anti-feminist, a move that sparked a public exodus by a number of "Intellectual Dark Web" folks, such as Jordan Peterson, Sam Harris, and Dave Rubin.

As a legal and practical matter, there seems to be no question that such services are free to disallow pretty much whatever content they choose. Earlier in the year, YouTube (owned by Google, which is in turn part of Alphabet) won a lawsuit brought by PragerU charging that the site was minimizing the reach of conservative points of view, if not outright censoring them. The case turned on whether YouTube should be treated as the equivalent of a government-licensed broadcast radio or television network and thus have to provide equal distribution to all participants. The ruling was unequivocal that YouTube and, by extension, other social media services are private businesses. From The Hollywood Reporter's writeup of the ruling:

Since the First Amendment free speech guarantee guards against abridgment by a government, the big question for U.S. District Court Judge Lucy Koh is whether YouTube has become the functional equivalent of a “public forum” run by a “state actor” requiring legal intervention over a constitutional violation.

Koh agrees with Google that it hasn’t been sufficiently alleged that YouTube is a state actor as opposed to a private party.

“Plaintiff does not point to any persuasive authority to support the notion that Defendants, by creating a ‘video-sharing website’ and subsequently restricting access to certain videos that are uploaded on that website, have somehow engaged in one of the ‘very few’ functions that were traditionally ‘exclusively reserved to the State,'” she writes. “Instead, Plaintiff emphasizes that Defendants hold YouTube out ‘as a public forum dedicated to freedom of expression to all’ and argues that ‘a private property owner who operates its property as a public forum for speech is subject to judicial scrutiny under the First Amendment.'”

That settles the large legal issue: The platforms can decide what stays and what goes. But most peeks into how they actually make those decisions are troubling. In August, The New York Times sat in with Twitter's "safety team" as it wrestled with banning Alex Jones and Infowars. (They eventually got bounced, albeit later than from Facebook, YouTube, and Spotify.) All agreed that "dehumanizing language" should not be tolerated, but the devil is in the details; accounts often get suspended or banned in ways that seem arbitrary or simply wrong. A few days ago, the Times reported on "Facebook's secret rule book for global political speech." The platform has about 7,500 moderators making these decisions, often about situations they know little about, and often relying on machine translation because they don't speak the languages being used.

The Times was provided with more than 1,400 pages from the rulebooks by an employee who said he feared that the company was exercising too much power, with too little oversight—and making too many mistakes.

An examination of the files revealed numerous gaps, biases and outright errors. As Facebook employees grope for the right answers, they have allowed extremist language to flourish in some countries while censoring mainstream speech in others….

The Facebook employees who meet to set the guidelines, mostly young engineers and lawyers, try to distill highly complex issues into simple yes-or-no rules. Then the company outsources much of the actual post-by-post moderation to companies that enlist largely unskilled workers, many hired out of call centers.

Those moderators, at times relying on Google Translate, have mere seconds to recall countless rules and apply them to the hundreds of posts that dash across their screens each day. When is a reference to “jihad,” for example, forbidden? When is a “crying laughter” emoji a warning sign?

It's easy to sympathize with the in-house censors, since the work they are tasked with is both unending and overwhelming. And there seem to be more and more calls to police speech, from both social justice warriors on the left and conservative trolls on the right (who are quick to say they'll report speech they find offensive even as they deride progressives as snowflakes who need to toughen up).

This is a disturbing development, and I think it should bother all libertarians. Yes, these services have the right to ban people or treat them unequally, and yes, in many cases, Facebook, Twitter, et al. are responding to consumer demand by shutting down this or that person, page, or account. But I think basically any speech short of true threats should be tolerated. Even discerning what counts as a genuine call for violence will create more than enough work for all the censors in the world. The public sphere of debate, discussion, and disagreement works better in a setting that is more open than closed. That holds true for the internet as a whole, and it holds for specific social media platforms too.

There's a doctrinaire market-friendly case to be made that if the platforms become too constrained and stultified, disgruntled users will create compelling alternatives. (The not-great right-wing site Gab is one such attempt, limping along after being refused service by web-hosting company GoDaddy and online payment service PayPal.) I buy that argument to a large degree, but we're losing a larger culture of free speech, pluralism, and tolerance with every purge of accounts on every platform. This month it's Sargon of Akkad or Alex Jones; in 2019, who knows who it will be?

The initial beauty of most of these services was precisely that they let users tailor their own experience: You don't need to bump up against the Alex Joneses of the world unless you want to. Individuals can mute, block, and ignore people who bother them. We now seem to be at a place culturally where many people think that just isn't enough anymore. A decade-plus ago, one of the big fears about user-controlled newsfeeds was that individuals would create what MIT's Nicholas Negroponte called "The Daily Me," a completely personalized newspaper full of only the content its reader actually wanted. Critics such as Cass Sunstein fretted that such a turn of events would undermine the "neglected requirements of a system of free expression: unanticipated, unchosen encounters and a range of shared experiences."

It was only a few years ago that such services were rightly celebrated for the roles they played in facilitating the Arab Spring. "We use Facebook to schedule the protests," one activist told Mic, "and [we use] Twitter to coordinate, and YouTube to tell the world." That seems like a different planet, doesn't it? As we slide into 2019, Sunstein's fears are more likely to come true at the platform level than at the individual one.

Related: Prior to celebrating Reason‘s 50th anniversary in November, we hosted a debate that asked, “Should Facebook and Twitter Censor Themselves?” The participants were Renegade University founder Thaddeus Russell and lawyer and blogger Ken White of Popehat. Take a look or a listen:

