I don’t know the correct level of content moderation by Facebook, Twitter, Google, or Amazon. And neither do you.
Sometimes I can pinpoint what looks to me like an obvious misstep: Facebook’s decision to block a New York Post story about Hunter Biden’s laptop in the weeks before the 2020 presidential election, for instance, or Amazon’s refusal to carry a small number of books about trans issues without adequately explaining its decision. Or tweets containing threats of violence left up indefinitely while mere tasteless jokes get swiftly removed.
But I also know deciding what and whom to allow on your platform is a hard problem. Scale is hard: I know I’m not seeing millions of pieces of spam eliminated, bots blocked, irrelevant content filtered, duplicates removed. Consistency is hard: I know sometimes what’s in my feed is the work of a robot doing a good job following bad directions, and sometimes it’s a human being doing a bad job following good directions. The application of Hanlon’s razor is almost certainly called for in many cases of perceived bias: “Never attribute to malice that which is adequately explained by stupidity.”
The difficulty of this task hasn’t stopped elected politicians, think-tankers, and pundits from looking for ways to punish tech companies for doing it wrong. These folks disagree about what is broken in the status quo, but the calls to action are no less strident for all that.
For every person arguing against moderation on the grounds of ideological bias, there is someone else pushing for more aggressive moderation to control rampant hate speech or “disinformation”—which can mean everything from objectively false claims to arguments that some users consider subjectively offensive. There are those who find the profit-making aspect of the whole industry distasteful, and there are those who fret about the difficulties faced by would-be competitors due to the sheer size of the companies in question.
The push to crack down on Big Tech is both bipartisan and fiercely politically tribal—the worst of both worlds.
The proposed solutions are numerous, and nearly all involve aggressive government action: break up some or all Big Tech firms via antitrust, remove longstanding liability protections by rewriting Section 230 of the Communications Decency Act, treat social media platforms as public utilities or common carriers with all the constraints that entails, reinstate the Fairness Doctrine, and much more.
The fact that a firm is large is not evidence that it is a monopoly. And as Elizabeth Nolan Brown details in “The Bipartisan Antitrust Crusade Against Big Tech,” pushes to employ antitrust remedies against tech companies have a checkered history at best. They are too often expensive, time-consuming, and reactionary efforts that end up lagging behind market solutions while actively harming consumers.
There is one clear monopoly in this ecosystem, however: the state. Any legislative or regulatory restriction on Big Tech will not be a triumph of the oppressed over the powerful. It will be yet another instance of the already powerful wielding the state’s machinery to compel private companies to do what they want, likely at the expense of their market competitors or political enemies. Such reforms are far more likely to constitute censorship, in the strictest sense, than to reduce it.
It has become fashionable on both the left and the right to argue that Big Tech is now more powerful than a government or perhaps indistinguishable from one. Here is a list of things governments sometimes do if they dislike what you say or how you say it: lock you up, take your property, take your children, send you to die in a war. Here is a list of things tech companies sometimes do: delete your account.
Twitter, Facebook, Amazon, and Google do play a huge role in many people’s lives. To be kicked off a popular platform can be deeply unpleasant and unnerving. But the notion that political interference will result in broader access to a better product is naive at best and dangerous at worst.
On platforms that do any moderation or curation at all—both functions that are necessary for a pleasant or even comprehensible user experience—there are going to be many thousands of borderline calls each day, by humans and robots alike. And those decisions get more plentiful and complex over time. That, in turn, generates more room for error, and more consumer demand for clarity.
It was human beings—not robots—who decided to bar then–President Donald Trump from Twitter and Facebook in the days following the January 6 riot at the U.S. Capitol. Months later, at press time, an elaborately convened Facebook Oversight Board delivered a swift kick to the can by declaring that Trump’s suspension from the social media site was justified while also noting that an indefinite suspension is not consistent with the company’s terms of service.
I see why the former president and his supporters are enraged. Facebook did a terrible job of communicating what it was willing to tolerate from its users. Still, it’s not a First Amendment violation. It’s not proof of a trust that needs busting. And it’s certainly not a sign that Facebook is now more powerful than a government.
Ousted from Facebook and Twitter, Trump has set up his own site. This is a perfectly reasonable response to being banned—a solution that is available to virtually every American with access to the internet. In fact, for all the bellyaching over the difficulty of challenging Big Tech incumbents, the video-sharing app TikTok has gone from zero users to over a billion in the last five years. The live audio app Clubhouse is growing rapidly, with 10 million weekly active users, despite being invite-only and less than a year old. Meanwhile, Facebook’s daily active users declined in the last two quarters. And it’s worth keeping in mind that only 10 percent of adults are daily users of Twitter, hardly a chokehold on American public discourse.
Every single one of these sites is entirely or primarily free to use. Yes, they make money, sometimes lots of it. But the people who are absolutely furious about the service they are receiving are, by any definition, getting much more than they paid for. The results of a laissez-faire regime on the internet have been remarkable, a flowering of innovation and bountiful consumer surplus.
The question of the correct level of content moderation by Facebook, Twitter, Google, Amazon, and their would-be rivals is not a question that needs to be answered in the sphere of politics. We do not need to agree on a single answer. Which is good, because we never will.