Platform Immunity and “Platform Blocking and Screening of Offensive Material”

In an earlier post, I talked about the big picture of 47 U.S.C. § 230, the federal statute that broadly protects social media platforms (and other online speakers) from lawsuits for the defamatory, privacy-violating, or otherwise tortious speech of their users. Let’s turn now to some specific details of how § 230 is written, and in particular its key operative provision:

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1). [Codifier’s note: So in original [as enacted by Congress]. Probably should be “subparagraph (A).”]

Now recall the backdrop in 1996, when the statute was enacted. Congress wanted both to promote the development of the Internet and to protect users from offensive material. Indeed, § 230 was part of a law named "the Communications Decency Act," which also tried to ban various kinds of online porn; but such a ban was clearly constitutionally suspect, and indeed in 1997 the Supreme Court struck down that part of the law.

One possible alternative to a ban was encouraging service providers to block or delete various materials themselves. But a then-recent court decision, Stratton Oakmont v. Prodigy, held that service providers that engage in such content removal become "publishers" who face greater liability for tortious speech (such as libel) that they don't remove. Stratton Oakmont thus created a disincentive for service provider content control, including content control of the sort that Congress liked.

What did Congress do?

[1.] It sought to protect “blocking and screening of offensive material.”

[2.] It did this primarily by protecting "interactive computer service[s]"—basically anyone who runs a web site or other Internet platform—from being held liable for defamation, invasion of privacy, and the like in user-generated content, whether or not those services also blocked and screened offensive material. That's why Twitter doesn't need to fear losing lawsuits to people defamed by Twitter users, and I don't need to fear losing lawsuits to people defamed by my commenters.

[3.] It barred such liability for defamation, invasion of privacy, and the like without regard to the nature of the blocking and screening of offensive material (if any). Note that there is no “good faith” requirement in subsection (1).

So far we’ve been talking about liability when a service doesn’t block and screen material. (If the service had blocked an allegedly defamatory post, then there wouldn’t be a defamation claim against it in the first place.) But what if the service does block and screen material, and then the user whose material was blocked sues?

Recall that in such cases, even without § 230, the user would have had very few bases for suing. You generally don’t have a legal right to post things on someone else’s property; unlike with libel or invasion of privacy claims over what is posted, you usually can’t sue over what’s not posted. (You might have breach of contract claims, if the service provider contractually promised to keep your material up, but service providers generally didn’t do that; more on that, and on whether § 230 preempts such claims, in a later post.) Statutes banning discrimination in public accommodations, for instance, generally don’t apply to service providers, and in any case don’t generally ban discrimination based on the content of speech.

Still, subsection (2) did provide protection for service providers even against these few bases (and any future bases that might be developed)—unsurprising, given that Congress wanted to promote “blocking and screening”:

[4.] A platform operator was free to restrict material that it “considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

  1. The material doesn't have to be objectionable in some objective sense—it's enough that the operator "consider[s it] to be" objectionable.
  2. The material isn't limited to particular kinds of speech (such as sexually themed speech): It's enough that the operator "consider[s it] to be" sexually themed or excessively violent or harassing or otherwise objectionable. If the categories were all of one sort (e.g., sexual), then "otherwise objectionable" might be read, under the legal principle of ejusdem generis, as limited to things of that sort: "when a generic term follows specific terms, the generic term should be construed to reference subjects akin to those with the specific enumeration." But, as the Ninth Circuit recently noted,

     [T]he specific categories listed in § 230(c)(2) vary greatly: Material that is lewd or lascivious is not necessarily similar to material that is violent, or material that is harassing. If the enumerated categories are not similar, they provide little or no assistance in interpreting the more general category…. "Where the list of objects that precedes the 'or other' phrase is dissimilar, ejusdem generis does not apply[.]" …

  3. What's more, "excessively violent," "harassing," and "otherwise objectionable" weren't defined in the definitions section of the statute, and (unlike terms such as "lewd") lacked well-established legal definitions. That supports the view that Congress didn't expect courts to have to decide what's excessively violent, harassing, or otherwise objectionable, because the decision was left to the platform operator.

[5.] Now this immunity from liability for blocking and screening was limited to actions “taken in good faith.” “Good faith” is a famously vague term.

But it’s hard to see how this would forbid blocking material that the provider views as false and dangerous, or politically offensive. Just as providers can in “good faith” view material that’s sexually themed, too violent, or harassing as objectionable, so I expect that many can and do “in good faith” find to be “otherwise objectionable” material that they see as a dangerous hoax, or “fake news” more broadly, or racist, or pro-terrorist. One way of thinking about is to ask yourself: Consider material that you find to be especially immoral or false and dangerous; all of us can imagine some. Would you “in good faith” view it as “objectionable”? I would think you would.

What wouldn’t be actions “taken in good faith”? The chief example is likely actions that are aimed at “offensive material” but rather that are motivated by a desire to block material from competitors. Thus, in Enigma Software Group USA v. Malwarebytes, Inc., the Ninth Circuit reasoned:

Enigma alleges that Malwarebytes blocked Enigma’s programs for anticompetitive reasons, not because the programs’ content was objectionable within the meaning of § 230, and that § 230 does not provide immunity for anticompetitive conduct. Malwarebytes’s position is that, given the catchall, Malwarebytes has immunity regardless of any anticompetitive motives.

We cannot accept Malwarebytes’s position, as it appears contrary to CDA’s history and purpose. Congress expressly provided that the CDA aims “to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services” and to “remove disincentives for the development and utilization of blocking and filtering technologies.” Congress said it gave providers discretion to identify objectionable content in large part to protect competition, not suppress it. In other words, Congress wanted to encourage the development of filtration technologies, not to enable software developers to drive each other out of business.

The court didn’t talk about “good faith” as such, but its reasoning would apply here: Blocking material ostensibly because it’s offensive but really because it’s from your business rival might well be seen as being not in good faith. But blocking material that you really do think is offensive to many of your users (much like sexually themed or excessively violent or harassing material is offensive to many of your users) seems to be quite consistent with good faith.

I’m thus skeptical of the argument in President Trump’s “Preventing Online Censorship” draft Executive Order that,

Subsection 230(c)(1) broadly states that no provider of an interactive computer service shall be treated as a publisher or speaker of content provided by another person. But subsection 230(c)(2) qualifies that principle when the provider edits the content provided by others. Subparagraph (c)(2) specifically addresses protections from "civil liability" and clarifies that a provider is protected from liability when it acts in "good faith" to restrict access to content that it considers to be "obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable." The provision does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform's terms of service. When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. By making itself an editor of content outside the protections of subparagraph (c)(2)(A), such a provider forfeits any protection from being deemed a "publisher or speaker" under subsection 230(c)(1), which properly applies only to a provider that merely provides a platform for content supplied by others.

As I argued above, § 230(c)(2) doesn’t qualify the § 230(c)(1) grant of immunity from defamation liability (and similar claims)—subsection (2) deals with the separate question of immunity from liability for wrongful blocking or deletion, not with liability for material that remains unblocked and undeleted.

In particular, the “good faith” and “otherwise objectionable” language doesn’t apply to § 230(c)(1), which categorically provides that, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” period. (Literally, period.)

Removing or restricting access to content thus does not make a service provider a “publisher or speaker”; the whole point of § 230 was to allow service providers to retain immunity from claims that they are publishers or speakers, regardless of whether and why they “block[] and screen[] offensive material.”

Now this does leave the possibility of direct liability for “bad-faith” removal of material. A plaintiff would have to find an affirmative legal foundation for complaining that a private-company defendant has refused to let the plaintiff use the defendant’s facilities—perhaps as Enigma did with regard to false advertising law, or as someone might do with regard to some antitrust statute. The plaintiff would then have to show that the defendant’s action was not “taken in good faith to restrict access to or availability of material that the provider … considers to be … objectionable, whether or not such material is constitutionally protected.”

My sense is that it wouldn't be enough to show that the defendant wasn't entirely candid in explaining its reasoning. If I remove your post because I consider it lewd, but I lie to you and say that it's because I thought it infringed someone's copyright (maybe I don't want to be seen as a prude), I'm still taking action in good faith to restrict access to material that I consider lewd; likewise as to, say, pro-terrorist material that I find "otherwise objectionable." To find bad faith, there would have to be some reason to conclude that the provider wasn't genuinely acting based on its considering the material to be objectionable—perhaps, as Enigma suggests, evidence that the defendant was just trying to block a competitor. (I do think that a finding that the defendant breached a binding contract should be sufficient to avoid (c)(2), simply because § 230 immunity can be waived by contract the way other rights can be.)

But in any event, the enforcement mechanism for such alleged misconduct by service providers would have to be a lawsuit for wrongful blocking or removal of posts, based on the limited legal theories that prohibit such blocking or removal. It would not be a surrender of the service provider’s legal immunity for defamation, invasion of privacy, and the like based on posts that it didn’t remove.
