“Corona and the Constitution” on Zoom, Plus SCOTUS Jeopardy!

Yesterday I gave a presentation to the Chicago Federalist Society chapter on “Corona and the Constitution,” via Zoom. I’ve embedded the video below. Here, I’d like to offer some thoughts about giving presentations to lawyer groups over Zoom.

First, for any group larger than 10 or 15 attendees, I would recommend Webinar mode. In this mode, attendees’ cameras and microphones are off by default. Attendees do not see the Brady Bunch grid; they see only the presenters, full screen. This approach avoids the awkward moment where someone forgets to mute his mic, or inadvertently turns her camera on at the wrong moment.

Second, you should still try to find ways for attendees to speak and get involved. Last night I tried SCOTUS Jeopardy! I created a PowerPoint of ten Supreme Court trivia questions. The hearty lawyers of the Chicago chapter got 9 out of the 10 questions correct. You can see the questions here. Here is the question that stumped everyone. The fact that it stumped everyone reaffirms that he is the most underrated framer.

After I read the question, I asked people in attendance to raise their blue hands once they figured out the answer. Zoom automatically sorts people based on when they raise their hands, so the system provides some fairness. When I finished reading the question, I could call on the first person in the queue. I could either “enable” their microphone, which lets them quickly speak, or make them a “panelist,” which lets them broadcast their camera to the entire Zoom room. I chose the former last night. I think this format may in fact work for lecture classes. Staring at a grid of 36 people is distracting. I don’t know why students need to see their classmates during the session. They should be focused on paying attention to the lecture and taking notes; not staring at their classmate’s puppy. (Yes, lots of students have dogs on their laps during class.) All they need to see is me, and the student who is asking/answering questions at any given time. I may experiment with this approach in the fall.
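
(A technical aside: the fairness here is just first-come, first-served ordering by the time each hand goes up. Below is a minimal sketch, in Python, of that kind of timestamp-ordered queue; the class and method names are my own invention for illustration, not Zoom’s actual API.)

```python
import time
from collections import deque

class RaisedHandQueue:
    """Illustrative first-come, first-served queue for raised hands.
    (Hypothetical names; Zoom's real implementation is not public.)"""

    def __init__(self):
        self._queue = deque()  # holds (timestamp, attendee) in raise order

    def raise_hand(self, attendee):
        # Record when the hand went up; earlier raises stay ahead in line.
        self._queue.append((time.time(), attendee))

    def lower_hand(self, attendee):
        # Remove the attendee wherever they are in line.
        self._queue = deque(item for item in self._queue if item[1] != attendee)

    def call_on_next(self):
        # The host calls on whoever raised a hand first.
        if self._queue:
            _, attendee = self._queue.popleft()
            return attendee
        return None

# Example: three attendees raise hands; the host calls on them in raise order.
q = RaisedHandQueue()
for name in ["Alice", "Bob", "Carol"]:
    q.raise_hand(name)
print(q.call_on_next())  # prints "Alice", who raised a hand first
```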

Also, as a perk, we were able to give a copy of my new book to each person who correctly answered a question. (I was originally slated to do a book signing in Chicago on May 28, but, alas, COVID.)

Third, presentations should be kept shorter. Attention spans are short on Zoom. When I have a solo slot, I usually plan to speak for about 40 minutes. Yesterday I spoke for about 25 minutes. Indeed, Jeopardy filled nearly 18 minutes.

Fourth, there are different ways to handle text Q&A. Some people like to type questions into the chat feature. Those messages are visible to everyone. Others like to type questions into the Q&A feature. Those messages are visible only to the host. My preference is to disable the chat. To be frank, some questions are not worth answering, and can distract everyone else. I would much rather have the questions visible only to the host, who can separate the wheat from the chaff. For example, I can summarize a question if it is too long, or skip over questions that are a waste of time. (Something a moderator cannot do in real life!)

Fifth, I much prefer people to ask questions by raising their blue hands. I then call on them to speak. This breaks up the monotony, and livens up the event.

Sixth, the quality of web cameras sucks. Truly, it does. Even the most expensive 1080p webcam is roughly equivalent to an iPhone 5. We all have stunning cameras on our smartphones, but they cannot easily be used as web cameras. I have begun to research using a mirrorless or DSLR camera as a webcam. The process is complicated, and even tougher on Macs. Plus, Zoom blocks several hardware workarounds. For example, the Zoom desktop app will not work with the Canon DSLR link; you have to use the Chrome browser version. I am not sure what I’ll do for the fall semester. I think that higher-quality streams (think of your favorite YouTube star!) will be easier to watch. But getting the right setup requires a very expensive game of trial and error.

Seventh, my new eight-monitor setup worked well, though I tweaked it. I had planned to use my laptop screen for Zoom, and the mini-monitor for lecture notes. I flipped it: I put Zoom on the mini-monitor, and the reading materials on my laptop screen. This approach let me keep my eyes in a far more natural position during the broadcast.

Finally, here is the video of my Corona event.

My hair keeps getting bigger as the lockdown continues. I think I’ve gained about 3 inches in height! Compare with a video I recorded shortly after the lockdown began.

Section 230 Bootleggers and Baptists

Economics professor Bruce Yandle developed the concept of “Bootleggers and Baptists”: often, different groups with different motivations favor the same regulation. For example, who favors prohibition laws? Baptists, because they are morally opposed to alcohol. And Bootleggers, who stand to profit from selling moonshine on the black market. On its own, neither group may be able to get prohibition laws enacted. But working together as a coalition, they can achieve results. Yandle writes, “[Baptists] take the moral high ground, while the bootleggers persuade the politicians quietly, behind closed doors.”

We are seeing a strange “Bootlegger and Baptist” coalition with respect to Section 230. President Trump and other Republicans have called for the repeal of that seminal law. So have Joe Biden and other progressives. Indeed, advocates for revenge porn laws placed a target on Section 230’s back many years ago. These groups seek to repeal Section 230 for very different reasons. The conservatives think Twitter is biased against conservatives, and is shadow-banning their tweets. And the progressives think Twitter is shielding abusive content that affects marginalized groups.

The coalition to support Section 230, I fear, is dwindling. The ACLU is not what it used to be. And tech companies are not particularly sympathetic plaintiffs.

The next Congress may be able to muster bipartisan votes to kill Section 230. But I am skeptical that it can adopt far-reaching privacy legislation. Once the federal preemption argument is gone, states will adopt their own European-style privacy laws. Tech companies would then face a patchwork of fifty-one extremely imperfect solutions.

I’ll let you decide which group is the Baptists, and which group is the Bootleggers.

Court Orders: Stop Tweeting About Your Ex-Friend’s Criminal Conviction—Though Tweets Didn’t Use the Man’s Name

From Craft v. Fuller, decided Wednesday by the Florida Second District Court of Appeal (written by Judge Craig Villanti, joined by Judges Morris Silberman and Matthew Lucas); that court has compiled a pretty good record in recent years of reversing such overbroad orders. The facts:

Craft and Fuller are former friends and business partners who had a falling out of some sort several years ago. Since the falling out, they have filed petitions for injunction against stalking against each other at various times. In October 2018, they agreed to leave each other alone, and they voluntarily dismissed their respective injunction petitions.

Nevertheless, shortly thereafter, Craft began posting tweets on his own personal Twitter feed using the hashtag “spoofingschmuck.” Some of these tweets contained other comments as well, but none of them referenced Fuller by name. Fuller does not follow Craft on Twitter; however, some of Fuller’s friends and family told him about Craft’s tweets, and Fuller believed that those tweets were a direct reference to him because he had been arrested in the past for spoofing.

{Wikipedia defines “caller ID spoofing,” which is what Craft tweeted about in this case, as “the practice of causing the telephone network to indicate to the receiver of a call that the originator of the call is a station other than the true originating station. This can lead to a caller ID display showing a phone number different from that of the telephone from which the call was placed.” Florida law makes caller ID spoofing a crime under certain circumstances.}

In response to being notified of these tweets, Fuller filed a new petition for injunction against Craft. In that petition, Fuller alleged that Craft’s tweets using the “spoofingschmuck” hashtag were directed at him and that as a result of these tweets, he had suffered substantial emotional distress. At a hearing on the petition, Fuller testified that while he does not follow Craft on Twitter, the fact that friends and family notified him of Craft’s tweets demonstrated that other people believed the posts to be about Fuller. Fuller also testified that because of his prior arrests for spoofing and the prior antagonism between the parties, he had suffered substantial emotional distress over the tweets, including losing the ability to sleep and eat.

For his part, Craft denied that the tweets were in reference to Fuller. Instead, he testified that he was annoyed by spoofing in general and that he was using this hashtag to track spoofed calls to his phone in a way that would allow him to express his annoyance and disdain for anyone who would make spoof calls. He also testified that he enjoys posting tweets and uses it as a means of entertainment.

After considering this evidence, the trial court concluded that Craft’s tweets were “directed at” Fuller and that a reasonable person in Fuller’s position, i.e., one who had been arrested several times for spoofing, would suffer substantial emotional distress over the tweets. The court also concluded that Craft’s tweets served no purpose other than harassment. Based on these conclusions, the court entered a five-year injunction against Craft, which he now appeals.

The law (emphasis in original):

Section 784.0485(1), Florida Statutes (2014), provides that “[f]or the purposes of injunctions for protection against stalking under this section, the offense of stalking shall include the offense of cyberstalking.” Section 784.048(1)(d) defines cyberstalking as “engag[ing] in a course of conduct to communicate, or to cause to be communicated, words, images, or language by or through the use of electronic mail or electronic communication, directed at a specific person, causing substantial emotional distress to that person and serving no legitimate purpose.” Harassment is “a course of conduct directed at a specific person which causes substantial emotional distress … and serves no legitimate purpose.” § 784.048(1)(a). Thus, cyberstalking is harassment via electronic communications.

The court concluded that the “directed at a specific person” requirement wasn’t satisfied, partly because Fuller wasn’t named and partly because Craft’s tweets were only about Fuller, rather than being sent to him:

[T]o be entitled to the injunction, Fuller was required to prove that Craft’s tweets were “directed at a specific person,” namely him, that a reasonable person would have suffered substantial emotional distress as a result of the tweets, and that the tweets served no legitimate purpose….

This court and others have held that postings on one’s own social media page do not constitute actions “directed at a specific person” as a matter of law. For example, in Horowitz v. Horowitz (Fla. 2d DCA 2015), this court held that postings on the defendant’s own Facebook page were not “directed at” his ex-wife….

Similarly, in Logue v. Book (Fla. 4th DCA 2019), the Fourth District held that tweets and other social media posts, even though they clearly referred to the petitioner, did not constitute conduct “directed at” the petitioner because such tweets and posts are available for all to see and therefore are directed at a broad audience, of which the petitioner is only one. And in David v. Textor (Fla. 4th DCA 2016), the court held that “where comments are made on an electronic medium to be read by others, they cannot be said to be directed to a particular person.” See also Chevaldina v. R.K./FL Mgmt., Inc. (Fla. 3d DCA 2014) (“Angry social media postings are now common. Jilted lovers, jilted tenants, and attention-seeking bloggers spew their anger into fiber-optic cables and cyberspace. But analytically, and legally, these rants are essentially the electronic successors of the pre-blog, solo complainant holding a poster on a public sidewalk in front of an auto dealer that proclaimed, ‘DON’T BUY HERE! ONLY LEMONS FROM THESE CROOKS!'”); compare United States v. Cassidy (D. Md. 2011) (comparing Twitter postings to papers tacked to a bulletin board and noting that unlike the case with a telephone call, letter, or email specifically addressed to and directed at another person, “[o]ne does not have to walk over and look at another person’s bulletin board”).

Here, the evidence at the hearing established that the disputed tweets were posted on Craft’s own personal Twitter feed. These tweets did not reference Fuller by name, and Craft did not “tag” or otherwise draw Fuller’s attention to the tweets. Instead, the tweets were simply expressions of Craft’s annoyance with whomever may have been spoofing him.

As tweets posted on Craft’s own Twitter feed, they were not “directed at” any specific person but were instead directed at his entire collection of followers, which notably did not include Fuller. And even if one or more of the tweets may have been an indirect reference to Fuller, such indirect references posted on a private Twitter feed are insufficient as a matter of law to support a conclusion that the tweets were “directed at” Fuller. Therefore, Fuller failed to prove, as a matter of law, that Craft’s tweets constituted a course of conduct “directed at” Fuller for purposes of the cyberstalking statute….

The court also concluded that the substantial-emotional-distress element wasn’t satisfied:

In addition to showing that the tweets were “directed at” him, Fuller was also required to prove that an objectively reasonable person would have suffered substantial emotional distress as a result of the tweets… Case law shows that the bar for establishing that a reasonable person would suffer substantial emotional distress is set fairly high….

Here, the record shows that Craft’s tweets were neither threatening nor menacing nor hostile nor, frankly, even embarrassing. They did not mention Fuller by name, they did not tag Fuller so as to single him out, and they did not occur in response to some otherwise threatening event that might have changed their character. No objectively reasonable person—not even one with a prior arrest for spoofing—would have suffered “substantial emotional distress” as a result of these tweets. Therefore, Fuller’s evidence was insufficient to prove this element as well….

And the court concluded that the no-legitimate-purpose element wasn’t satisfied, either:

Fuller was also required to prove that Craft’s tweets served no legitimate purpose, i.e., that they served no purpose other than to harass Fuller. The trial court concluded that the tweets had no legitimate purpose based solely on its earlier finding that the tweets were “directed at” Fuller. Again, the court did not apply the proper legal standard….

In this case, the only evidence on the issue of the purpose of the tweets was Craft’s testimony that they were a way for him to log the prank calls he received and that it was entertaining for him to do so in this fashion. The trial court rejected this explanation, finding that it was not credible. And having rejected Craft’s testimony, the court then found that his tweets had no legitimate purpose solely based on its earlier ruling that the tweets were “directed at” Fuller. This ruling is in contravention of the law for two reasons.

First, the trial court misapplied the applicable burdens of proof. Regardless of how misguided Craft’s tweets may have been, Fuller had the burden to prove that they served no purpose other than to harass Fuller. Craft did not have the burden to prove that his tweets had a legitimate purpose; Fuller had the burden to prove that they did not. This he failed to do.

Second, the mere fact that tweets or other communications are “directed at” an individual does not establish, as a matter of law, that they have no legitimate purpose. As long as there is a reason for the communications other than harassment, the communications will have a legitimate purpose even if they are directed at someone who does not welcome them. See O’Neill (communication to advise person of a documentary was a legitimate purpose); Goudy (communications about dance team activities had a legitimate purpose); Alter (communications about a loan repayment had a legitimate purpose). The court could not simply rely on its finding that the tweets were “directed at” Fuller to also conclude that they ipso facto had no legitimate purpose. And the evidence presented here supports no such conclusion.

The court closed with this:

We also take this opportunity to remind the parties that injunctions “are not a panacea to be used to cure all social ills. In fact, nowhere in the statutory catalog of improper behavior is there a provision for court-ordered relief against uncivil behavior.” The parties agreed in 2018 to go their separate ways and leave each other alone. It would behoove them to honor this agreement….

California’s COVID-19 Shutdown Was Driven by Science. Until It Suddenly Wasn’t.

In response to Californians who were protesting his lockdown orders, Gov. Gavin Newsom in April politely encouraged them to follow social-distancing practices while protesting and assured all Californians that his COVID-19 responses would not be driven by public opinion or other similarly low-brow concerns.

“We are going to do the right thing, not judge by politics, not judge by protests, but by science,” the governor said.

As I noted recently, “science” isn’t a black-and-white, Ten Commandments sort of thing. It is a method for evaluating the best available data. It shouldn’t be used as a mantra—or a cudgel to beat opponents into submission. It changes. Scientific forecasts are speculative and often wrong. Lawmakers have the responsibility to weigh non-scientific concerns, including those involving our liberties, and not just blindly follow what select scientists say.

Nevertheless, we all assume the governor was saying that he was following the best scientifically available information to determine when he—through his largely unchecked emergency executive powers—would let Californians reopen their businesses, leave their homes, go back to work and head to the beaches and parks again. That sounds perfectly reasonable, but it’s interesting how rapidly the governor’s “science” has changed.

Around a week ago, Newsom’s “science” had called for a little loosening in the rules, but for a continuation of the stay-at-home orders. He had allowed some counties to petition for a quicker reopening, but imposed pages of tough restrictions on them. He sent regulators to oversee Yuba and Sutter counties and threatened to yank their aid after they defied the governor’s orders. His “science” was clear: The lockdowns must continue.

Then, without much notice, the governor last week announced a much broader reopening, which seemed to take most Californians by surprise. The governor declared that he was giving local governments the go-ahead to move quickly based on their particular understanding of their own regional conditions. This includes a likely reopening of shopping malls and dine-in service at restaurants.

A KPCW reporter asked Newsom how he could allow further openings as the number of COVID-19 cases increases by thousands daily. “We never experienced the peaks that many other parts of the country experienced. And we’re seeing not only stability, but we’re seeing a decline over a two-week extended period of hospitalizations and number of patients in ICUs,” the governor said.

The governor also said his new rules are based on “data” showing that the state has enough hospital space and protective gear. Of course, such information has been pretty obvious for weeks. In reality, the science didn’t change so much as the standard by which the state evaluates the science. Previously, the governor forbade counties from expanding any reopening unless there had been no deaths there from COVID-19 over a two-week period.

Now, as the Los Angeles Times reported, “The new standard removes the death rate requirement and replaces it with a more generous threshold based on rates of newly confirmed cases. Counties will be able to move toward a more expansive reopening if they can show fewer than 25 coronavirus cases per 100,000 residents in the last 14 days—a standard that was originally 1 new case per 10,000 residents.”

Sure, California has made progress in dealing with COVID-19 infections, but there have been no seismic shifts on that front. (The old standard of 1 new case per 10,000 residents works out to 10 cases per 100,000, so the new 25-per-100,000 threshold is two and a half times more permissive.) It’s like the New Math, which focused students’ attention on alternative math concepts. Now we can also embrace the New Science.

Obviously, there were no substantive changes in the medical science, but there were serious changes in two other important fields: economic science and political science. The governor knows that the Trump administration is likely to give California and four other Western states the $1 trillion bailout they have requested at half past never.

Newsom recently announced that California has gone from a surplus to a $54 billion deficit—and has burned through its rainy-day fund. Union officials are upset about the proposed 10-percent public-employee salary cuts. If the state’s economy doesn’t get started soon, then Democrats will have to give up their big-spending dreams and the pension funds could start circling the drain.

The shutdowns have created an enormous economic problem, the extent of which might take months to become fully evident.  Wouldn’t you love to be a fly on the wall in any conversation between Newsom and California Public Employees’ Retirement System (CalPERS) officials?

Politically, the natives are getting restless. Rural counties are in outright defiance. Even residents of urban areas are largely ignoring the restrictions. As longtime Capitol columnist George Skelton recently noted, Newsom has “barely been staying one step ahead of rural rebels who have been challenging his control and testing him” and “has wisely relented.”

That’s exactly right. This is excellent news, by the way. It shows that the governor is finally looking at costs and benefits. But don’t kid yourself. None of it has anything to do with “science.”

This column was first published in the Orange County Register.

The Hunt

The Hunt is a movie that intended to use the familiar, vicious fiction trope of the rich hunting the poor for sport to offer a satirical take on modern politics. The hunters in this case (led by a brittle, vengeful Hilary Swank) are liberal urban elites. The victims are so-called “deplorables” (yes, the term is used) who espouse populist conservative rhetoric.

A dozen of these Trumpists are kidnapped and forced to run or fight for their lives. Most participants end up brutally killed, with Crystal (Betty Gilpin) as the final “red state” survivor attempting to bring the whole sick scheme down.

The movie was supposed to be released in August 2019, but the trailers drew fire from conservatives (including President Donald Trump), who believed The Hunt was deliberately fostering hatred toward them. It finally got its theatrical release in March.

The outrage was undeserved; the right-wing critics missed the point of this apparent product of the Hollywood leftists they hate and fear. It is very clear in The Hunt that we’re not supposed to be rooting for the petty, whiny, privileged hunters, who talk in the language of social justice buzzwords and are, indeed, the villains of the story. The deplorables may be under-educated blowhards who believe in conspiracies, but they are obviously the victims. Crystal—partly because she eschews politics entirely—is the only character worth rooting for.

Brickbat: Essentially Dumb

A French appellate court has upheld a lower court ruling that barred Amazon from selling non-essential goods during the coronavirus pandemic. The ruling limits Amazon’s sales to food, medical supplies, and hygiene products. Amazon faces a fine of €100,000 (about $108,000) for every delivery that violates the court ruling.

First thoughts on the section 230 executive order

For all the passion it has unleashed, President Trump’s executive order on section 230 of the Communications Decency Act is pretty modest in impact. It doesn’t do anything to undermine the part of section 230 that protects social media from liability for the things that its users say. That’s paragraph (1) of section 230(c), and the order practically ignores it.

Instead, the order is all about paragraph (2), which protects platforms from liability when they remove or restrict certain content: “No provider or user of an interactive computer service shall be held liable on account of  … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

This makes some sense in terms of the President’s grievance.  He isn’t objecting to Twitter’s willingness to give a platform to people he disagrees with.  He objects to Twitter’s decision to cordon off his speech with a fact-check warning, as well as all the other occasions on which Twitter and other social media platforms have taken action against conservative speech. So it makes sense for him to focus on the provision that seems to immunize biased and pretextual decisions to downgrade viewpoints unpopular in the Valley.

(I note here that the existence of a liberal bias in the application of social media content moderation is heavily contested, especially by commentators on the left. They point out, correctly, that the evidence of a left-leaning bias is anecdotal and subjective. Of course, the same could be said of left-leaning bias in media outlets like the Washington Post or the New York Times. I’m friends with many reporters who deny such a bias exists. Yet most readers of these and other traditional media recognize that there is bias at work there—rarely in reporting the facts, but often in deciding which stories are newsworthy, how the facts are presented, or how past events are summarized. If you are sure there’s no bias at work in the mainstream press, then I can’t persuade you that the same dynamic is at work on social media’s content moderation teams. But if you have seen even a glimmer of liberal bias in the New York Times, you might ask yourself why there would be less in the decisions of Silicon Valley’s content police, whose decisions are often made in secret by unaccountable young people who have not been inculcated in a journalistic ethic of objectivity.)

What’s interesting and useful in the order’s focus on content derogation is that it addresses precisely the claim that anticonservative bias isn’t real. For it is aimed at bringing speech suppression decisions into the light, where we can all evaluate them.

In fact, that’s pretty much all it’s aimed at.  The order really only has two and a half substantive provisions, and they’re all designed to increase the transparency of takedown decisions.

The first provision tells NTIA (the executive branch’s liaison to the FCC) to suggest a rulemaking to the FCC. The purpose of the rule is to spell out what it means for the tech giants to carry out their takedown policies “in good faith.” The order makes clear the President’s view that takedowns are not “taken in good faith” if they are “deceptive, pretextual, or inconsistent with a provider’s terms of service” or if they are “the result of inadequate notice, the product of unreasoned explanation, or [undertaken] without a meaningful opportunity to be heard.” This is not a Fairness Doctrine for the internet; it doesn’t mandate that social media show balance in their moderation policies. It is closer to a Due Process Clause for the platforms. They may not announce a neutral rule and then apply it pretextually. And the platforms can’t ignore the speech interests of their users by refusing to give users even notice and an opportunity to be heard when their speech is suppressed.

The second substantive provision is similar. It asks the FTC, which has a century of practice disciplining the deceptive and unfair practices of private companies, to examine social media takedown decisions through that lens.  The FTC is encouraged (as an independent agency it can’t be told) to determine whether entities relying on section 230 “restrict speech in ways that do not align with those entities’ public representations about those practices.”

(The remaining provision is an exercise of the President’s sweeping power to impose conditions on federal contracting. It tells federal agencies to take into account the “viewpoint-based speech restrictions imposed by each online platform” in deciding whether the platform is an “appropriate” place for the government to post its own speech. It’s hard to argue with that provision in the abstract. Federal agencies have no business advertising on, say, Pornhub. In application, of course, there are plenty of improper or unconstitutional ways the policy could play out. But as a vehicle for government censorship it lacks teeth; one doubts that the business side of these companies cares how many federal agencies maintain their own Facebook pages or Twitter accounts. And in any event, we’ll have time to evaluate this sidecar provision when it is actually applied.)

That’s it.  The order calls on social media platforms to explain their speech suppression policies and then to apply them honestly. It asks them to provide notice, a fair hearing, and an explanation to users who think they’ve been treated unfairly or worse by particular moderators.

I’ve had many conversations with participants in the debate over the risks arising from social media’s sudden control of what ordinary Americans (or Brazilians or Germans) can say to their friends and neighbors about the issues of the day. That is a remarkable and troubling development for those of us who hoped the internet would bring a flowering of  views free from the intermediation of traditional sources. But you don’t have to be a conservative to worry about how this unprecedented power could be abused.

In another context, I have offered a rule of thumb for evaluating new technology: You don’t really know how evil a technology can be until the engineers who depend on it for employment begin to fear for their jobs. Today, social media’s power is treated by the companies themselves as a modest side benefit of their astounding rise to riches; they can stamp out views they hate as a side gig while tending to the real business of extending their reach and revenue. But every one of us should wonder, “How will they use that power when the ride ends and their jobs are at risk?” And, more to the point, “How will we discover what they’ve done?”

Such questions explain why even those who don’t lean to the right think that the companies’ control of our discourse needs more scrutiny. There are no easy ways to discipline the power of Big Tech in a country that has a First Amendment, but the answer most observers offer is more transparency.

We need, in short, to know more about when and how and why the big platforms decide to suppress our speech.

This executive order is a good first step toward finding out.

Minneapolis Police Killed George Floyd, Then Failed To Protect Property Owners From Riots

Police in Minneapolis catalyzed Wednesday night’s violent protests by killing George Floyd on Monday. They’ve since done a terrible job of protecting innocent property owners from being victimized by the rioting that’s erupted in response to Floyd’s death.

Floyd was killed Monday night after being stopped by four officers with the Minneapolis Police Department (MPD) on suspicion of forgery. During his arrest, one of the officers held his knee on Floyd’s neck for eight minutes while the man complained that he couldn’t breathe. Floyd later died in the hospital.

Video of the incident, and later factual discrepancies in the police account of the event, were enough to get all four officers fired on Tuesday, and for the U.S. Department of Justice to open a civil rights investigation into Floyd’s death.

Neither move has been enough to mollify many in Minneapolis, who’ve taken to the streets for two nights of demonstrations that have turned increasingly violent.

On Tuesday, protestors vandalized police vehicles and threw rocks at a local MPD precinct building where the four fired officers involved in Floyd’s death were assigned. Police responded with rubber bullets and tear gas.

Things escalated dramatically last night when further demonstrations resulted in the looting of local businesses. At least 16 buildings were damaged in the protests, according to the city’s fire chief.

Videos and photos of the protests and their aftermath show an AutoZone being torched, a Target being looted, and an under-construction apartment complex being set on fire.

One of the few bright spots was video captured by reporters of several armed men protecting a tobacconist from rioters. Their presence could well have prevented the business from being vandalized or even destroyed.

That these four amateurs were able to protect this one business raises the question of why the city’s more numerous and better-equipped professional police weren’t able to protect other businesses in a similar fashion.

Police departments exist, at least on paper, in order to protect people’s rights and people’s property. Over the past couple of days, police in Minneapolis have proven unable to do either.

Minneapolis City Councilmember Jeremiah Ellison summed up their failure pretty well in a tweet.

Obviously, the destruction of businesses that had nothing to do with Floyd’s death is unjustified. Anyone guilty of vandalism or theft during the past few days of protest deserves to be punished.

None of this relieves police of their responsibility to ensure public order or protect innocent people and businesses from being violated.

In response to questions about last night’s destruction, Minneapolis Police Chief Medaria Arradondo put the blame on outside agitators, saying, “People involved in the criminal conduct last night were not known Minneapolitans.” Perhaps he should look closer to home when assigning blame for the destruction of the past few days.

Platform Immunity and “Platform Blocking and Screening of Offensive Material”

In an earlier post, I talked about the big picture of 47 U.S.C. § 230, the federal statute that broadly protects social media platforms (and other online speakers) from lawsuits for the defamatory, privacy-violating, or otherwise tortious speech of their users. Let’s turn now to some specific details of how § 230 is written, and in particular its key operative provision:

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1). [Codifier’s note: So in original [as enacted by Congress]. Probably should be “subparagraph (A).”]

Now recall the backdrop in 1996, when the statute was enacted. Congress wanted both to promote the development of the Internet, and to protect users from offensive material. Indeed, § 230 was part of a law named “the Communications Decency Act,” which also tried to ban various kinds of online porn; but such a ban was clearly constitutionally suspect, and indeed in 1997 the Court struck down that part of the law.

One possible alternative to a ban was encouraging service providers to block or delete various materials themselves. But a then-recent court decision, Stratton Oakmont v. Prodigy, held that service providers that engage in such content removal become “publishers” who face greater liability for tortious speech (such as libel) that they don’t remove. Stratton Oakmont thus created a disincentive for service provider content control, including content control of the sort that Congress liked.

What did Congress do?

[1.] It sought to protect “blocking and screening of offensive material.”

[2.] It did this primarily by protecting “interactive computer service[s]”—basically anyone who runs a web site or other Internet platform—from being held liable for defamation, invasion of privacy, and the like in user-generated content whether or not those services also blocked and screened offensive material. That’s why Twitter doesn’t need to fear losing lawsuits to people defamed by Twitter users, and I don’t need to fear losing lawsuits to people defamed by my commenters.

[3.] It barred such liability for defamation, invasion of privacy, and the like without regard to the nature of the blocking and screening of offensive material (if any). Note that there is no “good faith” requirement in subsection (1).

So far we’ve been talking about liability when a service doesn’t block and screen material. (If the service had blocked an allegedly defamatory post, then there wouldn’t be a defamation claim against it in the first place.) But what if the service does block and screen material, and then the user whose material was blocked sues?

Recall that in such cases, even without § 230, the user would have had very few bases for suing. You generally don’t have a legal right to post things on someone else’s property; unlike with libel or invasion of privacy claims over what is posted, you usually can’t sue over what’s not posted. (You might have breach of contract claims, if the service provider contractually promised to keep your material up, but service providers generally didn’t do that; more on that, and on whether § 230 preempts such claims, in a later post.) Statutes banning discrimination in public accommodations, for instance, generally don’t apply to service providers, and in any case don’t generally ban discrimination based on the content of speech.

Still, subsection (2) did provide protection for service providers even against these few bases (and any future bases that might be developed)—unsurprising, given that Congress wanted to promote “blocking and screening”:

[4.] A platform operator was free to restrict material that it “considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

  1. The material doesn’t have to be objectionable in some objective sense—it’s enough that the operator “consider[ it] to be” objectionable.
  2. The material isn’t limited to particular speech (such as sexually themed speech): It’s enough that the operator “consider[ it] to be” sexually themed or excessively violent or harassing or otherwise objectionable. If the categories were all of one sort (e.g., sexual), then “otherwise objectionable” might be read, under the legal principle of ejusdem generis, as limited to things of that sort: “when a generic term follows specific terms, the generic term should be construed to reference subjects akin to those with the specific enumeration.” But, as the Ninth Circuit recently noted,

     [T]he specific categories listed in § 230(c)(2) vary greatly: Material that is lewd or lascivious is not necessarily similar to material that is violent, or material that is harassing. If the enumerated categories are not similar, they provide little or no assistance in interpreting the more general category…. “Where the list of objects that precedes the ‘or other’ phrase is dissimilar, ejusdem generis does not apply[.]” …

  3. What’s more, “excessively violent,” “harassing,” and “otherwise objectionable” weren’t defined in the definitions section of the statute, and (unlike terms such as “lewd”) lacked well-established legal definitions. That supports the view that Congress didn’t expect courts to have to decide what’s excessively violent, harassing, or otherwise objectionable, because the decision was left to the platform operator.

[5.] Now this immunity from liability for blocking and screening was limited to actions “taken in good faith.” “Good faith” is a famously vague term.

But it’s hard to see how this would forbid blocking material that the provider views as false and dangerous, or politically offensive. Just as providers can in “good faith” view material that’s sexually themed, too violent, or harassing as objectionable, so I expect that many can and do “in good faith” find to be “otherwise objectionable” material that they see as a dangerous hoax, or “fake news” more broadly, or racist, or pro-terrorist. One way of thinking about it is to ask yourself: Consider material that you find to be especially immoral or false and dangerous; all of us can imagine some. Would you “in good faith” view it as “objectionable”? I would think you would.

What wouldn’t be actions “taken in good faith”? The chief example is likely actions that are ostensibly aimed at “offensive material” but are really motivated by a desire to block material from competitors. Thus, in Enigma Software Group USA v. Malwarebytes, Inc., the Ninth Circuit reasoned:

Enigma alleges that Malwarebytes blocked Enigma’s programs for anticompetitive reasons, not because the programs’ content was objectionable within the meaning of § 230, and that § 230 does not provide immunity for anticompetitive conduct. Malwarebytes’s position is that, given the catchall, Malwarebytes has immunity regardless of any anticompetitive motives.

We cannot accept Malwarebytes’s position, as it appears contrary to CDA’s history and purpose. Congress expressly provided that the CDA aims “to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services” and to “remove disincentives for the development and utilization of blocking and filtering technologies.” Congress said it gave providers discretion to identify objectionable content in large part to protect competition, not suppress it. In other words, Congress wanted to encourage the development of filtration technologies, not to enable software developers to drive each other out of business.

The court didn’t talk about “good faith” as such, but its reasoning would apply here: Blocking material ostensibly because it’s offensive but really because it’s from your business rival might well be seen as being not in good faith. But blocking material that you really do think is offensive to many of your users (much like sexually themed or excessively violent or harassing material is offensive to many of your users) seems to be quite consistent with good faith.

I’m thus skeptical of the argument in President Trump’s “Preventing Online Censorship” draft Executive Order that,

Subsection 230 (c) (1) broadly states that no provider of an interactive computer service shall be treated as a publisher or speaker of content provided by another person. But  subsection 230(c) (2) qualifies that principle when the provider edits the content provided by others. Subparagraph (c)(2) specifically addresses protections from “civil liability” and clarifies that  a provider is protected from liability when it acts in “good faith” to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.” The provision does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform’s terms of service. When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. By making itself an editor of content outside the protections of subparagraph (c)(2)(A), such a provider forfeits any protection from being deemed a “publisher or speaker” under subsection 230(c)(1), which properly applies only to a provider that merely provides a platform for content supplied by others.

As I argued above, § 230(c)(2) doesn’t qualify the § 230(c)(1) grant of immunity from defamation liability (and similar claims)—subsection (2) deals with the separate question of immunity from liability for wrongful blocking or deletion, not with liability for material that remains unblocked and undeleted.

In particular, the “good faith” and “otherwise objectionable” language doesn’t apply to § 230(c)(1), which categorically provides that, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” period. (Literally, period.)

Removing or restricting access to content thus does not make a service provider a “publisher or speaker”; the whole point of § 230 was to allow service providers to retain immunity from claims that they are publishers or speakers, regardless of whether and why they “block[] and screen[] offensive material.”

Now this does leave the possibility of direct liability for “bad-faith” removal of material. A plaintiff would have to find an affirmative legal foundation for complaining that a private-company defendant has refused to let the plaintiff use the defendant’s facilities—perhaps as Enigma did with regard to false advertising law, or as someone might do with regard to some antitrust statute. The plaintiff would then have to show that the defendant’s action was not “taken in good faith to restrict access to or availability of material that the provider … considers to be … objectionable, whether or not such material is constitutionally protected.”

My sense is that it wouldn’t be enough to show that the defendant wasn’t entirely candid in explaining its reasoning. If I remove your post because I consider it lewd, but I lie to you and say that it’s because I thought it infringed someone’s copyright (maybe I don’t want to be seen as a prude), I’m still taking action in good faith to restrict access to material that I consider lewd; likewise as to, say, pro-terrorist material that I find “otherwise objectionable.” To find bad faith, there would have to be some reason why the provider wasn’t in good faith acting based on its considering material to be objectionable—perhaps, as Enigma suggests, evidence that the defendant was just trying to block a competitor. (I do think that a finding that the defendant breached a binding contract should be sufficient to avoid (c)(2), simply because § 230 immunity can be waived by contract the way other rights can be.)

But in any event, the enforcement mechanism for such alleged misconduct by service providers would have to be a lawsuit for wrongful blocking or removal of posts, based on the limited legal theories that prohibit such blocking or removal. It would not be a surrender of the service provider’s legal immunity for defamation, invasion of privacy, and the like based on posts that it didn’t remove.
