Why did Montgomery v. Louisiana even reference “a finding of fact regarding a child’s incorrigibility”?

Yesterday, the Supreme Court decided Jones v. Mississippi. The question presented was “Whether the Eighth Amendment requires the sentencing authority to make a finding that a juvenile is permanently incorrigible before imposing a sentence of life without parole.” The key word here is “incorrigible.” In other words, is the juvenile incapable of being reformed? The 5-4 Court answered this question no. Justice Kavanaugh wrote the majority opinion. He concluded that two recent precedents, Montgomery v. Louisiana (2016) and Miller v. Alabama (2012), do not require a finding of incorrigibility. Justice Sotomayor dissented, joined by Justices Breyer and Kagan. The dissenters argued that Montgomery and Miller did require a finding of incorrigibility. Justice Thomas concurred in the judgment and would have overruled Montgomery.

I am not interested in debating whether the majority or the dissent accurately read Montgomery. As a general matter, I presume any Justice Kennedy 5-4 decision is on the chopping block. The Court will either limit the precedent to its facts or stealthily overrule it. My goal here is different. Why did Montgomery even make a reference to incorrigibility? None of the parties briefed that issue. It came up only briefly during oral argument. How did that phrase become the basis for a follow-up Supreme Court case? Here, I think a time bomb was planted in 2012 that fizzled out.

Let’s start at the beginning. In 2012, the Court split 5-4 in Miller v. Alabama. Justice Kagan wrote the majority opinion. She concluded that the Eighth Amendment bars a sentence of mandatory life in prison without the possibility of parole for juvenile homicide offenders. The case only had one reference to “incorrigibility.” Justice Kagan wrote:

Roper and Graham emphasized that the distinctive attributes of youth diminish the penological justifications for imposing the harshest sentences on juvenile offenders, even when they commit terrible crimes. . . . Similarly, incapacitation could not support the life-without-parole sentence in Graham: Deciding that a “juvenile offender forever will be a danger to society” would require “mak[ing] a judgment that [he] is incorrigible”—but “‘incorrigibility is inconsistent with youth.’” 560 U. S., at ___ (slip op., at 22) (quoting Workman v. Commonwealth, 429 S. W. 2d 374, 378 (Ky. App. 1968)).

Graham held that the Eighth Amendment does not permit life without parole for nonhomicide crimes. Miller, which concerned mandatory LWOP for homicide crimes, cited Graham’s discussion of incorrigibility. But incorrigibility was in no way essential to the holding of Miller.

Fast-forward four years. Justice Kennedy wrote the majority opinion in Montgomery v. Louisiana (2016). This case held that the rule in Miller was retroactive. Once again, incorrigibility played no role in the litigation. I searched all of the briefs. None of the parties mentioned the word “incorrigible” or “incorrigibility.” Four amicus briefs briefly referenced incorrigibility: the ABA, the ACLU, the Equal Justice Initiative, and the Charles Hamilton Houston Institute. But none of these briefs suggested that a finding of incorrigibility should be required to sentence a juvenile defendant to LWOP. Rather, each citation merely quoted from Justice Kagan’s Miller opinion.

During oral argument, only one Justice referenced incorrigibility. You guessed it. It was Justice Kagan. She invoked this concept in an exchange with then-Louisiana SG (and now Judge) Kyle Duncan.

JUSTICE KAGAN: There — there is — there is a process component of Miller, no question about it, where the Court says what courts are supposed to look at is — are the characteristics of youth and are supposed to try to figure out whether these terrible crimes are functions, in part, of immaturity or — or — or not, whether you — you really are looking at an incorrigible defendant. So there is that process component.

The majority opinion in Montgomery had two references to incorrigibility. These references provided the basis for the question presented in Jones v. Mississippi. Here is the first passage:

Louisiana suggests that Miller cannot have made a constitutional distinction between children whose crimes reflect transient immaturity and those whose crimes reflect irreparable corruption because Miller did not require trial courts to make a finding of fact regarding a child’s incorrigibility. That this finding is not required, however, speaks only to the degree of procedure Miller mandated in order to implement its substantive guarantee. When a new substantive rule of constitutional law is established, this Court is careful to limit the scope of any attendant procedural requirement to avoid intruding more than necessary upon the States’ sovereign administration of their criminal justice systems.

This carefully crafted sentence is quite unclear. It can be read in one of two ways. First, Miller “did not require trial courts to make a finding of fact regarding a child’s incorrigibility.” Justice Kavanaugh adopted this reading. Second, the Miller Court did not require the sentencing court in that case to make this finding because of the unique posture of Miller–that is, “in order to implement its substantive guarantee.” But this requirement should be understood as part of the rule in Miller going forward. Justice Sotomayor adopted this latter reading. Again, I am not concerned with whether the majority or dissent is correct. My concern is different. Why was this sentence from Montgomery written in such a strange fashion?

Louisiana suggests that Miller cannot have made a constitutional distinction between children whose crimes reflect transient immaturity and those whose crimes reflect irreparable corruption because Miller did not require trial courts to make a finding of fact regarding a child’s incorrigibility.

Of course Miller imposed no such requirement. Yet this sentence was written. Why? My theory: to allow a future Court to adopt what would become Justice Sotomayor’s reading of Montgomery and Miller.

Montgomery made one other reference to incorrigibility:

Miller, it is true, did not bar a punishment for all juvenile offenders, as the Court did in Roper or Graham. Miller did bar life without parole, however, for all but the rarest of juvenile offenders, those whose crimes reflect permanent incorrigibility. For that reason, Miller is no less substantive than are Roper and Graham.

Again, I do not think this statement is a fair reading of Miller. Miller did reference Graham, but did not adopt incorrigibility as part of the test. Still, the sentence is written vaguely enough to be understood in different ways.

In his concurrence, Justice Thomas refers to Montgomery’s reading of Miller as “Janus-faced”:

In a similar Janus-faced demonstration, Montgomery reiterated Miller’s assurance that “trial courts [need not] make a finding of fact regarding a child’s incorrigibility,” yet decided that “Miller drew a line between children whose crimes reflect transient immaturity and those rare children whose crimes reflect irreparable corruption.” 577 U. S., at 209–211. These statements cannot be reconciled.

My tentative conclusion is that this “incorrigibility” language was a time bomb. Professor Rick Hasen explains that “[t]ime bombs exist when Justices include within a case subtle dicta or analysis not necessary to decide it with an eye toward influencing how the Court will decide a future case.”

Let’s roll the clock back a decade. At the time, those seeking to abolish juvenile LWOP had a strategy. Indeed, the progression of the LWOP cases was predictable. First, hold that LWOP cannot be imposed for non-homicide juvenile crimes (Graham in 2010). Second, hold that mandatory LWOP cannot be imposed for juvenile crimes at all (Miller in 2012). Third, hold that the Miller rule is retroactive (Montgomery in 2016). Fourth, make it even harder for courts to impose LWOP as a matter of discretion. The precise contours of Stage 4 were negotiable. The abolitionists were predictable–at least while Justice Kennedy was the fifth vote. Way back in 2012, it would have been clear what Stage 4 would be. We now know that the progressives would never get to Stage 4 because the composition of the Court changed. But that fact wasn’t known in January 2016 when Montgomery was decided.

Flash back to Miller (2012). I think the language about incorrigibility was inserted as a time bomb for Stage 4. And in Montgomery (2016), a very strange sentence was written about incorrigibility, even though none of the parties briefed the issue. Again, that sentence was meant to set up the Stage 4 end game. Alas, that end game was snapped out of existence when Justice Kennedy retired. Or, to quote Justice Sotomayor, “half of [Miller’s] reasoning” was blipped. Thanos. Now, the new Roberts Court can simply adopt the reading of that sentence that is consistent with precedent.

Who planted the time bomb? RBG was notorious for burying those devices, as in CLS v. Martinez. But here my money is on Justice Kagan. She wrote Miller. She brought up incorrigibility during oral argument in Montgomery. It is entirely plausible that during the drafting process, Justice Kagan nudged Justice Kennedy to include the incorrigibility analysis as a setup for the end game. Justice Brennan was the master of time bombs. And I often compare Kagan to Brennan. When I read that strangely crafted sentence, I see Justice Brennan wrangling over the language with an eye to the future. Alas, the Montgomery time bomb fizzled out when the fifth finger vanished.

Of course, this entire post is mere speculation. Fun speculation. But speculation.

from Latest – Reason.com https://ift.tt/3xhAifb
via IFTTT

HBO Documents the Tragic Tale of Lucy, the Chimp Raised Like a Human



Lucy the Human Chimp. Available Thursday, April 29, on HBO Max.

A couple of weeks ago on the distant shores of satellite TV, I stumbled upon an old favorite of mine, Congo, a 1995 film about a safari in search of King Solomon’s fabled diamond mines that runs afoul of a tribe of killer chimpanzees. (Back in the day, it scared a visiting granddaughter catatonic and I had my first full night’s sleep in weeks.) The movie’s MacGuffin was a sweet, lab-raised chimp who could use sign language to talk through a computer doo-hickey—and who knew the mine’s location.

I’m guessing the people behind Congo knew something about Lucy, a very real chimp who had her 15 minutes of fame back in the 1970s—book, Life magazine story, episode of Wild Kingdom, everything but her own reality show. Lucy, raised by a (human) psychologist almost from birth, also learned sign language, not to mention gin-and-tonics—she squeezed the lime with her teeth—along with Playgirl magazine and all the other stuff that makes humanity so special. (Though when Lucy was taken to a drive-in showing of Planet of the Apes, she was merely bored.)

But when she turned rambunctious upon reaching puberty, Lucy was shipped to a chimp reeducation camp in the Gambian jungle, where she had a difficult time of it and was eventually killed by poachers.

The poignant and sometimes disturbing HBO Max documentary Lucy the Human Chimp updates Lucy’s tale, and even more so that of Janis Carter, the human companion who accompanied her back to Gambia for a three-week acclimation trip that stretched into forever. The program inevitably raises some issues about human anthropomorphism and scientific indifference to animal welfare. But mostly it’s a meditation on the difficulties attendant upon falling in love across species lines, something already touchingly familiar to anybody who saw Willard.

Carter was a University of Oklahoma grad student hired to feed Lucy and clean out her cage. Carter was warned not to attempt any BFF stuff; Lucy was already showing some aggressive tendencies, jumping through closed windows and even breaking out to loot the refrigerator of the neighbors next door. Sticking a finger into her cage was less likely to be regarded as a friendly gesture than as an offer of a fast-food sample.

Moreover, there was a daunting intellectual gap between the two; Lucy knew 120 words of English, apparently well beyond the vocabulary of the average University of Oklahoma grad student. When Lucy grew frustrated by the slow speed of Carter’s signing, she repeatedly flashed the signal for “stupid.” Nonetheless, their long afternoons together eventually developed into a friendship. Carter put her back against the cage and allowed Lucy to groom her for vermin; then they reversed positions.

Lucy had been snatched from her mother, a performer in a ramshackle Florida roadside zoo, when she was just two days old and raised as if she were human, a calculated attempt to sort out nature from nurture in chimp behavior. At home, she slept on a Beautyrest mattress, ate oatmeal with raisins for breakfast, then washed it down with coffee and Tang. That made her relocation into a jungle camp nearly impossible. There she lost weight and great patches of her hair fell out.

Carter made the flight to Gambia just to help with a three-week transition, but her alarm at Lucy’s condition made her postpone return to the United States for two weeks, then another three months. In reflective moments, when she wasn’t eating grimy leaves and twigs from the jungle floor and grunting in pretend-pleasure to convince Lucy that this stuff was food, Carter wondered what she was doing, missing classes and rent checks back home. “I had a dog,” she mused. “I had a boyfriend.” As her funds ran low, she moved out of a hotel room into a jungle treehouse. In an even more profound inversion, when she took Lucy and three other rescue chimps out of the camp and onto a jungle island, Carter slept in a cage (“I don’t think anyone told me about the leopards”) while the chimps spent the night on the roof—unleashing a rain of excreta onto her through the bars whenever they heard a scary noise, which was practically all night long.

Years passed. Lucy was not acting much more like a chimp, but Carter was. She let her correspondence with the outside world lapse and spoke little English other than frequent and futile repetitions of “Lucy—food—eat.” She lost all track of time aside from the seasons and never even thought about returning to the outside world. “I don’t know if I ever became a chimp, so to speak,” she says in an interview taped for the documentary. But she sure takes a long time to answer.

After six years, the chimps themselves let Carter know—unambiguously and heartbreakingly—that it was time for her to leave. Even then, she just moved across the river and continued watching them from afar. Today, more than four decades later, her hair slack and gray, she still sleeps in her old cage on the island, tending a steady stream of rescue animals. Nature vs. nurture remains an open question for chimps, and perhaps for their human minders as well.

from Latest – Reason.com https://ift.tt/3tM15Ot
via IFTTT

Did Politics Ruin the Oscars?



If nothing else, the pandemic has shown us the weaknesses of our cultural and political institutions. 

Case in point: this year’s Oscars. They have problems. Big problems.

You can see those problems in the public’s relationship to the movies up for best picture, the ceremony’s top award. 

For one thing, it’s not clear anyone has seen any of the movies nominated. For another, it’s not even clear that anyone has heard of them. OK, fine, I don’t actually mean anyone. I’m taking a little bit of dramatic license here. (I myself have somehow seen all eight of them.) But this is the biggest event on the Hollywood awards calendar, the biggest night of the year for Tinseltown. It’s the night when America celebrates the movies. Yet this year, almost no one is paying attention. 

Polling data indicate shockingly low levels of awareness about this year’s biggest nominees. Obviously, the pandemic is a factor. Most theaters in the United States were closed for the better part of 2020, and many big releases were delayed. As a result, there wasn’t much marketing, either. The Hollywood hype machine effectively shut down. 

Even still, the numbers are dismal. Industry research firm Guts + Data surveyed 1,500 active entertainment consumers—in theory, folks who are plugged in and interested—and found fewer than half of those surveyed were aware of any of the nominees up for the big prize. These aren’t necessarily bad movies: I’m quite enamored with the searching openness of Nomadland, this year’s best picture front-runner, and the anxiety and empathy on display in The Father. Both Promising Young Woman and Mank are stylish and pointed. But will the quality of these films matter if no one tunes into this year’s ceremony, which airs Sunday night? (Frankly, it’s not clear how many people even know that.)

There is a real possibility that this year’s show will be a historic bomb. Awards show ratings, from the Grammys to the Golden Globes, have taken a nosedive this year, for obvious and understandable reasons: It’s hard to gin up much enthusiasm for glitz and glamour after a year when everyone has been stuck inside, the offerings have been slim and strange, and the usual glitz and glamour hardly shine through when most of the attendees are attending from their living rooms, via video conference, audio headaches included. Who wants another is-this-thing-muted? Zoom call in his life? 

Thankfully, this year's Oscars won't be a Sunday-night conference call of the stars. Director Steven Soderbergh apparently had it written into his contract that there would be no acceptance speeches from home. Sorry, Room Rater!

Even still, it will be a strange affair. The stars won't appear from their living rooms, but there will be no live studio audience, and speeches will be delivered from a handful of satellite studios scattered around the globe. Instead of a Zoom call, it will be a cable-news-style split-screen event, with everyone siloed in their own professionally produced boxes. The movies are supposed to bring people together; in the pandemic year, they're deliberately keeping people apart. 

In the process, the Academy Awards may separate themselves from viewers as well: If ratings for other recent awards ceremonies are any guide, Sunday’s broadcast could easily lose half its audience versus last year. 

Oscar viewership, of course, has been slowly declining for the better part of two decades. The last peak years were 1998 and 2004, when Titanic and The Lord of the Rings: The Return of the King, respectively, won best picture. Whatever else you think of those movies (James Cameron's second-worst film and the third-best Lord of the Rings installment), they were huge, huge hits. 

This suggests a fairly obvious correlation: People tune into the ceremony when it’s likely to honor movies they’ve seen. And, to reiterate my earlier point, no one—even consumers who claim to be interested in Hollywood’s output—has even heard of the major nominees this year: According to the Guts + Data poll, just 18 percent of respondents were aware of Mank, this year’s most nominated film. 

It's funny I should mention that movie, since it's about how Hollywood is smug, cynical, self-absorbed, and completely deluded about its own political relevance and purity. Of all the explanations for why the Oscars have failed to capture the public imagination in recent decades, this is the one that has received the most attention. 

And not without reason either. The films in contention have, by at least one count, become more political in recent years, and it’s probably true that the discussion around those films has become more political, at least for the Extremely Online. Movies have become grist for the culture-war-take mill; I’ve ground out a few in my time. 

And then there are the speeches themselves, about which The New York Times recently had this to say: “Increasingly, the ceremonies are less about entertainment honors and more about progressive politics, which inevitably annoys those in the audience who disagree. One recent producer of the Oscars, who spoke on the condition of anonymity to discuss confidential metrics, said minute-by-minute post-show ratings analysis indicated that ‘vast swaths’ of people turned off their televisions when celebrities started to opine on politics.”

Fair enough. People love to hate celebrity politics; at this point, hating celebrity politics may well be more popular than the Oscars.

But there is something else at work here too. The Oscars have always served—or at least aimed to serve—as a kind of institutional stamp of approval, a declaration, by united Hollywood decree, that this was the best movie, the best actress, the best elaborate Victorian ball gown (I’m sorry, the best costume), the most and loudest sound—and perhaps even the best sound as well. 

The definitiveness of these proclamations, handed down from on high by the mysterious people who make up the Academy, has always been an exaggeration, at best, and more accurately a Hollywood fiction foisted upon the world.

I don’t mean to say that the awards and nominations never went to quality films, but frequently they went to awful or, worse, forgettable material. Green Book is a forgettable feel-good film. Crash, which won best picture in 2006, is no one’s idea of a classic, unless you mean the 1996 David Cronenberg sex-and-auto-wrecks film that wasn’t nominated. Without Googling, does anyone even remember what Lion, nominated for best picture just four years ago, was about? 

Too many Oscar nominees and winners are not only obviously not the year’s best picture, they are obviously not the year’s third-best picture, or even its seventeenth. Add to this the fact that the awards increasingly go to movies few have seen (or—and I must repeat myself—even heard of), and in recent years that fiction has begun to sag. In this pandemic year, in which major films that people might have heard of were largely delayed and theaters were mostly dark, it has collapsed entirely. 

The real problem with the Oscars, then, is not one the pandemic caused but one it revealed: The Academy pretended to know what was best, period. The whole idea was that it knew what was best for everyone. That was a pretense it could keep up when Hollywood was at least trying, on a regular basis, to make movies that had something like universal appeal, and weren’t about superheroes — films, for example, like James Cameron’s second-worst movie and the third-best Lord of the Rings film. Which, again, may not be great movies. But they were movies that practically everyone had heard of, that didn’t feel like sermons, and that everyone—or an awful lot of people—could relate to, somehow or another. 

The third-act complication here is not so much politics, per se, as relevance. The Oscars are an institution, a kind of (privately held) public trust, and their success depends on providing some sort of value to ordinary people outside the bubble of diehards and professional moviemakers and viewers. To succeed, they must connect with people who have a choice to tune out, to watch something else, to stream Disney+ or TikTok or chat on Clubhouse or Discord or record themselves killing 553 people with a fish in the video game Hitman 2. Netflix offers a homepage of personally tailored recommendations every day; the Oscars offer a handful of undifferentiated recommendations once a year. 

What this year’s dismal awareness numbers suggest, then, is not only that Hollywood failed in an unusual year to market and P.R.-blitz viewers into some vague sense of what was up for the big awards, but that Hollywood has failed for years and years to supply enough value that potential viewers might seek out those films themselves. Political speeches that turn off viewers are merely an outgrowth of the underlying malady: the Academy Awards have become irrelevant. And the one thing a cultural institution cannot lose is its relevance. 

Movies are supposed to be a distraction from the turmoils of the world, a brief escape from the banality and difficulty of daily life. The whole point of the Oscars, and maybe the whole point of Hollywood, is to try to get people to pay attention—to the movies, to the celebrities, to the filmmakers, to the outfits, and even to the speeches and the causes. But this year, when a lot of people could really have used some form of escapism, they couldn’t even do that. The institution failed. 

from Latest – Reason.com https://ift.tt/3tPwAY0
via IFTTT

The Plutocrats, the People, and the Globalization of World Soccer

As the Volokh Conspiracy’s Special Soccer Correspondent (self-appointed), I spent a good part of the day this past Monday preparing an essay on the proposal, put forward the day before by twelve of Europe’s largest and best-known soccer clubs (including such global giants as Real Madrid, Barcelona, Liverpool, Juventus, and Manchester United), to form a new “European Super League.” I hoped to explain (a) why this would be a true catastrophe for the world of global soccer (and its many, many fans), and (b) why you might want to pay attention to that, even if you care not a whit for soccer (European or otherwise).

As for (a), the bottom line was that, given the rather complicated structure of Europe’s professional soccer leagues, the Super League would drain much of the competitive juice out of many of the thousands of games played across Europe, at all levels of the soccer hierarchy, week in and week out.  A huge swath of games would involve teams with less at stake, less to play for, and less of an incentive to play well, and, as a consequence, much of what makes those games so engaging for so many people would disappear (along with much of the money that sustains the whole system).  All so that a few already-fabulously wealthy clubs could become even more fabulously wealthy.

As for why you might want to pay attention even if you have no interest in anything that takes place on a soccer field … Well, for one thing, while it may not be a matter of life-and-death, many hundreds of millions of people derive considerable pleasure from the global soccer enterprise; I can’t think of many other global activities (popular music?) that deliver more HHUs—Human Happiness Units—than the aggregate of the various institutions and organizations that make up the world of professional soccer. So anything that rocks the foundations of the system through which people experience these activities is an important international development—not on par, surely, with the Covid-19 pandemic, or the fate of Syrian refugees, or climate change, but perhaps just one tier below.

Beyond that, world soccer is also a wonderful case study of how the global telecommunications revolution of the past several decades has created immense new winner-take-all markets, concentrating vast amounts of money in the hands of a small number of superstars, and of the stresses this creates for existing institutional and organizational structures.

Events, though, outpaced me. On Tuesday—a mere 48 hours after having announced the formation of the Super League to great fanfare—the whole thing collapsed, with most of the break-away clubs backing out of the deal.  As one of those clubs (Arsenal) succinctly put it on its official website:  “We made a mistake, and we apologize for it.”

The breathtaking speed of the Super League’s collapse turns out to be an even more interesting story than the floating of the proposal itself. This was, among other things, a big international commercial transaction; tens of billions of dollars were on the line, the lawyers and investment bankers had spent months (and surely millions of dollars) preparing the organizational documents, obtaining financial commitments (reportedly worth $4.5 billion) from broadcasters, organizing for an initial round of public financing, hiring administrative staff, designing a logo, hiring a London PR firm, … And then: POOF!  It was all dismantled. After two days!  It’s as though Toyota and Ford announced that they were merging, and then, two days later, dropped the whole idea.

So what happened during those two days? What happened was that a lot of people—ordinary fans (many of whom, in Europe, are organized into cohesive and vocal supporters' organizations of many years' standing), sports commentators in the media, players and ex-players (some of iconic stature), coaches and ex-coaches (ditto), including many who are themselves affiliated with the Super League clubs who were supposedly the beneficiaries of the new plan—felt as I did about the catastrophic consequences of the proposal, and they took to the streets, to the airwaves, and to the Internet, in furious and ferocious protest.

I highly recommend the reporting by Rory Smith and Tariq Panja in the NY Times, and by Sam Wallace in the London Daily Telegraph to anyone seeking more detailed information about the entire affair.  See, e.g., “Europe’s New Super League, Explained” [the beginning] and “How the Super League Fell Apart” and “Europe’s Elite Suffer Sport’s Most Astounding Humiliation” [the end].

The owners of the break-away clubs, apparently, were completely unprepared for the reaction; with the exception of a single Sunday night appearance on late-night Spanish TV by Real Madrid Chairman Florentino Perez, not one official or spokesperson for the break-away clubs offered a single word in defense of the new plan—not on television, not on Twitter, not on any official club website. They had simply not seen it coming, and when it came they instantly backed down and abandoned ship.

It was a truly ignominious and humiliating retreat—the 2021 version of 1985’s “New Coke” debacle, but with a great deal more at stake, and at warp speed.

You have to ask: Really?! Could they possibly not have seen it coming? What in heaven’s name did they think the reaction would be?! You announce a plan to suck more money out of the system for your own private benefit at the expense, and to the detriment, of everybody else with a stake in the system—and you didn’t think lots of people would object to that?  Did it really not occur to any of the geniuses in the boardrooms at Liverpool FC or Barcelona FC to present an outline of the plan, beforehand, to their players, or to their coaches, or to their supporters’ groups, or their season-ticket holders, just to get a sense of what the reaction might be so that they would be prepared with a response?

Apparently—mind-bogglingly—it did not.

This may be nothing more than proof of the maxim: Just because you’re rich, doesn’t mean that you are smart.

But it might also be more interesting than that. At the risk of overstatement, it was the People vs. the Plutocrats, and the People, for once, emerged victorious. A harbinger of, perhaps, a shift in the balance of power?  Time will tell.

 

from Latest – Reason.com https://ift.tt/3sP7zLk
via IFTTT

Senate Passes Anti-Asian Hate Crimes Bill That Doesn’t Prohibit Discrimination in College Admissions

On Thursday, the Senate overwhelmingly passed a bill that purportedly combats anti-Asian hate. The vote was 94–1.

The bill would create a new position within the Justice Department to review anti-Asian hate crimes related to the COVID-19 pandemic. It also requires the Department of Health and Human Services to issue guidance on preventing anti-Asian discrimination.

“There has been a dramatic increase in hate crimes and violence against Asian-Americans and Pacific Islanders,” the bill asserts. (It explicitly names the Atlanta spa killings as an example of this, though it’s not actually clear the shooter was motivated by anti-Asian animus.)

The lone dissenter on the vote was Sen. Josh Hawley (R–Mo.).

“As a former prosecutor, my view is it’s dangerous to simply give the federal government open-ended authority to define a whole new class of federal hate crime incidents,” said Hawley in a statement.

He has a point, though this bill is not particularly vast or sweeping. The stronger argument against the bill is that it does nothing to address one of the most obvious—and odious—forms of anti-Asian discrimination: college admissions.

Many elite colleges, public universities, and even selective high schools explicitly discriminate against Asian applicants in order to artificially tinker with the racial makeup of the campus population. This means that Asian students whose grades and test scores would have gained them admission had they been white, black, or Hispanic are routinely turned away. Contrary to popular belief, the biggest beneficiaries of these schemes are often white students.

Courts have generally held that race-based admissions do not violate civil rights law if they are very narrowly tailored. But Congress could explicitly require educational institutions that receive federal dollars to cease discriminating against Asian applicants. (They could even call it an antiracist initiative.)

Sen. Ted Cruz (R–Texas) proposed an amendment to the bill along these lines, but it was defeated in a close vote: 48–49. Thus the version that passed the Senate aims to tackle anti-Asian hatred, but is silent on perhaps the most common and systemic form of anti-Asian bigotry in the U.S.


FREE MINDS

In Quillette, Jonathan Kay wonders whether “long COVID-19” is the new gender dysphoria:

In the case of COVID-19, much attention has been focused on conspiracy theorists and lay quacks who claim the disease is a fraud. But there is also a pseudo-scientific movement that seeks to present its adherents as sufferers of a condition they call “Long COVID.”

As McMaster University psychiatrist Jeremy Devine recently wrote in The Wall Street Journal, some COVID-19 patients really do experience long-term effects that linger after the infection has left their body. But he adds that “such symptoms can also be psychologically generated or caused by a physical illness unrelated to the prior infection.” Moreover, he notes that a survey produced by Body Politic Covid-19 Support Group, a prominent driver of the Long COVID idea, indicates that “many of the survey respondents who attributed their symptoms to the aftermath of a COVID-19 infection likely never had the virus in the first place. Of those who self-identified as having persistent symptoms attributed to COVID and responded to the first survey, not even a quarter had tested positive for the virus. Nearly half (47.8%) never had testing and 27.5% tested negative for COVID-19. Body Politic publicized the results of a larger, second survey in December 2020. Of the 3,762 respondents, a mere 600, or 15.9%, had tested positive for the virus at any time.”…

In his WSJ article, Devine reports on another interesting connection: Body Politic, which had organized the surveys as a means to promote the idea of Long COVID as a real medical phenomenon, describes itself as “a queer feminist wellness collective.” The group was created in 2018, according to its website, “to create space for inclusivity, accessibility, and crucial discussions about the very real connection between wellness, politics, and personal identity.” Given this, readers may not be surprised to learn that the group’s programming “has been highly successful with a millennial and Gen Z audience, largely comprised of women and LGBTQ+ identifying folks.”

More here.


FREE MARKETS

President Joe Biden wants to massively raise taxes to pay for education and child care. He has proposed a capital gains tax of 39.6 percent, which is significantly higher than the current rate of 20 percent. According to CNBC:

The capital gains tax is especially important to Wall Street since it dictates how large a chunk of an equity sale is collected by the federal government. The White House declined to comment.

The proposal would make good on Biden’s campaign promise to require America’s wealthiest households to contribute more as a percentage of their income. This plan would bring the capital gains tax rate and the top individual income tax rate, currently at 37%, to near parity.

The markets were not thrilled about the news:

U.S. equity markets turned sharply lower Thursday afternoon following a report that the Biden administration is mulling increasing the capital gains tax.

The Dow Jones Industrial Average fell over 321 points, or 0.94%, while the Nasdaq Composite and S&P 500 declined 0.92% and 0.94%, respectively.

The details will be formally unveiled next week as part of Biden’s American Family Plan, which is expected to cost more than $1 trillion. Biden will have to sell the bill to crucial Democratic swing voters like Sens. Joe Manchin (W. Va.) and Kyrsten Sinema (Ariz.), and it’s possible his doubling of the capital gains tax won’t meet with their approval.


QUICK HITS

• Scientists made progress with a malaria vaccine.

• After getting a COVID-19 vaccine, women are selling their breast milk online.

• The Marshall Project reports that Alaska’s foster care agency has been stealing money from the kids under its purview.

• Gender reveal parties are a national menace.

• Drama within the Democratic Socialists of America.

from Latest – Reason.com https://ift.tt/3tNsXBQ
via IFTTT
