America’s Second Amendment Sanctuary Movement Is Alive and Well

Counties in Wisconsin, Florida, Virginia, Arizona, and Texas became part of a growing “Second Amendment sanctuary” movement this month.

The phrase “Second Amendment sanctuary” is an umbrella term for a jurisdiction that passes a resolution declaring that restrictive gun control laws passed by another legislative body are unconstitutional and will not be enforced there. The concept is an adaptation of the immigration “sanctuary city” movement, in which some cities and counties (and now whole states) generally decline to ask residents about their immigration status or to assist the federal government in enforcing immigration laws. The resolutions can vary, but generally, Second Amendment sanctuaries refuse to dedicate resources to enforcing things like “red flag” laws and bans on certain weapons.

“This draws a line in the sand. It doesn’t mince words. And I hope it sends a message to what can be described as the authoritarian control freaks,” said Lake County, Fla., Commissioner Josh Blake (R) when Lake County leaders voted on Nov. 5 to become Florida’s first Second Amendment sanctuary. “[They] see it as their jobs to forcibly disarm their fellow citizens, and with all due respect, that won’t be happening in Lake County,” Blake added, referencing former Democratic presidential candidate Beto O’Rourke’s infamous, “Hell yes, we’re going to take your AR-15, your AK-47,” line.

Second Amendment sanctuaries are still a relatively new phenomenon, but the idea has caught on quickly within the gun community. The first Second Amendment sanctuary county-wide movement emerged last year after the election of Illinois Gov. J.B. Pritzker (D) allowed Democrats to propose sweeping gun reform in the state legislature without fear of their bills getting vetoed. Some of the proposed measures included raising the state’s minimum age for firearm ownership from 18 to 21 and prohibiting civilian ownership of certain types of weapons and body armor. These measures prompted Effingham County in southern Illinois to pass a resolution declaring all of the Democrats’ proposals unconstitutional. Thus was the Second Amendment sanctuary movement born.

Some solidly red states, like Alaska, Idaho, Wyoming, and Kansas, had already passed state-wide measures declaring that the states will not comply with any federal gun laws that they view as unconstitutional before the movement officially began. For example, Idaho’s 2009 resolution specifically opposes the establishment of a federal gun licensing procedure.

But, since Effingham County’s declaration, the movement has gained support in local communities across 19 states. In more liberal states, Second Amendment sanctuary legislation is being passed at the county level, and even the city level, such as in Needles, Calif.

Many proponents of the Second Amendment sanctuary movement cite a cultural disconnect between their state’s cities and their own rural communities in deciding to join the movement. When Florence County became Wisconsin’s first sanctuary county on Nov. 12, Florence County Sheriff Dan Miller told the Milwaukee Journal Sentinel that the measure, “sends a message that all of Wisconsin is not exactly the same. We have some different beliefs up north. We tend to be a little more conservative. We like our guns. We believe in God.” Similarly, Virginia gained five new sanctuary counties in the southern, more rural, parts of the state when Democrats took control of the state’s legislature earlier this month and promptly began proposing gun control legislation.

It is important to note that these resolutions do not carry the force of law. As Virginia House of Delegates member Ken Plum (D–Reston) points out, “[t]he notion that you can have a locality void a state law by declaring yourself a sanctuary simply is not going to hold up in court.” But laws, at the end of the day, need somebody to enforce them, and many sheriffs and police officers in counties where Second Amendment sanctuary resolutions have been passed support their communities’ decisions, or at the bare minimum, aren’t willing to stringently enforce the legislatures’ laws. Again, immigration sanctuary cities are a good parallel for understanding this phenomenon, as are state decisions to legalize marijuana despite federal prohibition. State and federal laws cannot be effectively enforced by only state and federal law enforcers. Cops and local sheriffs who make laws they don’t like their lowest priority aren’t doing anything illegal, but their decision effectively nullifies the laws in question for all but the boldest and biggest law-breakers.

While some sheriffs of Second Amendment sanctuaries, like Weld County, Colorado, Sheriff Steve Reams, deny supporting the decision of gun owners to disobey certain state gun laws, they also admit that enforcing these types of laws isn’t among their top priorities. Other sheriffs, like Lake County, Florida, Sheriff Peyton Grinnell, “fully” support their county’s decision, suggesting that, at least for now, Second Amendment sanctuaries might be an effective way to resist gun control legislation, and aren’t going anywhere anytime soon.


The First Amendment and Government Property: Free Speech Rules (Episode 8)


Say the government is handing out money, or access to government property, or some other benefit. Can it exclude certain kinds of speech, or certain kinds of speakers?

It’s complicated, but here are the five rules of the First Amendment and government property:

Rule 1: A few forms of government property are treated as so-called “traditional public forums.” There, the government generally can’t exclude speech based on its content.

The classic examples are sidewalks and parks, as well as streets used for parades. Unless speech falls within one of the narrow First Amendment exceptions (such as true threats of crime, or face-to-face insults that tend to provoke a fight), the government can’t restrict it. Such places are technically government property; but that gives the government no extra authority to control such speech.

The postal system is analogous. At least since the mid-1940s, the Supreme Court has held that the government can’t exclude certain kinds of content from the mail. To quote Justice Holmes in an early case, “The United States may give up the Post Office when it sees fit,” but until then “the use of the mails is almost as much a part of free speech as the right to use our tongues.”

Rule 2: Sometimes, the government deliberately opens up property or funds in order to promote a wide diversity of private speech, using objective criteria. Many public schools, for instance, let student groups use classrooms that aren’t otherwise being used. Public libraries often offer rooms for meetings of community groups. Public universities might offer free e-mail accounts or web hosting to all students, and sometimes public universities offer money to student groups to publish newspapers or invite speakers.

These are called “limited public forums,” and the government can limit them to particular speakers (for instance, just students), or to particular kinds of speech (for instance, just speech related to the university curriculum). It can also have reasonable, viewpoint-neutral exclusions (for instance, saying that certain benefits or property can’t be used for promoting or opposing candidates for public office). But it can’t impose viewpoint-based criteria—it can’t, for instance, let all groups use a meeting room in a library but exclude racist groups.

Rule 3: A lot of government property is open to the public, but not for speech. Airports, for instance, are set up to promote transportation, not speaking; but people there will wear T-shirts with messages on them, talk to friends, maybe even approach strangers with leaflets. In these so-called “nonpublic forums,” the rule is much like in limited public forums: Speech restrictions are allowed, but must be reasonable and viewpoint-neutral.

Rule 4:  Some government property is set up for the government itself to speak; and there, the government can pick and choose what viewpoints it conveys or endorses. The walls of most public buildings are an example; the government can choose what art to put up there, and it might refuse to display art that conveys ideas that it dislikes.

Likewise, when the government spends money to promote its own messages, it doesn’t have to promote rival messages. It can have a National Endowment for Democracy without having to fund a National Endowment for Communism. It can put out ads supporting racial equality, without paying for ads supporting racism.

Sometimes there are close cases; for instance, when Texas authorized many kinds of license plate designs, but excluded Confederate flag designs, the Supreme Court split 5-to-4. The majority thought license plate designs were government speech, and the government could pick and choose which ones to allow, even when the government accepted dozens of designs requested by private groups. The dissent thought they were a limited public forum, in which viewpoint discrimination was forbidden because the government was supporting so many different (and often contradictory) forms of speech. But while there are close cases, many are pretty clear: The government often clearly promotes views it chose itself, and sometimes clearly promotes a wide range of private views.

Rule 5: Similar principles likely apply to government benefit programs, and not just to the provision of real estate or of money. Charitable tax exemptions, for instance, are likely a form of limited public forum: The government can discriminate based on content (you can’t use tax-deductible donations to support or oppose candidates for office), but not based on viewpoint.

Likewise, the Supreme Court held that the government can’t deny full trademark protection to trademarks that are seen as “disparaging,” “scandalous,” “immoral,” or racist. Such restrictions, the Court said, were impermissibly viewpoint-based.

Of course, private property owners aren’t bound by the First Amendment, whether they’re distributing money or access to real estate. And, as we have seen, the government as property owner isn’t bound by the First Amendment in quite the same way as it is when deciding whether to jail or fine people for their speech. But, except when it comes to the government’s own speech, viewpoint discrimination is generally forbidden even on government property.

So to sum up:

The government generally can’t exclude speech based on its content in “traditional public forums.”

The government can deliberately open up “limited public forums” that are restricted to particular speakers or kinds of speech, but it can’t impose viewpoint-based criteria.

In “nonpublic forums,” speech restrictions are allowed, but must be reasonable and viewpoint-neutral.

For government property set up for the government itself to speak, the government can pick and choose what viewpoints it conveys or endorses.

Similar principles likely apply to government benefit programs, and not just the use of physical property.

Written by Eugene Volokh, who is a First Amendment law professor at UCLA.
Produced and edited by Austin Bragg, who is not.

This is the eighth episode of Free Speech Rules, a video series on free speech and the law. Volokh is the co-founder of The Volokh Conspiracy, a blog hosted at Reason.com.

This is not legal advice.

If this were legal advice, it would be followed by a bill.

Please use responsibly.

Music: “Lobby Time,” by Kevin MacLeod (Incompetech.com) Licensed under Creative Commons: By Attribution 3.0 License http://creativecommons.org/licenses/b


A Utah Woman Faces the Sex Offender Registry for Going Topless in Front of Her Stepkids

A Utah stepmother might land on the sex offender registry for baring her breasts in her own home.

It’s not clear when exactly the incident took place. (One recollection puts it in the fall of 2016, while another has it in late 2017 or early 2018.) According to the stepmother, Tilli Buchanan of West Valley City, it happened after she and her husband installed insulation in their garage. Upon finishing, the couple returned to the main part of the house and removed their itchy clothes. At that point Buchanan’s stepchildren walked downstairs and saw the couple shirtless. To ease their embarrassment, Buchanan then attempted to explain that her being topless was not inherently sexual and compared it to them seeing their father’s bare chest.

Prosecutors tell a different story. They say Buchanan purposefully took her shirt off in front of her stepchildren while under the influence of alcohol, then told her husband that she would only put her shirt back on if she saw his penis.

Just how exactly did law enforcement become aware of the private moment in the first place? The Salt Lake Tribune reports that the mother of Buchanan’s stepchildren heard about the incident and was “alarmed” enough to report it to the Division of Child and Family Services. Earlier this year, a police detective called Buchanan to ask about it.

Buchanan now faces three misdemeanor charges for lewdness involving a child, which Utah statute 76-9-702.5 defines as exposing “genitals, the female breast below the top of the areola, the buttocks, the anus, or the pubic area.” The statute applies in public spaces, and in private spaces “under circumstances the person should know will likely cause affront or alarm or with the intent to arouse or gratify the sexual desire of the actor or the child.” If convicted, Buchanan will be placed on the sex offender registry for 10 years.

Buchanan’s husband, who by all accounts was at the same level of undress, has escaped legal consequences. The American Civil Liberties Union of Utah, which appeared in court to support Buchanan this week, argues that this disparity in the lewdness statute violates the Constitution’s Equal Protection Clause. “We want people to be treated equally,” says Leah Farrell, a senior staff attorney with the group. “When the state and criminal justice system are involved, we have to scrutinize our personal feelings about what morality should be and what is simply criminalizing someone’s body because of their gender.”

The case is expected to receive a ruling within the next two months.


3-Year-Old Dies in Freak Escalator Accident, Police Charge Mom With Child Abuse

Jiterria Lightner and her three kids, ages 4, 3, and 2, were at the airport in Charlotte, North Carolina, on their way home from a trip to Florida. While Lightner sat less than 15 feet away, trying to arrange a ride, her kids were playing in a little space between the escalator and the stairs.

In the freakiest of freak accidents, 3-year-old Jaiden took hold of the railing, was carried up the escalator, and then fell to his death, reports WCNC.

The tragedy was originally deemed an accident. But the Charlotte-Mecklenburg Police Department decided this week to take out three misdemeanor warrants against the mom, charging her with child abuse. If she is found guilty, she could face a maximum of 150 days in jail.

Of course, if this was really about a mom not supervising her kids, how does taking her away from them for 150 days make things better? Obviously, it doesn’t. That’s why I don’t think it’s really about a lack of supervision. I think it’s about fear. The fact that this truly could happen to any of us is so scary, we can’t deal with it. So instead, we—or at least the Charlotte-Mecklenburg police—pretend that no, this only happens to terrible parents who are criminally abusive. Not to saintly you and me.

It echoes the way we used to blame rape victims: She was asking for it by wearing that outfit. I would never be raped because I don’t ask for it. Our fear made us twist the victim into the perp, or at least the accomplice.

Our fear that something this horrific could happen out of the blue (at the end of a vacation, even) seems to turn a normal person in a normal circumstance into the depraved author of her own grief. If she’s a terrible mom then this tragedy serves her right and the universe still seems fair. We can breathe a sigh of relief.

Except we can’t. Not when the authorities can pretend bad things only happen to bad people.

When that is society’s assumption, parents feel compelled to helicopter. They know they cannot count on sympathy and support if, God forbid, an unpredictable tragedy occurs. Remember the mom whose child fell into the gorilla enclosure? Surely that was as unpredictable as this sad airport story. And yet, many people reacted as if all moms should be on high alert anytime their child is at the zoo, because it is so darn common and so very likely that their kids could fall into a cage. Hindsight, fear, and a deep unwillingness to recognize the fickleness of fate combined into a storm of hate and victim-blaming.

As lawyer Greene put it: “This is one of those incidents that could’ve happened to any one of the members of this community, and, unfortunately, the decision came down to charge her with a crime.”

Unfortunate indeed. And chilling.


With This Forfeiture Trick, Innocent Owners Lose Even When They Win

Critics of civil forfeiture, the system of legalized theft that allows law enforcement agencies to seize people’s property by alleging it is connected to criminal activity, often focus on the burden of proof the government faces when owners try to recover their assets. While those standards are obviously important, nearly nine out of 10 federal forfeiture cases never make it to court, largely because mounting a challenge often costs more than the property is worth. And while the Civil Asset Forfeiture Reform Act (CAFRA) allows owners who win in court to recover “reasonable attorney fees and other litigation costs,” prosecutors can defeat that safeguard by dragging out cases and then dropping them before a judge decides whether forfeiture is legally justified.

In the meantime, desperate owners may decide to let the government keep some of their property, even when they are completely innocent. From the government’s perspective, there is no downside. “By gaming the system and denying property owners a ‘win’ in court,” says Institute for Justice (I.J.) senior attorney Dan Alban, “federal prosecutors have found a way to short-circuit judicial oversight of their activities, while at the same time preserving their ability to continue to abuse Americans’ property rights.”

I.J. is asking the U.S. Supreme Court to consider a case that takes aim at such sneaky tactics, arguing that an owner can “substantially prevail” in a forfeiture battle, as required by the CAFRA provision dealing with attorney fees, even if the government returns the property before it officially loses in court. “The threat of paying attorneys’ fees is a critical check on government abuse,” observes Justin Pearson, another I.J. senior attorney. “Otherwise, there is no disincentive to stop prosecutors from filing frivolous civil forfeitures against property belonging to innocent owners.”

The I.J. case involves Miladis Salgado, a Florida woman whose home was searched in 2015 based on a tip that her estranged husband was a drug dealer. Although that tip proved to be unfounded, Drug Enforcement Administration (DEA) agents found $15,000 in cash that belonged to Salgado, which they seized. Salgado hired a lawyer to challenge the forfeiture on a contingency fee basis, agreeing to pay a third of any money she recovered.

The case dragged on for two years, and the government dropped it just as a federal judge was about to rule on Salgado’s motion for summary judgment. Since the DEA admitted it had no evidence implicating Salgado in criminal activity, it seems likely that she would have prevailed, which explains why the government suddenly agreed to return her money. But now instead of her original $15,000, she had only $10,000, since she had to pay her lawyer.

When Salgado asked the court to make the government cover that cost, U.S. District Judge Darrin Gayles ruled that she was not entitled to attorney fees under CAFRA, since the case had been dismissed without prejudice, meaning it theoretically could be refiled. “A dismissal without prejudice cannot trigger the statutory entitlement,” he concluded, “because such a dismissal lacks the necessary ‘material alteration of the legal relationship of the parties’ with a corresponding ‘judicial imprimatur on the change.'” Last July the U.S. Court of Appeals for the 11th Circuit upheld that decision.

The Institute for Justice is now asking the Supreme Court to resolve two questions: When does an owner “substantially prevail” in a forfeiture challenge, triggering an award of attorney fees, and does a judge have the discretion to dismiss forfeiture claims without prejudice when “the court has ordered the United States to return the seized money and the lawsuit will never be refiled”?

The institute’s petition argues that the 11th Circuit erred by reading the word substantially out of CAFRA, limiting its analysis to the question of whether Salgado was the “prevailing party.” I.J. notes that the district court not only ordered the government to return Salgado’s money but said the government would be on the hook for her legal fees if it decided to refile the case. “Ms. Salgado substantially prevailed,” I.J. says. “She obtained the full return of her money, and she even obtained a court order inhibiting the United States from refiling the civil forfeiture lawsuit.”

To conclude otherwise, the petition says, would deprive innocent owners of the protection CAFRA was supposed to provide. According to the House report on the bill, Congress wanted to “give owners innocent of any wrongdoing the means to recover their property and make themselves whole after wrongful government seizures.” An innocent owner who loses a third of her property to legal fees imposed on her by a wrongful forfeiture action plainly has not been “made whole.”

The House report on CAFRA also noted that “many civil seizures are not challenged” because of the costs owners must pay on “the arduous path one must journey” to contest them, “often without the benefit of counsel, and perhaps without any money left after the seizure with which to fight the battle.” If owners are forced to pay those costs even when the government effectively concedes their innocence, the remedy provided by CAFRA has been nullified.

I.J. also wants the Supreme Court to resolve a circuit split on the question of whether judges may dismiss forfeiture claims without prejudice in cases like this. Disagreement on that point among federal appeals courts, it says, has “resulted in widely divergent outcomes in district courts across the country.” The position endorsed by the 11th Circuit in Salgado’s case “presents a catch-22” for victims of forfeiture abuse, the petition says: “In order for an innocent owner to be awarded attorneys’ fees under CAFRA, the government’s case against the money or property cannot be dismissed without prejudice. But the innocent owner cannot prevent the case from being dismissed without prejudice because, in these circuits, their right to be awarded attorneys’ fees has not yet vested.”

In this case, as in its many other challenges to forfeiture abuse, I.J. is calling attention to the way the system works in practice, showing that even well-intended safeguards can be defeated by coercive tactics that deprive innocent people of their property. “Seizing someone’s property and forcing them to hire an attorney for two years to get it back has real costs,” Pearson says. “The government can’t take your property, keep it for years, and then suddenly give it back and pretend like nothing happened.”

So far, of course, the government can do exactly that. The Supreme Court can put a stop to it by taking up this case.


14-Year-Old Faces Felony Hate Crime Charges for Posting ‘Slave for Sale’ Craigslist Ad

Authorities arrested a 14-year-old white male student at Naperville High School in Naperville, Illinois, and charged him with committing a hate crime.

What the teen did was genuinely bad: He took a picture of a black classmate and posted a “slave for sale” ad on Craigslist. The school suspended him, and it was right to do so.

But now the police are involved, and the teen faces two felony hate crime charges as well as a misdemeanor disorderly conduct charge.

The teen was in court on Wednesday, according to the Chicago Tribune:

Prosecutors called the allegations “serious and aggravating,” and said the alleged actions put the victim’s safety at risk. The hate crime counts are juvenile felonies and the disorderly charge is a misdemeanor.

[Defense attorney Harry] Smith said the student is serving an in-school suspension and his client and the victim have a meeting scheduled before the school principal where the youth will formally apologize. Smith described the pair as friends.

State’s Attorney Robert Berlin issued a statement Wednesday in which he called the allegations “beyond disturbing.”

“Hate crimes have no place in our society and will not be tolerated in DuPage County,” Berlin said. “Anyone, regardless of age, accused of such disgraceful actions will be charged accordingly.”

For the authorities to charge someone with a hate crime, there must be an underlying crime. Simply holding or expressing hateful views is not illegal—indeed, it is protected by the First Amendment. Prosecutors can consider hate crime charges only when hate is the motivating factor in the commission of a crime, such as assault or vandalism.

Since disorderly conduct is the only other item here, the hate crime charges presumably stem from that. Disorderly conduct is often a broad category of offense, and such is certainly the case under Illinois law: “A person commits disorderly conduct when he or she knowingly does any act in such unreasonable manner as to alarm or disturb another and to provoke a breach of the peace.” The disorderly conduct charge is a misdemeanor, but the hate crime charges are felonies, making this an extremely serious criminal matter for a 14-year-old kid.

I don’t know what was going through his head when he posted the Craigslist ad—news articles suggest the two boys were former friends—and I do not object at all to the school itself taking punitive action. But should the cops really be arresting 14-year-olds, and subjecting them to life-derailing felony charges, for incidents of nonviolent bullying? School is supposed to teach young people to behave responsibly, not shuffle them into the criminal justice system at the first sign of trouble. This is far too harsh an outcome, and it shows one of the dangers of having hate crime laws on the books at all: They give cops more opportunities to overcharge.


Judge Puts Plans to Restart Federal Executions on Hold

Attorney General William Barr’s plans for the first federal executions in more than 15 years have been temporarily suspended, thanks to a conflict over the methods the Justice Department intends to use.

In July, Barr announced that the Justice Department would get back in the execution business. Though the federal death penalty was reinstated in 1988, there have been no federal executions since 2003.

Barr had scheduled five executions for December and January, each for men convicted of murder. When he announced his plan, the Justice Department said it would perform the lethal injections with pentobarbital, a drug that has drawn criticism because of botched executions.

But the Federal Death Penalty Act requires that executions be carried out in “the manner prescribed by the laws of the State within which the sentence is imposed.” The only exception is if that state does not currently have an execution protocol. In such a case, the Justice Department is to pick another state’s execution method to mimic.

Four of the five men scheduled to be executed have filed suit, arguing that the Justice Department is not following the law by deciding to use pentobarbital rather than the home state’s protocols. Daniel Lewis Lee, the first man scheduled for execution (despite opposition from the family of his victims), was convicted and sentenced to death in Arkansas, which uses a three-drug execution combination that has its own problems and faces its own legal challenges.

Yesterday evening, Judge Tanya S. Chutkan of the U.S. District Court for the District of Columbia ruled in favor of the men on death row. She has temporarily enjoined the Justice Department from carrying out the executions as planned, noting that “the public is not served by short-circuiting legitimate judicial process, and is greatly served by attempting to ensure that the most serious punishment is imposed lawfully.”

This is a technical ruling that will not ultimately prevent the Justice Department from eventually resuming executions. But because this ruling is about the protocols themselves, conflict about how the states execute prisoners and what drugs they use may create unexpected logistical challenges for the executioners. States such as Arkansas are facing legal challenges arguing that drugs they use to execute prisoners violate the Eighth Amendment prohibition against “cruel and unusual punishment.” Drug companies are increasingly reluctant to supply these drugs for this purpose (unless the states keep their drug sources secret, as Arkansas and Missouri have done). Forcing the Justice Department to follow other states’ protocols may not be as simple as switching to the “right” drug.

Read the ruling for yourself here.


What Climate Science Tells Us About Temperature Trends

This article expands on claims about global temperature trends made in Ronald Bailey’s article in the January 2020 issue of Reason, “Climate Change: How Lucky Do You Feel?” for readers who are keen to dive deeper into the topic. (The print article is currently only available to subscribers.)

I began my time on the climate change beat as a skeptic. After attending the 1992 Earth Summit in Rio de Janeiro where the United Nations Framework Convention on Climate Change was negotiated, I noted in a Reason article that by signing the treaty “United States is officially buying into the notion that ‘global warming’ is a serious environmental problem” even while “more and more scientific evidence accumulates showing that the threat of global warming is overblown.” I was simply unconvinced that the available data demonstrated the need for the kind of radical intervention activists were proposing.

But I stayed on the beat, closely following the progress of scientific study and policy debate. By 2005, following significant corrections to the satellite data record, I declared in Reason that “We’re All Global Warmers Now.” And in 2006 I concluded that “I now believe that balance of evidence shows that global warming could well be a significant problem.”

In the years since 2007, I have remained largely sanguine, joining the many who noted that global temperature at the beginning of this century rose at a considerably lower rate than that projected by computer climate models. I was generally persuaded by researchers who predicted a sedate pace of increase, with temperatures unlikely to rise much above 1.5 degrees Celsius over the 19th century average. In this scenario, the world might get a bit warmer, but people and societies have proven themselves up to the task of adapting to such changes in the past, and the process of lifting hundreds of millions of poor people out of abject poverty through technological progress and economic growth fueled by coal, gas, and oil could safely continue unabated.

But as research continued, a number of possible scenarios have emerged. For example, some people read the scientific evidence as suggesting that man-made climate change is not greatly impacting people now, but might become a bigger problem toward the end of this century. Basically, current weather—droughts, rainstorms, snowfall, and hurricanes—cannot now be distinguished from natural variations in climate. However, as temperatures increase, computer climate models project that future droughts will last longer, rainstorms will be fiercer, snowfall will be scarcer, and hurricanes will be stronger. In addition, coastal flooding of major cities will become more common as sea level rises. These changes in climate will put the property and lives of children and grandchildren at greater risk. Computer models combining climate and economic components calculate that endeavoring now to slow warming would cost about the same as later efforts to adapt to a somewhat hotter world. Let’s call this the somewhat worried scenario.

Another set of people note that temperature increases have apparently resumed a steady march upwards after a slow-down at the beginning of this century. They parse the results of recent studies that conclude that climate change is already causing deleterious impacts, e.g., heat waves both on land and in the oceans are becoming more common, the extent of Arctic sea ice is steeply declining, and glaciers, ice sheets, and permafrost are melting. The sanguine conclusion that future warming will proceed slowly and not rise much above 1.5 degrees Celsius by the end of this century appears to be too optimistic. If greenhouse gas emissions continue unabated, average global temperature looks to be on track to reach 1.8 degrees Celsius in 50 years and continue rising beyond 2 degrees Celsius by 2100. This trajectory significantly increases the risk that things could go badly wrong. This is the really worried scenario.

Spurred by current alarums, I spent the summer reading and reviewing recent findings of climate science to see if my belief that the somewhat worried scenario is the more likely outcome remains justified. Climate science is a massive enterprise involving research into a vast array of topics, including atmospheric physics, ocean and atmospheric currents, solar irradiance, adjustments in temperature records, the effects of atmospheric aerosols, how forests and fields react to rising carbon dioxide, trends in cloudiness, heat storage in the deep oceans, and changes in glaciers and sea ice, to name just a few. A simple Google Scholar search using the terms climate change and global warming returns more than 2.6 million and 1.7 million results, respectively. Just searching glaciers and climate change returns 124,000 results.

Researchers use complicated computer climate models to analyze all these data to make projections about what might happen to the climate in the future. My reporting strategy has been to take seriously what I believe to be the principal objections made by researchers who argue on scientific grounds that panic is unwarranted. I also assume that everyone is acting in good faith. What follows is based on what I hope is a fair reading of the recent scientific literature on climate change and communications with various well-known climate change researchers.

Ice Age Climate Change 

To decide how worried we should be, we need to go back much further than 1992. Starting about 2.6 million years ago the Earth began experiencing ice ages lasting between 80,000 and 120,000 years. The world’s most recent glacial period began about 110,000 years ago.

Most researchers believe that variations in Earth’s orbital path around the Sun are the pacemaker of the great ice ages. Ice ages end when wobbles in Earth’s orbit increase the sunlight heating the vast continental glaciers that form in the northern hemisphere. These orbital shifts initiate a feedback loop in which the warming oceans release large amounts of carbon dioxide into the atmosphere, which in turn further boosts global temperatures. Higher temperatures increase atmospheric water vapor, which further boosts warming that melts more ice and snow cover. Less snow and ice enables the growth of darker vegetation, which absorbs more heat, and so forth.

At the height of the last glacial maximum 19,000 years ago atmospheric concentrations of carbon dioxide stood at only about 180 parts per million. The level of atmospheric carbon dioxide increased to around 280 parts per million by the late 18th century. This chain of feedbacks eventually produced a rise in global average surface temperature of about 4 degrees Celsius. That’s the difference between the last ice age in which glaciers covered about one-third of the Earth’s total land area and today when only 10 percent of the land area is icebound. 

As a result of human activities, the level of carbon dioxide in the atmosphere has risen to about 415 parts per million now. The annual rate of increase in atmospheric carbon dioxide during the past 60 years is about 100 times faster than the rate of increase that occurred at the end of the last ice age. How much this increase is responsible for having warmed the planet over the last century, along with how much more warming will result if carbon dioxide concentrations continue to rise, is the central issue in climate change science. 
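As a rough check on that “about 100 times faster” comparison, here is a minimal sketch. The starting concentration 60 years ago (~315 ppm) and the roughly 7,000-year length of the deglacial rise from ~180 to ~280 ppm are outside assumptions for illustration, not figures given in the article.

```python
# Rough sanity check of the "about 100 times faster" comparison above.
# Assumptions not taken from the article: CO2 was ~315 ppm 60 years ago, and
# the deglacial rise from ~180 to ~280 ppm took on the order of 7,000 years.
modern_rate = (415 - 315) / 60        # ppm per year over the past 60 years
deglacial_rate = (280 - 180) / 7000   # ppm per year at the end of the last ice age

print(f"modern:    {modern_rate:.2f} ppm/yr")
print(f"deglacial: {deglacial_rate:.3f} ppm/yr")
print(f"ratio:     ~{modern_rate / deglacial_rate:.0f}x")  # roughly 100-fold, as the text says
```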

Just Add Carbon Dioxide

Of course, the sun powers the Earth’s climate. About 30 percent of incoming solar energy is directly reflected back into space by bright clouds, atmospheric particles, and sea ice and snow. The remaining 70 percent is absorbed. The air and surface re-emit this energy largely as infrared rays that are invisible to us but that we feel as heat.

The nitrogen and oxygen molecules that make up 99 percent of the atmosphere are transparent to both incoming sunlight and outgoing infrared rays. However, water vapor, carbon dioxide, methane, nitrous oxide, and ozone are opaque to many wavelengths of infrared energy. These greenhouse gas molecules block some escaping heat and re-emit it downward toward the surface. So instead of the Earth’s average temperature being 18 degrees Celsius below zero, it is 15 degrees Celsius above freezing. This extra heating is the natural greenhouse effect.
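To make the minus-18 versus plus-15 comparison concrete, here is a minimal sketch of the standard energy-balance calculation. The solar constant and the simple blackbody (Stefan-Boltzmann) treatment are textbook assumptions, not numbers given in the article.

```python
# Back-of-the-envelope version of the "natural greenhouse effect" numbers above.
# Assumptions (textbook values, not from the article): solar constant ~1361 W/m^2,
# planetary albedo ~0.30, and a simple Stefan-Boltzmann blackbody balance.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0     # incoming solar flux at the top of the atmosphere, W m^-2
ALBEDO = 0.30      # ~30 percent of sunlight reflected, per the text

absorbed = SOLAR * (1 - ALBEDO) / 4      # absorbed sunlight, averaged over the whole sphere
t_eff_kelvin = (absorbed / SIGMA) ** 0.25  # radiating temperature of an airless Earth

print(f"No-greenhouse temperature: {t_eff_kelvin - 273.15:.1f} C")  # roughly -18 to -19 C
# versus the observed average of about +15 C; the ~33 C gap is the
# natural greenhouse effect described in the text.
```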

NASA climate researcher Andrew Lacis and his colleagues contend that carbon dioxide is the key to greenhouse warming on Earth. Why? Because at current temperatures carbon dioxide and other trace greenhouse gases such as ozone, nitrous oxide, methane, and chlorofluorocarbons do not condense out of the atmosphere. Overall, these noncondensing greenhouse gases account for about 25 percent of the Earth’s greenhouse effect. They sustain temperatures that initiate water vapor and cloud feedbacks that generate the remaining 75 percent of the current greenhouse effect. Lacis and his colleagues suggest that if all atmospheric carbon dioxide were somehow removed most of the water vapor would freeze out and the Earth would plunge into an icebound state.

Princeton physicist William Happer, who recently resigned from his post on the Trump administration’s National Security Council, has long questioned the magnitude of carbon dioxide’s effect with respect to warming the atmosphere. In fact, Happer is the co-founder and former president of the nonprofit CO2 Coalition, established in 2015 for the “purpose of educating thought leaders, policy makers, and the public about the important contribution made by carbon dioxide to our lives and the economy.” In a 2014 article, “Why Has Global Warming Paused?,” in the International Journal of Modern Physics A, Happer argued that climate scientists had gotten crucial spectroscopic details of how atmospheric carbon dioxide absorbs infrared energy badly wrong. As a result, he asserts, a doubling of atmospheric carbon dioxide would likely warm the planet by only about 1.4 degrees Celsius. If the effect of carbon dioxide on temperatures were indeed constrained to that comparatively low value, man-made global warming would probably not constitute a significant problem for humanity and the biosphere. 

In 2016, NASA Langley Research Center atmospheric scientist Martin Mlynczak and his colleagues analyzed Happer’s claims in a Geophysical Research Letters article and found, “Overall, the spectroscopic uncertainty in present-day carbon dioxide radiative forcing is less than one percent, indicating a robust foundation in our understanding of how rising carbon dioxide warms the climate system.” In other words, the details of how carbon dioxide absorbs and re-emits heat are accurately known and unfortunately imply that future temperatures will be considerably higher than Happer calculated them to be. 

Another related claim sometimes made is that the effect of carbon dioxide on the climate is saturated, that is, that the carbon dioxide already in the atmosphere is absorbing and re-emitting about as much heat as it can. Consequently, increasing the amount of carbon dioxide in the atmosphere won’t much increase the average temperature of the globe. But is this so? 

This claim is based on the fact that, in the current climate era, as Princeton University climatologist Syukuro Manabe notes in a 2019 review article, “Role of greenhouse gas in climate change,” “surface temperature increases by approximately 1.3 degrees C in response to the doubling of atmospheric CO2 concentration not only from 150 ppm [parts per million] to 300 ppm but also from 300 ppm to 600 ppm.” To get a further increase of 1.3 degrees Celsius would require doubling the atmospheric CO2 concentration again, to 1,200 ppm. A metaphorical way of thinking about this issue is to visualize the atmosphere as a stack of layers: as each layer fills up with enough carbon dioxide to absorb all the heat that it can, the extra heat radiates to the next layer, which then absorbs it and re-emits it, and so forth. Consequently, the effect of CO2 on temperatures does decline, but it does not saturate at levels relevant to future climate change. 
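A minimal numerical sketch of that “declines but does not saturate” point: treating the no-feedback response as proportional to the logarithm of the CO2 concentration is a standard simplification (an assumption here, not the article’s own model), with the 1.3 degree-per-doubling figure taken from the Manabe quote above.

```python
import math

# Illustrative sketch of a logarithmic (diminishing but non-saturating) CO2 response.
# The 1.3 C-per-doubling value comes from the Manabe quote in the text; the purely
# logarithmic form is a simplifying assumption for illustration.
T_PER_DOUBLING = 1.3  # degrees C of surface warming per CO2 doubling, without feedbacks

def no_feedback_warming(c_start_ppm, c_end_ppm):
    return T_PER_DOUBLING * math.log2(c_end_ppm / c_start_ppm)

print(no_feedback_warming(150, 300))    # ~1.3 C for the 150 -> 300 ppm doubling
print(no_feedback_warming(300, 600))    # ~1.3 C again for 300 -> 600 ppm
print(no_feedback_warming(600, 1200))   # and again: each extra ppm matters less,
                                        # but the effect never fully saturates
```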

Again, an increase of 1.3 degrees Celsius from doubling carbon dioxide doesn’t seem too alarming. “It is much smaller than 2.3 degrees C that we got in the presence of water vapour feedback,” notes Manabe. Researchers find that under current climate conditions “water vapour exerts a strong positive feedback effect that magnifies the surface temperature change by a factor of ∼1.8.” A warmer atmosphere evaporates and holds more water vapor, which, again, is the chief greenhouse gas. Just as predicted, water vapor in the atmosphere is increasing as average global temperatures rise. Citing satellite data, a 2018 article in Earth and Space Science reported, “The record clearly shows that the amount of vapor in the atmosphere has been increasing at a rate of about 1.5% per decade over the last 30 years as the planet warms.”
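Continuing the sketch above, the ~1.8 water vapor feedback factor is what links the 1.3 degree no-feedback number to Manabe’s 2.3 degree figure (a simple restatement of the quoted values, nothing more):

```python
# How the quoted numbers fit together: 1.3 C per doubling without feedbacks,
# magnified by the ~1.8x water vapor feedback factor cited above.
no_feedback = 1.3       # degrees C per CO2 doubling
water_vapor_factor = 1.8
print(no_feedback * water_vapor_factor)  # ~2.3 C per doubling, matching Manabe's figure
```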

Evidence Tampering?

Researchers have devised various records to track changes in global average temperatures. These include surface records incorporating thermometer readings on land and at sea, remote sensing of atmospheric trends using satellites, and climate reanalyses that calculate temperature trends for the air two meters above the surface. 

All temperature records must be adjusted, since all have experienced changes that affect the accuracy of their raw data. For example, surface temperature records are affected by changes in thermometers, locations of weather stations, time-of-day shifts in measurements, urban heat island effects, shipboard versus buoy sampling, and so forth. Satellite data must be adjusted for changes in sensors and sensor calibration and for sensor deterioration over time, and corrections must be made for orbital drift and decay. Climate reanalysis combines weather computer models with vast compilations of historical weather data derived from surface thermometers, weather balloons, aircraft, ships, buoys, and satellites. The goal of assimilating and analyzing these data is to reconstruct past weather patterns in order to detect changes in climate over time. Since climate reanalyses incorporate data from a wide variety of sources, they must be adjusted when biases are identified in those data.

Some skeptics allege that the official climate research groups that compile surface temperature records adjust the data to make global warming trends seem greater than they are. A recent example is the June 2019 claim by geologist Tony Heller, who runs the contrarian website Real Climate Science, that he had identified “yet another round of spectacular data tampering by NASA and NOAA. Cooling the past and warming the present.” Heller focused particularly on the adjustments made to NASA Goddard Institute for Space Studies (GISS) global land surface temperature trends. 

One general method climate scientists use to adjust temperature records, explains Berkeley Earth climate data scientist Zeke Hausfather (now at the Breakthrough Institute), is statistical homogenization. Researchers compare each weather station to all of its nearby neighbors and look for changes that are local to one station but not found at any others in the area. A sharp, sustained jump to either lower or higher temperatures at a particular station generally indicates a change such as a shift in location or a switch in instrumentation. The records of such out-of-line stations are then adjusted to bring them back in line with their neighboring stations. 
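To illustrate the idea (and only the idea: the station data, jump threshold, and adjustment rule below are hypothetical, not the NOAA/NASA pairwise-homogenization algorithm itself), a minimal sketch might look like this:

```python
import numpy as np

# Minimal sketch of the neighbor-comparison homogenization idea described above.
# All specifics (threshold, adjustment rule, example data) are illustrative assumptions.
def homogenize(station, neighbors, threshold=1.0):
    """Adjust a station's series where it jumps away from its neighbors.

    station:   1-D array of annual temperature anomalies for one station
    neighbors: 2-D array, one row per nearby station, same years as `station`
    """
    diff = station - neighbors.mean(axis=0)   # station minus neighborhood average
    adjusted = station.astype(float).copy()
    for year in range(1, len(diff)):
        step = diff[year] - diff[year - 1]    # year-to-year change in that difference
        if abs(step) > threshold:             # sharp, station-local jump: suspected
            adjusted[year:] -= step           # station move or instrument change; realign
    return adjusted

# Example: a station that shifts 1.5 C warmer in year 5 relative to its (flat) neighbors
years = 10
neighbors = np.zeros((3, years))
station = np.zeros(years)
station[5:] += 1.5
print(homogenize(station, neighbors))  # the post-jump segment is pulled back in line
```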

In general, temperatures increase more rapidly over land than over the oceans because of the oceans’ greater capacity to absorb heat and their ability to shed extra heat through evaporation. Heller is right that raw land station adjustments by NOAA/NASA have increased overall land warming by about 16 percent between 1880 and 2016. On the other hand, NOAA/NASA adjustments of raw sea temperature data, made to account for the shift from measuring ocean temperatures with buckets and ship intakes to a widely deployed network of automatic buoys, reduced the amount of warming in the past. Those adjustments result in about 36 percent less ocean warming since 1880 than in the raw temperature data. Taken together, the NOAA/NASA adjustments to land and ocean data actually reduce, rather than increase, the trend of warming experienced globally over the past century. Adjustments that overall reduce the amount of warming seen in the past suggest that climatologists are not fiddling with temperature data in order to create or exaggerate global warming. 

It’s Definitely Getting Hotter 

The latest global temperature trends are compiled in the State of the Climate in 2018 report published in August 2019 by the American Meteorological Society. Since 1979, the surface record from NASA’s Goddard Institute for Space Studies (GISS) shows an increase of +0.18 C per decade. Both the Hadley Centre of the U.K. Met Office (HadCRUT) and the U.S. National Climatic Data Center find a rise of +0.17 C per decade, and the Japan Meteorological Agency shows an increase of +0.14 C per decade. In other words, according to the surface records the planet has warmed by between 0.55 and 0.7 degrees Celsius over the last 40 years, a difference of 0.15 degrees Celsius between datasets.

Back in 2010 University of California, Berkeley physicist and self-proclaimed climate change skeptic Richard Muller founded the nonprofit Berkeley Earth Surface Temperature project aimed at independently checking the temperature trends devised by other research groups. To do so, the Berkeley Earth team created and analyzed a merged dataset by combining 1.6 billion temperature reports from 16 pre-existing data archives derived from nearly 40,000 unique weather stations using raw data whenever possible. In 2013, Berkeley Earth reported a rise in average world land temperature of approximately 1.5 degrees Celsius in the past 250 years and about 0.9 degrees in the past 50 years. In their 2018 report the group finds that since 1980, the overall global trend (land and sea) is +0.19 C per decade and has changed little during this period. Basically, it is slightly higher than the other surface temperature records. 

The European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-I reanalysis calculates an increase in global average temperature in the lower troposphere (from the surface to about 6 miles up) of +0.14 C per decade since 1979; the ECMWF’s updated ERA-5 reanalysis reckons a per-decade increase of +0.16 C. The Japanese 55-Year Reanalysis (JRA-55) estimates an increase of +0.17 C per decade, and NASA’s Modern Era Retrospective Analysis for Research and Applications Version 2 (MERRA-2) reports a rise of +0.16 C per decade. The spread in total global temperature increase since 1979 among the reanalyses is even narrower, ranging from 0.55 to 0.66 degrees Celsius. 

The State of the Climate in 2018 also reports satellite temperature trends since 1979 (when satellite measurements began) for the lower troposphere. The University of Alabama in Huntsville (UAH) trend is +0.13 C per decade, while Remote Sensing Systems’ (RSS) trend is +0.20 C per decade. The difference in the two long-term lower tropospheric trends is more substantial: UAH reports warming since 1979 of only about 0.51 degrees Celsius, whereas RSS finds an increase of 0.78 degrees Celsius, a substantial difference of 0.27 degrees Celsius. 

Which temperature records should be considered the most accurate is hotly disputed among climate scientists. For example, atmospheric scientist John Christy, one of the developers of the UAH satellite dataset, asserts that it is more accurate because, unlike the RSS record, it removes spurious warming measurements that occurred as the orbits of a couple of NOAA satellites decayed around the turn of the 21st century. In addition, Christy argues that the UAH temperature dataset has been validated through comparison with weather balloon temperature data. 

It is notable that the four satellite datasets, all based on the same raw data, find very different global temperature trends. For example, in the lower atmosphere RSS reports about 60 percent more warming than UAH does. Lawrence Livermore National Laboratory climate scientist Stephen Po-Chedley, who helped develop a different satellite temperature record at the University of Washington, observes, “These records are useful, but have substantial uncertainty.” The “structural uncertainty” in the satellite records occurs, Po-Chedley explains, “because researchers use different approaches to remove known biases that affect long-term trends. No method is perfect, which leads to widely varying estimates of atmospheric warming.” 

Carl Mears, one of the developers of the RSS satellite dataset, disputes claims that the satellite and radiosonde temperature records are more accurate than the surface temperature record. “I consider this to be unlikely (even though I developed one of the satellite records) as indicated by the smaller spread in trends in the surface record than in the satellite record,” he states.

The UAH record is something of an outlier with respect to the surface, reanalyses and other satellite records. Of course, that does not mean that it’s wrong, but everyone must take into account the balance of the evidence when considering what the rate of global warming has been. 

Are Climate Models Running Too Hot?

The difference between UAH’s relatively low lower-tropospheric warming trend and the generally higher surface trends is at the center of a fierce debate over how man-made global warming will play out over the course of this century. The chief researchers who developed and oversee the UAH satellite dataset are atmospheric scientists John Christy and Roy Spencer. While both acknowledge that adding carbon dioxide to the atmosphere likely does contribute to some warming, they doubt that future climate change will produce an “uninhabitable earth.” 

Christy and his colleagues argue in a 2018 article that mid-tropospheric temperature observations in the crucial tropics are far lower than those projected by most computer climate models. Christy summarized his results in a 2019 report, “The Tropical Skies: Falsifying climate alarm,” for the U.K.-based Global Warming Policy Foundation. Christy notes that most climate models project significant warming in the tropical troposphere between latitudes 20 degrees north and 20 degrees south of the equator at 30,000 to 40,000 feet.

Christy argues that this missing “tropical hotspot” shows that “the consensus of the models fails the test to match the real-world observations by a significant margin.” At a 2017 congressional hearing, Christy had earlier testified, “As such, the average of the models is considered to be untruthful in representing the recent decades of climate variation and change, and thus would be inappropriate for use in predicting future changes in the climate or for related policy decisions.”

Christy notes that the average of 102 climate model simulations projects a tropical tropospheric temperature trend of +0.328 C per decade. In the State of the Climate in 2018 report, the four decadal satellite tropospheric trends are: UAH +0.12 C; RSS +0.17 C; NOAA +0.22 C; and UW +0.16 C. In addition, the average for the reanalyses is +0.14 C. In other words, the tropical troposphere in the models is warming about two to three times faster than actual temperatures in the tropical troposphere. On its face, this difference between model projections and temperature data makes Christy’s point that the climate models are getting a very important feature related to future global warming badly wrong. Christy’s research was cited in an August 2019 op-ed, “The Great Failure of the Climate Models,” in The Washington Examiner by climatologist Patrick Michaels and climate statistician Caleb Stewart Rossiter, who are now both associated with the CO2 Coalition.

In a 2017 Journal of Climate article, Lawrence Livermore climate researcher Benjamin Santer and his colleagues acknowledged that “model–data differences in the vertical structure of atmospheric temperature change in the deep tropics—is a long-standing scientific concern.” Santer and his colleagues published an effort to address these concerns in a 2017 Nature Geosciences article. In that article, they suggested that the differences between projections and empirical trends occurred due to a combination of fickle natural climate variability, uncertainties in satellite temperature datasets, and sporadic external effects, such as cooling from volcanic eruptions, that could not be included in the model simulations. Even so, the article concluded, “Our analysis is unlikely to reconcile divergent schools of thought regarding the causes of differences between modelled and observed warming rates in the early twenty-first century.” As the ongoing research pursued by Christy and his colleagues shows, the divergent schools of thought have indeed not reconciled.

Greenhouse theory predicts that warming at the surface will be amplified in the troposphere due to increased evaporation and convection. Basically, warmer air tends to rise. Climate model calculations project an overall tropospheric warming that is 1.2 times faster than at the surface. In the tropics, where most of the moisture is, the amplification factor is larger, about 1.4 to 1.6. 

It is worth noting that Christy is comparing actual tropical temperature trends to modeled temperature trends. Lawrence Livermore atmospheric scientist Stephen Po-Chedley counters that “the model amplification should compare the model surface trend with the model atmospheric temperature trend. And the observed amplification should be the observed surface trend with the observed atmospheric temperature trend.” He adds, “When models have sea surface temperatures that are forced to match the observations, the atmospheric warming in those model simulations matches the satellite record.”

So let’s go to the data. The first column in the table below contains tropical decadal sea surface temperature trends since 1979 between latitudes 20 N and 20 S as measured by four different research groups. In the second column are the actual satellite tropospheric trends over the same region as measured by four different research groups. Averaging the tropical sea surface temperatures yields a rate of increase of about +0.11 C per decade. Multiplying that average by the 1.5 tropospheric warming amplification factor used by the climate models yields a projected increase in tropospheric temperatures of +0.165 C per decade. This is basically in line with the increase of nearly +0.17 C per decade derived from averaging the four tropospheric temperature trends.

[Table: decadal trends since 1979 in the tropics (20 N–20 S): sea surface temperature trends from four research groups alongside satellite lower-tropospheric temperature trends from four research groups.]

When actual surface data are taken into account, the tropical tropospheric temperature trend rises pretty much as the models project. On the other hand, it is evident that the models are projecting higher tropical surface temperature trends than have actually occurred.
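A short script reproducing that back-of-the-envelope comparison, using only the averaged figures quoted in the preceding paragraphs (the individual entries in the table are not repeated here):

```python
# Reproducing the comparison in the preceding paragraphs, using the averaged
# figures quoted in the text (not the individual entries from the table).
observed_sst_trend = 0.11          # C/decade, tropical sea surface, average of four datasets
model_amplification = 1.5          # tropospheric-over-surface amplification used by the models
observed_troposphere_trend = 0.17  # C/decade, average of the four satellite tropospheric trends

expected = observed_sst_trend * model_amplification
print(f"expected from observed surface warming: {expected:.3f} C/decade")  # ~0.165
print(f"observed satellite tropospheric trend:  {observed_troposphere_trend:.2f} C/decade")
# The two agree closely, which is the article's point: the amplification itself is
# roughly right; the models' error lies in their tropical surface trends.
```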

In an August 2019 Geophysical Research Letters article, MIT climate scientist Alexandre Tuel sought to explain the differences between recent modeled and satellite-measured tropospheric warming rates. He notes that the climate models project a rate of tropical surface warming since 1979, between latitudes 30 N and 30 S, of about +0.19 C per decade.

The average rate of tropical surface warming between latitudes 20 N and 20 S for the climate models cited by the IPCC is +0.21 C per decade. Applying the amplification factors of 1.4 to 1.6 to those two modeled surface trends yields projected tropical tropospheric temperature increases of +0.27 to +0.30 C per decade and +0.29 to +0.34 C per decade, respectively. Those figures are about the same as the projected model rate for the tropical troposphere cited by Christy. As Tuel concludes, “The key to explaining recent tropical troposphere temperatures trends lies in understanding why tropical sea surface temperature trends are smaller in models than observations.”
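The model-side arithmetic can be checked the same way. This short sketch simply applies the 1.4 to 1.6 amplification factors to the two modeled surface trends cited above and compares the result with the 102-model tropospheric average.

```python
# The same arithmetic on the model side: amplify the modeled surface trends
# cited above by factors of 1.4 and 1.6 and compare with the 102-model
# tropospheric average of +0.328 C per decade.

model_surface_trends = {"30 N-30 S (Tuel)": 0.19, "20 N-20 S (IPCC models)": 0.21}
MODEL_TROPO_MEAN = 0.328

for region, trend in model_surface_trends.items():
    low, high = 1.4 * trend, 1.6 * trend
    print(f"{region}: implied tropospheric trend {low:+.2f} to {high:+.2f} C/decade")
print(f"102-model tropospheric mean: {MODEL_TROPO_MEAN:+.3f} C/decade")
```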

So what is going on with the model projections for tropical sea surface temperatures? The mismatch arises chiefly in the vast Pacific Ocean. Generally speaking, because of the upwelling of colder water, the eastern part of the Pacific near South America remains much cooler than the western part near the Philippines and Indonesia (except during El Niño events).

In a June 2019 Nature Climate Change study, Columbia University climate researcher Richard Seager and his colleagues note that the models project that rising greenhouse gases will warm the colder east, reducing the temperature difference between east and west. However, 60 years of temperature data show that the opposite is occurring: the east is getting cooler while the west warms. Seager’s team finds that increases in greenhouse gases are boosting temperatures in the already warm west, which in turn strengthens the winds that intensify the upwelling of colder water in the east. Seager points out that this pattern is akin to La Niña events and will likely drive La Niña-like climate trends worldwide, including “drying in East Africa, southwest North America and southeast South America, and wetting in Southeast Asia, Northeast Brazil and the Sahel.”

In his Global Warming Policy Foundation report, Christy pointedly observes that the preliminary tropospheric temperature trends are even steeper in the set of 42 climate models whose outputs will be used in the Sixth Assessment Report (AR6) to be issued by the Intergovernmental Panel on Climate Change in 2021. Although MIT’s Tuel has not yet had time to analyze the new model outputs, he says, “I wouldn’t be surprised that systematic sea surface temperature biases like the Pacific cold tongue have not been corrected” in that set of models.

Christy and Michaels are certainly right when they point out that the models get tropical tropospheric temperature trends wrong, but the source of the models’ error apparently lies in the oceans, not in the skies. The upshot, as Seager notes, is that there is an “urgent need to improve how well state-of-the-art models simulate the tropical Pacific so that the next generation of models can more reliably simulate how it responds to rising GHGs [greenhouse gases].”

Early 20th Century Warming

In their op-ed Michaels and Rossiter note, “Globally averaged thermometers show two periods of warming since 1900: a half-degree from natural causes in the first half of the 20th century, before there was an increase in industrial carbon dioxide that was enough to produce it, and another half-degree in the last quarter of the century.” Their implication is that the current warming could be largely natural as well. It is worth noting that the earlier warming (~0.3–0.4 degrees C) was actually about a third to a half of the warming since the 1970s (~0.8–0.9 degrees C). 

In addition, an August 2019 article in the Journal of Climate by Oxford University climate data scientist Karsten Haustein and his colleagues analyzed the evolution of temperature trends during the 20th century. They concluded that the early warming and mid-century cooling interludes could be almost entirely explained once the effects of rising aerosol pollutants, periodic volcanic eruptions, and spurious warming in some sea surface temperature records were accounted for. If they are right, warming due to accumulation of greenhouse gases has been proceeding for more than a century and is speeding up. Of course, it’s early days, so it remains to be seen if these results stand the test of time and further analysis. 

The Global Warming Hiatus 

The increase in average global temperature appeared to slow down dramatically between 1998 and 2015 even as greenhouse gases continued to accumulate steadily in the atmosphere. The IPCC’s 2014 Synthesis Report acknowledged that the rate of surface warming since 1998 had been only 0.05 degrees Celsius per decade, considerably lower than the 0.12 degrees Celsius per decade rate observed since 1951. This “hiatus” was seen as evidence by skeptics (and reported by me) that climate model projections of fast and dangerous man-made warming were way overblown. For nearly a decade, most climate researchers ignored the hiatus, handwaving that warming would soon resume as projected. Eventually, the mismatch could no longer be ignored. Perplexed researchers sought to explain the slowdown in articles that placed the blame on a range of possibilities, from changes in solar radiation and stratospheric water vapor to the burial of excess heat in the deep oceans and natural internal climate variability. By 2016, researchers had published nearly 200 peer-reviewed studies on the topic.

In the course of this research, many climate scientists came to realize that comparing the lower global temperature trend to the climate model average was obscuring the fact that many of the models actually produced internal climate variability with slowdowns very much like the hiatus. In fact, global climate model runs indicated that internal variability in ocean temperatures and heat uptake can mask long-term man-made warming for periods lasting more than a decade. As discussed above, that seems to be what caused the divergence between modeled and observed tropical temperature trends. In addition, updates and corrections to surface temperature records later made it clear that warming had actually continued more or less unabated, largely unnoticed.

In fact, research by University of Exeter climate data scientist Femke Nijsse and her colleagues, published in Nature Climate Change in July 2019, counter-intuitively finds that “high-sensitivity climates, as well as having a higher chance of rapid decadal warming, are also more likely to have had historical ‘hiatus’ periods than lower-sensitivity climates.” By high sensitivity, Nijsse means that average global temperature could potentially increase by +0.7 C in just one decade. If she is right, the early 21st century hiatus could be the cooler calm before the warming storm.

In any case, the hiatus came to an end when a super El Niño event in the Pacific Ocean substantially boosted global temperatures, making 2016 the hottest year since reasonably accurate instrumental records began in the 19th century. Even in the lower-trending UAH dataset, 2016 edged out 1998 by +0.02 C to become the warmest year in that 38-year satellite record. Christy did observe that “because the margin of error is about 0.10 C, this would technically be a statistical tie, with a higher probability that 2016 was warmer than 1998.”

During the hiatus period, Christy argued that the climate models were clearly wrong because they projected warming in the bulk atmosphere at about twice the rate reported by satellite and balloon observations. However, the 2016 El Niño event pushed the model projections and observed temperature trends more or less into alignment. In November, University of Guelph economist and frequent Christy collaborator Ross McKitrick asserted, “The El Nino disguised the model-observational discrepancy for a few years, but it’s coming back.” McKitrick evidently expects that, as the effects of the last El Niño ebb, it will become undeniable by around 2030 that the models are projecting much too much warming.

On the other hand, in an April 2019 International Journal of Climatology article, a team of Chinese atmospheric scientists tried to figure out how the long-term warming trend affected both the 1998 and 2015/2016 super El Niños and what that suggests about future warming. Using five different surface datasets, they calculate that the 1998 El Niño event added +0.18 C to the long-term warming trend, whereas the 2016 event added just +0.06 C. In other words, it took a lot less heat to boost the 2015/2016 El Niño slightly above the level of the 1998 El Niño. They report that their analysis “implies that warmer years like 2014–2016 may occur more frequently in the near future. We conclude that the so-called warming hiatus has faded away.” If these researchers are right, future El Niños may well temporarily boost global temperature trends above the model projections. In that case, McKitrick’s expectation that model results and observed trends will again significantly diverge over the coming decade is likely to be disappointed.

The record warmth of 2016 has so far not been exceeded, but surface temperature records show that nine of the 10 warmest years have occurred since 2005, with the last five years being the five hottest.

Rising Seas 

One possible consequence of man-made global warming is that the melting of glaciers and the Greenland and Antarctic ice sheets will boost sea level and inundate coastal cities. It is generally agreed that the oceans have risen by an average of about 7 to 8 inches over the past century. In November 2018, former Georgia Tech climatologist Judith Curry issued a special report, Sea Level and Climate Change. Curry concluded that recent changes in sea level are within the range of natural variability over the past several thousand years and that there is not yet any convincing evidence of sea-level rise associated with human-caused global warming.

The IPCC’s AR5 report suggested that average sea level rose by 7.5 inches between 1901 and 2010. The IPCC also reported that sea level very likely rose at a rate of about 1.7 millimeters (0.07 inch) per year between 1901 and 2010, but that the rate had accelerated to 3.2 millimeters (0.13 inch) per year between 1993 and 2010. If that rate were to hold steady, sea level would rise by about another 10 inches by 2100. In fact, that is the IPCC’s low-end estimate, while its high-end projection is nearly 39 inches, depending on how much extra carbon dioxide is emitted into the atmosphere during the rest of this century.

A February 2018 study in the Proceedings of the National Academy of Sciences based on satellite altimeter data reported that sea-level rise, currently about 3 millimeters per year, has been accelerating at a rate of 0.084 millimeters per year per year (about half the thickness of a penny) since 1993. If sea level continues to change at this rate and acceleration, the researchers estimate that average sea-level rise by 2100 will be closer to 24 inches than 10 inches.
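As a rough illustration of how those two projections are reached, the sketch below runs a constant 3.2 millimeters per year out to 2100 and, separately, a 3 millimeters per year rise that keeps accelerating at 0.084 millimeters per year per year. The start years and the constant-acceleration formula are simplifying assumptions of mine, not the studies’ actual methods, but the results land near 10 inches and roughly two feet, in line with the figures discussed above.

```python
# Rough reconstruction of the two sea-level projections discussed above.
# The constant-rate case runs the IPCC's 3.2 mm per year from roughly 2020 to 2100;
# the accelerating case runs ~3 mm per year plus 0.084 mm per year per year from
# roughly 2005, the midpoint of the satellite altimeter record. Start years and the
# constant-acceleration formula are my simplifying assumptions.

MM_PER_INCH = 25.4

def rise_mm(rate_mm_per_yr, years, accel_mm_per_yr2=0.0):
    """Total rise after `years` given an initial rate and a constant acceleration."""
    return rate_mm_per_yr * years + 0.5 * accel_mm_per_yr2 * years ** 2

constant_case = rise_mm(3.2, 2100 - 2020)             # ~256 mm
accelerating_case = rise_mm(3.0, 2100 - 2005, 0.084)  # ~664 mm

print(f"Constant 3.2 mm/yr:                  ~{constant_case / MM_PER_INCH:.0f} inches by 2100")
print(f"3 mm/yr accelerating at 0.084/yr^2:  ~{accelerating_case / MM_PER_INCH:.0f} inches by 2100")
```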

Curry counters, however, that the calibrations applied to the satellite altimeter data are far larger than the resulting changes in global mean sea level reported in that study. Be that as it may, another study, published in Nature Climate Change in August 2019, found “persistent acceleration in global sea-level rise since the 1960s.” The new study reports that sea-level rise has been accelerating at a rate of 0.06 millimeters per year since the 1960s, bolstering the earlier finding that the rise in sea level is accelerating. All things considered, Curry concludes that “values exceeding 2 feet are increasingly weakly justified.” In other words, Curry also accepts that sea level could possibly rise about three times more than it did over the last century.

All Models Are Wrong 

So even though the models appear essentially OK with respect to their tropical tropospheric projections once actual sea surface temperatures are fed into them, do their mistaken Pacific Ocean surface temperature projections invalidate them? The Science and Environmental Policy Project, headed by climate change skeptic Kenneth Haapala, “questions the use of models for public policy unless the models have been appropriately verified and validated. No matter how elaborate, the results from numerical models that are not thoroughly tested against hard evidence are speculative and cannot be relied upon.” So what would count as validating climate models?

One commonplace notion is that scientific validation is achieved only when researchers develop a hypothesis and then design experiments to test it. If the experimental data contradict the hypothesis, it is rejected (or at least reformulated). Climate science however is an observational, not an experimental, science. In a sense, climate models are gigantic hypotheses, but the empirical data with which to check their predictions lies in the future. 

Swiss Federal Institute of Technology environmental philosopher Christoph Baumberger and his colleagues address in their 2017 WIREs Climate Change article the issue of building confidence in climate model projections. They note that the most common way to evaluate climate models is to assess their empirical accuracy (how well model results fit past observations), robustness (how well they match the outputs of other models), and coherence with background knowledge (the support of model equations by basic theories). Nevertheless, they acknowledge that these three assessment criteria “neither individually nor collectively constitute sufficient conditions in a strict logical sense for a model’s adequacy for long-term projections.” 

With respect to the adequacy of climate models (or of any other models for that matter), keep firmly in mind British statistician George Box’s aphorism, “All models are wrong, but some are useful.” Climate models certainly serve the heuristic function of helping climate researchers to better understand over time the feedback effects of the mind-bogglingly complicated interconnections between the atmosphere, the oceans, and the land. But how have they done with global warming projections? 

Fairly well, it turns out, according to a forthcoming evaluation by climate data scientist Zeke Hausfather and his colleagues of the projected warming trends in 17 different historical climate models published between 1970 and 2007. In their analysis, the researchers also took into account mismatches between the carbon dioxide emissions the modelers assumed and actual emissions, along with other factors (such as the effects of volcanic eruptions) they could not anticipate, in order to assess the performance of the models’ physics. The result was that 14 of the 17 model forecasts were consistent with the trends found in five different observational surface temperature time series.

Many critics have pointed to this mismatch between the emissions trajectories the modelers assumed and what actually happened. Actual economic growth patterns during the past decades strongly suggest that future emissions will more closely track those projected by the more moderate IPCC scenarios and that the specific high-emissions scenario is exceedingly implausible. In that scenario, the decades-long improvements in energy efficiency and carbon intensity (the amount of carbon dioxide emitted per dollar of GDP) stall, and the global energy system improbably re-carbonizes rapidly as it burns ever more coal, natural gas, and oil. Unfortunately, in many climate science studies and in popular reporting, outputs based on the high-emissions scenario have often been treated as plausible business-as-usual projections instead of dubious worst cases. One hopes that more credible socioeconomic and emissions scenarios will be developed as inputs for the next round of climate modeling to be used for the IPCC’s upcoming Sixth Assessment Report.

In July, a so-far non-peer-reviewed study by several young climate researchers at MIT reported similar results when it assessed the projections of the 15 climate models used in the IPCC’s Second Assessment Report (SAR) back in 1995. Their study aims to “probe the relationship between model hindcast skill and model forecast skill.” In other words, do models that get past trends right also tend to get future trends right?

In order to figure out how well the SAR models projected “future” temperature trends, the MIT researchers compare the model projections made back in 1995 to the observed global warming between 1990 and 2018. They find that the multi-model mean “accurately reproduces the observed global-mean warming over a 1920-1990 hindcast period and accurately projects the observed global-mean warming over the 1990-2018 nowcast period.” On that basis they boldly conclude, “Climate change mitigation has now been delayed long enough for the first projections of anthropogenic global warming to be borne out in observations, dismissing claims that models are too inaccurate to be useful and reinforcing calls for climate action.” They do dryly observe that whether increasingly complicated modern models will prove to be more accurate “is yet to be determined.” As we shall see below, this may be a live issue with the set of models being used for the IPCC’s Sixth Assessment Report in 2021.

While past performance is no guarantee of future results, at least with respect to projecting global average temperature trends, these historical climate models appear to have met the confidence building tests of empirical accuracy, robustness, and background knowledge coherence. In other words, they have proven useful. 

The Cloud Wildcard

About 30 percent of incoming sunlight is reflected back into space, with bright clouds responsible for somewhere around two-thirds of that albedo effect. In other words, clouds generally tend to cool the earth. However, high, thin cirrus clouds don’t reflect much sunlight, but they do slow the emission of heat back into space, so they tend to warm the planet. In the current climate, clouds reflect more sunlight than they absorb and re-emit downward toward the surface, so on balance the earth is cooler with clouds than it would be without them.
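To get a sense of why planetary albedo matters so much, here is a toy zero-dimensional energy-balance calculation (my own illustration, not taken from any of the studies cited in this section). It computes the effective temperature the planet needs in order to radiate away the sunlight it absorbs and shows that brightening the planet by a single percentage point of albedo cools that effective temperature by roughly 1 degree.

```python
# Toy zero-dimensional energy-balance calculation (illustrative only):
# the effective emission temperature needed to balance absorbed sunlight,
# T = [S * (1 - albedo) / (4 * sigma)] ** 0.25.

SOLAR_CONSTANT = 1361.0  # W/m^2, incoming sunlight at Earth's distance
SIGMA = 5.670e-8         # W/m^2/K^4, Stefan-Boltzmann constant

def effective_temperature(albedo):
    absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0  # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

baseline = effective_temperature(0.30)  # ~255 K with today's ~30 percent albedo
brighter = effective_temperature(0.31)  # one percentage point more reflective

print(f"Effective temperature at albedo 0.30: {baseline:.1f} K")
print(f"Effective temperature at albedo 0.31: {brighter:.1f} K")
print(f"Cooling from +0.01 albedo:            {baseline - brighter:.2f} K")
```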

In his 2018 lecture “The Role of Clouds in Climate,” NASA Goddard Institute for Space Science atmospheric scientist Anthony Del Genio notes, “It has often simplistically been assumed that clouds will offset greenhouse gas-induced climate change, based on the logic that warming evaporates more water from the ocean, which causes more clouds to form, which increases the albedo, which offsets the warming.” However, most computer climate models project total cloud climate feedbacks ranging from near-neutral to strongly positive. What’s going on?

How clouds will react to warming is one of the largest feedback uncertainties with respect to future climate change. The processes that form clouds occur below the spatial resolution of climate models, so researchers estimate how much sunlight clouds reflect and how much they absorb and then feed those values into the models. The balance between cloud reflection and absorption matters a lot.

For example, researchers at the Pacific Northwest National Laboratory noted in a 2004 Journal of Applied Meteorology article that “a 4% increase in the area of the globe covered by marine stratocumulus clouds would offset the predicted 2–3 [degree C] rise in global temperature due to a doubling of atmospheric carbon dioxide.” Marine stratocumulus clouds commonly form over cold ocean waters off the west coasts of continents. They are generally thin, low clouds, and they cover more of the Earth’s surface than any other cloud type, making them extremely important for Earth’s energy balance, primarily through their reflection of solar radiation.

On the other hand, wispy cirrus clouds that occur up to 20 kilometers above the surface let sunlight through but absorb outgoing infrared and radiate some of it back downward, warming the surface. In 2001, MIT climatologist Richard Lindzen and his colleagues pointed to evidence in a Bulletin of the American Meteorological Society article that cirrus clouds over the tropics tended to dissipate as temperatures increased. Such a process would serve as a negative feedback that, according to Lindzen and his colleagues, “would more than cancel all the positive feedbacks in the more sensitive current climate models.” They likened this process to “an adaptive infrared iris that opens and closes in order to control the Outgoing Longwave Radiation in response to changes in surface temperature in a manner similar to the way in which an eye’s iris opens and closes in response to changing light levels.”

Newer research, however, suggests that rising temperatures will tend to dissipate low marine stratocumulus clouds, which would generate a positive feedback that increases warming. In addition, changes in where clouds are located have big feedback effects. Climate models predict, and preliminary satellite data find, that mid-latitude storm tracks (and their clouds) are retreating poleward, subtropical dry zones (deserts) are expanding, and the highest cloud tops are rising. All three processes tend to increase global warming. “The primary drivers of these cloud changes appear to be increasing greenhouse gas concentrations and a recovery from volcanic radiative cooling,” conclude Scripps Institution of Oceanography climatologist Joel Norris and his colleagues. “These results indicate that the cloud changes most consistently predicted by global climate models are currently occurring in nature.”

Two different groups have lately revisited Lindzen’s iris effect. One team reported in 2017 that increased sea surface temperatures boosted precipitation over the tropics. This, in turn, tended to reduce cirrus cloud cover, allowing more infrared to escape into space, which resulted in cooling. More recently, another group analyzing trends in the western Pacific in 2019 found that increasing sea surface temperatures tended to increase the amount of cirrus cloud cover slightly, generating a positive warming feedback.

Even though the details of how changes in clouds will affect future climate are still unsettled, Del Genio argues, “It is implausible that clouds could substantially offset greenhouse warming at this point in history.” Why? “There is just no plausible physical mechanism that we can point to that would do that, nor is there any evidence in data that such a mechanism exists, nor is there any way one can possibly explain the observed warming of the past 60-70 years if that is the case,” he explains.

More worryingly, recent climate model research suggests that high atmospheric concentrations of carbon dioxide (1,200 parts per million) could yield a tipping point at which the cooling stratocumulus clouds largely dissipate. Such a break-up of low-level clouds would trigger a surface warming of about 8 C globally and 10 C in the subtropics. This scenario was bolstered by a September 2019 study in Science Advances seeking to simulate the climate of the Paleocene-Eocene Thermal Maximum (PETM) some 56 million years ago. Geological evidence indicates that during the PETM carbon dioxide levels were around 1,000 parts per million and that the Earth’s surface was then at least 14 degrees Celsius warmer on average than it is now. The poles were ice-free. The research suggests that increases in carbon dioxide during the PETM produced a feedback process that greatly reduced low-level clouds, which in turn substantially boosted surface temperatures further.

Ultimately Del Genio observes, “We think that clouds are likely to be a positive feedback, but we are not yet sure whether they are a small or large positive feedback. They could even be neutral. Many of the most recent climate models are predicting a fairly large cloud feedback (our GISS model is not one of them), but the jury is out on whether that is a reasonable result or not.” 

The Most Important Number 

Scientific American in 2015 called equilibrium climate sensitivity “the most important number in climate change.” Equilibrium climate sensitivity (ECS) is conventionally defined as the increase in Earth’s average surface temperature that would occur if the carbon dioxide concentration in the atmosphere were doubled and the climate system were given enough time to reach a new equilibrium. In 1979, the Charney Report from the U.S. National Academy of Sciences first estimated that ECS was likely somewhere between 1.5 C and 4.5 C per doubling of CO2. The Intergovernmental Panel on Climate Change’s Fifth Assessment Report (AR5), published in 2013, concluded that ECS is likely to be 1.5 C to 4.5 C. That is, nearly four decades later, the estimated range is largely the same.

Since the Charney report, climate researchers have published more than 150 estimates of ECS. Although the AR5 report did not offer a best estimate for ECS, the average for the models used in that report is 3.2 C. In 2018, statistician Nicholas Lewis and climatologist Judith Curry published in the Journal of Climate a median ECS estimate of 1.66 C, with a range of 1.15 to 2.7 C. That is well below the IPCC’s range and about half of the model average.

However, Texas A&M climate scientist Andrew Dessler and his colleagues, also in 2018, estimated in the Journal of Geophysical Research: Atmospheres that the median ECS was 3.3 C and likely ranged between 2.4 and 4.6 C. They added, “We see no evidence to support low ECS (values less than 2C) suggested by other analyses. Our analysis provides no support for the bottom of the Intergovernmental Panel on Climate Change’s range.” Another group of researchers associated with MIT estimated in 2006 that the upper bound of ECS could be as high as 8.9 C. That figure is basically twice the temperature increase that ended the last ice age.

The ECS estimates in the lower range generally are derived from analyses of historical temperature observations. University of Reading climate modeler Jonathan Gregory and his colleagues published a study in October arguing that the historical temperature data on which those estimates are based may be skewed downward by, among other things, an anomalously cool period driven by internal climate variability, along with the additional cooling effects of industrial aerosol pollutants and volcanic eruptions. However, Lewis recently countered that Gregory and his colleagues used flawed statistical methods to obtain their results. Time will tell how this shakes out.

There is great socioeconomic value in pinning down ECS. The larger that ECS is, the faster temperatures will increase and the higher they will go. The upshot is that the higher that ECS is, the worse the effects of climate change are liable to be. Conversely, the smaller ECS is, the slower that temperatures will rise and the lower they will go. A smaller ECS would mean that humanity has more time to address and adapt to future climate change. It is worth noting that the ECS values used in the historical models evaluated by Hausfather and his colleagues fit within the IPCC’s AR5 range. 

Researchers relying on three strands of evidence, namely improved understanding of climate feedbacks, the historical climate record, and the paleoclimate record, find that together they point toward a narrower span of plausible ECS values. These analyses are converging on a likely ECS of between 2.2 and 3.4 C and further indicate a very likely range of 2 C to 4 C. If this research proves out, that is good news, since it would strongly imply that the higher and much more catastrophic ECS projections are improbable.

But hold on: some preliminary ECS estimates from the set of 42 next-generation climate models that the IPCC will reference in its 2021 Sixth Assessment Report (AR6) are considerably more worrisome. Currently, several of those models are reporting an ECS of 5 degrees Celsius or higher. The researchers, who are not at all sure why their models are producing these results, are probing further to see whether the high estimates will stand after deeper scrutiny.

Over at RealClimate, NASA Goddard Institute for Space Studies director Gavin Schmidt urges caution before accepting these preliminary model results with respect to ECS. “Why might these numbers be wrong?” he asks. “Well, the independent constraints from the historical changes since the 19th C, or from paleo-climate or from emergent constraints in [earlier climate] models collectively suggest lower numbers (classically 2 to 4.5ºC) and new assessments of these constraints are likely to confirm it.”

In fact, as noted above, the latest assessments of ECS based on historical, paleoclimate, and feedback data have narrowed the range of estimates to considerably below these new model outputs. “For all these constraints to be wrong, a lot of things have to fall out just right (forcings at the LGM [last glacial maximum] would have to be wrong by a factor of two, asymmetries between cooling and warming might need to be larger than we think, pattern effects need to be very important etc.),” points out Schmidt. “That seems unlikely.”

Conclusions

Assuming that the new, much higher ECS estimates do happily turn out to be wrong, the earlier ECS estimates still suggest that humanity is unlikely to avoid substantial climate change if the atmospheric concentration of carbon dioxide doubles from the pre-industrial level of 280 ppm to 560 ppm. In recent years, carbon dioxide has been accumulating in the atmosphere at an annual rate of just under 3 parts per million (ppm), reaching 415 ppm this year. If that rate of increase continues, it will take about 50 years to reach 560 ppm.

So, what would the average global temperature be around 2070, when atmospheric carbon dioxide has doubled? This is where another quantity, transient climate response (TCR), becomes relevant. TCR is generally defined as the warming that has occurred at the point when atmospheric carbon dioxide, increasing at 1 percent per year, reaches double its initial concentration after about 70 years. The average TCR in the models cited in the IPCC’s AR5 report is 1.8 C. Not surprisingly, the lower the ECS, the lower the TCR. For example, Lewis and Curry calculated their median TCR as 1.2 C (range 0.9 to 1.7 C).

Considering that the planet has already warmed by about a degree Celsius as the atmospheric carbon dioxide concentration rose by 45 percent, the lower TCR estimates seem unlikely. Assuming that global warming proceeds at the NOAA rate of +0.17 C per decade, that adds up to an additional increase of around +0.85 C by 2070. Since average global temperature has risen by about 1 C since the late 19th century, an additional +0.85 C would more or less match the climate model TCR average of 1.8 degrees per doubling of carbon dioxide. Of course, the warming wouldn’t stop then.
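The back-of-the-envelope arithmetic in the preceding two paragraphs can be laid out explicitly. The sketch below uses only the figures cited above (415 ppm today, just under 3 ppm of growth per year, the NOAA trend of +0.17 C per decade, and roughly 1 C of warming to date).

```python
# Rough arithmetic behind the paragraphs above, using only the rates cited in the text.

current_co2_ppm = 415.0
doubling_ppm = 2 * 280.0        # 560 ppm, double the pre-industrial level
growth_ppm_per_year = 3.0       # "just under" 3 ppm per year in recent years

years_to_doubling = (doubling_ppm - current_co2_ppm) / growth_ppm_per_year

warming_rate_per_decade = 0.17  # NOAA surface trend, C per decade
additional_warming = warming_rate_per_decade * (years_to_doubling / 10.0)
warming_to_date = 1.0           # approximate observed warming since the late 19th century, C

print(f"Years until 560 ppm:        ~{years_to_doubling:.0f}")
print(f"Additional warming by then: ~{additional_warming:.2f} C")
print(f"Total warming at doubling:  ~{warming_to_date + additional_warming:.2f} C")
```

It lands at roughly 48 years to reach 560 ppm and a total warming at doubling of about 1.8 C, which is what the comparison with the model TCR average relies on.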

“Is it too late (to stop dangerous climate change)?” asks University of Cambridge climate researcher Mike Hulme in his October editorial introducing a special issue of the journal WIREs Climate Change devoted to the question. Given how long I have been reporting on climate change, I identify with his world-weary observation, “There is a long history of climate deadlines being set publicly by commentators, politicians and campaigners…and then of those deadlines passing with the threat unrealized.” 

Hulme pointedly notes that “deadline-ism” as embodied in the Green New Deal “does not do justice to what we know scientifically about climate change.” Climate change prediction science reports “a range of possible values for future global warming. It is as false scientifically to say that the climate future will be catastrophic as it is to say with certainty that it will be merely lukewarm.” He adds, “Neither is there a cliff edge to fall over in 2030 or at 1.5 degrees C of warming.” 

Continued economic growth and technological progress would surely help future generations to handle many—even most—of the problems caused by climate change. At the same time, the speed and severity at which the earth now appears to be warming makes the wait-and-see approach increasingly risky.

Will climate change be apocalyptic? Probably not, but the possibility is not zero. So just how lucky do you feel? Frankly, after reviewing recent scientific evidence, I’m not feeling nearly as lucky as I once did.


House Republicans Are Spreading ‘Fictional Narrative’ on Ukrainian Election Interference, Says Former Top White House Adviser

Fiona Hill, a former top White House expert on Russia, told congressional investigators today that allegations of Ukrainian election interference are not based in fact. By continuing to promote this theory, she argued, Republicans on the House Intelligence Committee are emboldening Russian aggression.

“Based on questions and statements I have heard, some of you on this committee appear to believe that Russia and its security services did not conduct a campaign against our country—and that perhaps, somehow, for some reason, Ukraine did,” Hill told the House Intelligence Committee. “This is a fictional narrative that has been perpetrated and propagated by the Russian security services themselves.”

Hill, who served under presidents George W. Bush and Barack Obama as well as Donald Trump, said that Russia has a vested interest in sowing discord within the U.S. and in placing scrutiny on Ukraine.

“As Republicans and Democrats have agreed for decades, Ukraine is a valued partner of the United States, and it plays an important role in our national security,” Hill testified. “And as I told this Committee last month, I refuse to be part of an effort to legitimize an alternate narrative that the Ukrainian government is a U.S. adversary, and that Ukraine—not Russia—attacked us in 2016.”

Trump is currently the subject of an impeachment inquiry, which is based partly on accusations that he temporarily withheld $400 million in security assistance from Ukraine in order to push its president, Volodymyr Zelenskiy, to publicly announce an investigation into the theory that Ukraine interfered in the 2016 election to help Democratic candidate Hillary Clinton. 

President Vladimir Putin’s objective, Hill declared, is to “delegitimize our entire presidency” by shrouding the U.S. democratic process and the rightly elected candidates in doubt and to “pit one side of our electorate against each other.”

David Holmes, a career diplomat, testified Thursday that the claims of Ukrainian election interference are part of a three-pronged approach by Russia: to “deflect from the allegations of Russian interference,” to “drive a wedge between the United States and Ukraine,” and to “degrade and erode support for Ukraine.”

Republicans have cited a Politico article from 2017 to support their claims of Ukrainian election interference, arguing that Trump did not push for the probe for partisan gain but because he wanted to curb corruption. 

The piece, penned by Kenneth Vogel and Dan Stern, elaborated on Ukrainian efforts to spread unflattering documents about Paul Manafort, Trump’s former campaign chairman. It also noted a Ukrainian official’s op-ed that criticized Trump’s position on Russia’s annexation of Crimea.

Marie Yovanovitch, the former ambassador to Ukraine, argued in her testimony last Friday that those were “isolated incidents” that do not compare with Russia’s methodical efforts. The same Politico piece makes an identical concession, saying that there is “little evidence of such a top-down effort by Ukraine.”

“There’s an effort to take a tweet here, and an op-ed there, and a newspaper story here, and somehow equate it with the systemic intervention that our intelligence agencies found that Russia perpetrated in 2016 through an extensive social media campaign and a hacking and dumping operation,” said committee Chairman Adam Schiff (D–Calif.). “The House Republican report is an outlier,” he said, one that contradicts the findings of the Senate’s bipartisan Intelligence Committee, the FBI, and the House Intelligence Committee.

“There were certainly individuals in many other countries who had harsh words for both of the candidates,” Hill replied. But what the Russians wanted to do, she said, was different, characterized by an attempt “to create just the kind of chaos that we have seen in our politics.” Allegations of Ukrainian interference, according to Hill, are just another means to that end, providing “more fodder than they can use against us in 2020.”


The Anybody-but-Warren Primary

Heading into tonight’s Democratic primary debate, two major developments were in play. The first was the Medicare for All financing plan advanced by Sen. Elizabeth Warren (D–Mass.), and her subsequent introduction of a transition plan that arguably amounted to a retreat. The second was the rise of South Bend, Indiana, Mayor Pete Buttigieg, especially in Iowa, where he is now a top-tier contender.

What many observers expected, as a result, was that Buttigieg would be the primary target of the evening. Yet as the debate opened, it was Warren who took fire from all sides.

In a post-debate piece for The New York Times, I argue that the attacks on Warren indicate an increasing anxiety among Democrats about both her and her overarching political philosophy, which tries to unite the party’s moderate and progressive wings by blending tax-the-rich populism with technocratic, or at least faux-technocratic, specificity.

Although Warren has risen to the top tier, her balancing act may topple in the end, as Democratic voters (and her primary rivals) go looking for someone, anyone, who can present a viable alternative.

Here’s how the Times piece starts:

Although the Democratic primary debate began with a series of questions about impeaching President Trump, allowing the candidates to take shots at the Republican rival they all hope to face in the general election, it swiftly transformed into a referendum on another politician who has increasingly presented a challenge to the Democratic Party: Senator Elizabeth Warren.

Over the course of the year, the Massachusetts senator has vaulted into the top tier of Democratic candidates, and for the last several months she has vied for front-runner status. Yet in recent weeks, her momentum has seemed to slow as Democratic voters become anxious that her campaign for “big, structural change” is too liberal, too radical and too risky to trust in a high-stakes election against Mr. Trump.

You can read my feature on how Warren has used dubious academic research to fuel political goals in Reason‘s October edition.
