Did Federal Climate Scientists Fudge Temperature Data to Make It Warmer?

“Right after the year 2000,” climate change
skeptic Tony Heller
claimed
last month, federal climate scientists “dramatically
altered US climate history, making the past much colder and the
present much warmer….This alteration turned a long term cooling
trend since 1930 into a warming trend.” Heller (nom de
blog
Steven Goddard) says that these adjustments “cooled
1934 and warmed 1998, to make 1998 the hottest year in US
history instead of 1934.”

Heller’s assertions induced a frenzy of commentary, attracting
the attention of The Drudge Report, the
Telegraph, The Daily Caller, and Fox News. A few
days later, the hullabaloo was further stoked by reports that
scientists at the National Climatic Data Center (NCDC) had quietly
reinstated July 1936
as the hottest month on record in the
continental U.S. instead of July 2012. (For the record, the
National Oceanic and Atmospheric Administration—the NCDC’s parent
agency—has declared
2012 the hottest year
on record for the lower 48 states, and
the months between August 2011 and July 2012 as the
hottest 12-month period
on record. The year 2012 was also the
warmest year
in the 36-year satellite temperature record.)

In response to the brouhaha, the NCDC press office sent out a
rather defensive statement noting that its new
U.S. temperature dataset
based on climate division adjustments
has, indeed, restored July 1936 to its hellish pinnacle. “We
recalculate the entire period of record to ensure the most
up-to-date information and to ensure proper comparison over time,”
said the press release (which, oddly, is not available online). “In
this improved analysis, July 1936 is now slightly warmer than July
2012, by a similar very small margin.” It added that this “did
not significantly change overall trends for the national
temperature time series” and that the “year 2012 is still easily
the warmest on record.”

But never mind the quibbling over which month in the past
century was the hottest. Is Heller right when he claims that NCDC
scientists are retrospectively fiddling with the national
thermostat to bolster the case for man-made global warming?

The answer is complicated.

When Heller produced his temperature trend for the continental
United States, he basically took the raw temperature data from the
U.S. Historical Climatology Network from 1895 to the present and
averaged them. He made no adjustments to the data to take into
account such confounders as changes in location, equipment, time of
observation, urban heat island effects, and so forth. Heller argues
that these changes more or less randomly cancel out to reveal the
real (and lower) trend in average U.S. temperatures.
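For concreteness, here is a minimal sketch of that kind of raw averaging. The station identifiers, readings, and layout below are invented for illustration; they are not the actual USHCN data format.

```python
# Sketch of a simple raw average: no homogenization, no infilling.
# All data here are made up; real USHCN files have their own format.
from collections import defaultdict

# (year, station_id) -> mean annual temperature in degrees C, as reported
raw_readings = {
    (1998, "A"): 12.1, (1998, "B"): 11.7, (1998, "C"): 12.4,
    (1999, "A"): 11.9, (1999, "B"): 11.5,           # station C missing in 1999
}

by_year = defaultdict(list)
for (year, station), temp in raw_readings.items():
    by_year[year].append(temp)

# Average whatever happens to be available each year.
raw_annual = {year: sum(v) / len(v) for year, v in sorted(by_year.items())}
print(raw_annual)
# Note: 1999 looks cooler partly because the warmest station (C) dropped out,
# not because the weather changed -- the kind of artifact adjustments target.
```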

In contrast, the researchers at the NCDC have spent years
combing through U.S. temperature data records trying to figure out
ways to adjust for confounders. In 2009, the NCDC researchers
detailed how they go about adjusting
the temperature data
from the 1,218 stations in the Historical
Climatology Network (HCN). They look for changes in the time of
observation, station moves, instrument changes, and changes in
conditions near the station sites (e.g., expanding cities). They
filter the data through various algorithms to detect such problems
as implausibly high or low temperatures or artifacts produced by
lazy observers who just keep marking down the same daily
temperatures for long periods.
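A rough sketch of two of the quality checks described above appears below: flagging physically implausible readings and long runs of identical values. The thresholds and run length are illustrative guesses, not the NCDC's actual parameters or code.

```python
def flag_suspect(daily_max_temps, low=-60.0, high=60.0, max_run=10):
    """Return indices of readings that look implausible or 'stuck'."""
    flags = set()
    run_start = 0
    for i, t in enumerate(daily_max_temps):
        if t < low or t > high:          # outside a plausible range (deg C)
            flags.add(i)
        if i > 0 and t != daily_max_temps[i - 1]:
            run_start = i                # value changed, start a new run
        if i - run_start + 1 > max_run:  # same value repeated too long
            flags.add(i)
    return sorted(flags)

# A 15-day stretch of identical readings plus one impossible spike:
print(flag_suspect([21.0] * 15 + [72.0, 22.5, 23.0]))
```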

They’ve clarified a lot this way. For example, simply shifting
from liquid-in-glass thermometers to electronic maximum-minimum temperature
systems
“led to an average drop in maximum temperatures of
about 0.4°C and to an average rise in minimum temperatures of
0.3°C.” In addition, observers switched their time of observation
from afternoon to morning. Both of these changes would tend to
artificially cool the U.S. temperature record.
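As a rough illustration of what correcting for such a changeover involves: the function, the series, and the single constant offset below are hypothetical. Real adjustments are estimated station by station, and the 0.4°C figure is only the average effect quoted above.

```python
# Illustrative only: splice a maximum-temperature series across a documented
# switch from a liquid-in-glass thermometer to an MMTS sensor. Here the
# post-switch readings are raised so they stay comparable with the older
# instrument; an actual homogenization may shift the older segment instead.

def adjust_for_instrument_change(monthly_max, change_index, offset=0.4):
    """Shift readings taken after the instrument switch by the estimated bias."""
    return [t + offset if i >= change_index else t
            for i, t in enumerate(monthly_max)]

series = [30.1, 30.3, 29.8, 29.4, 29.5]   # hypothetical monthly maxima (deg C)
print(adjust_for_instrument_change(series, change_index=3))
# The last two readings are shifted up by 0.4 to remove the step change.
```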

Urban areas are warmer than the countryside, so previously NCDC
researchers had to adjust temperature datasets to account for the
effects of urban growth around weather stations. The center’s 2009
study conceded that many HCN stations are not ideally
situated—that they now sit near parking lots, say, or building
HVAC exhausts. Such effects tend to boost recorded temperatures.
The researchers argue that they do not need to make any explicit
adjustments for such effects because their algorithms can identify
and correct for those errors in the temperature data.

Once all the calculating is done, the 2009 study concludes, the
new adjusted data suggests that the “trend in maximum temperature
is 0.064°C per decade, and the trend in minimum temperature is
0.075°C per decade” for the continental U.S. since 1895. The NCDC
folks never rest in their search for greater precision. This year
they recalculated the historical temperatures, this time by
adjusting data in each of the
344 climate divisions
into which the coterminous U.S. is
divvied up. They now report a temperature trend of 0.067°C per
decade.

The NCDC has also developed a procedure for infilling missing
station data by comparing temperatures reported from nearby
stations. Why? Because as many as 25 percent of the original
stations that comprised the HCN are no longer running. Essentially,
the researchers create a temperature trend for each missing station
by interpolating temperature data from nearby stations that are
still operating. Skeptics like Heller argue that the virtual
“zombie stations” that infill missing data have been biased to
report higher than actual temperatures.

Some sort of infilling procedure needs to be done. Let’s say
that there are records from five stations, all of which report time
series of 1, 2, 3, 4, and 5. The average of each therefore comes to
3. If two stations fail to report on the second day, missing
records of 2, then the average of their remaining four records is
now 3.25 instead of 3. In trying to address the problem of missing
data from closed stations, the NCDC folks average other stations to
fill in the absent 2s. According to climate change skeptic blogger
Brandon Shollenberger, what Heller does is the equivalent of
averaging the raw data from the notional five stations to report 3,
3, 3, 3.25, and 3.25. “He’d then accuse the people of fraud if they
said the right answer was 3, 3, 3, 3, 3,” Shollenberger writes.
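That five-station illustration is easy to run in code. This is a toy version of the two approaches, not anyone's production method; the infilling here simply borrows the other stations' value for the missing day.

```python
# Three complete stations plus two that lose their day-2 reading.
stations = [[1, 2, 3, 4, 5]] * 3 + [[1, None, 3, 4, 5]] * 2

def station_means_raw(stations):
    """Average only what each station actually reported (Heller-style)."""
    return [sum(x for x in s if x is not None) /
            sum(1 for x in s if x is not None) for s in stations]

def station_means_infilled(stations):
    """Fill each gap with the mean of the other stations for that day."""
    filled = []
    for i, s in enumerate(stations):
        row = []
        for day, x in enumerate(s):
            if x is None:
                others = [t[day] for j, t in enumerate(stations)
                          if j != i and t[day] is not None]
                x = sum(others) / len(others)
            row.append(x)
        filled.append(row)
    return [sum(s) / len(s) for s in filled]

print(station_means_raw(stations))       # [3.0, 3.0, 3.0, 3.25, 3.25]
print(station_means_infilled(stations))  # [3.0, 3.0, 3.0, 3.0, 3.0]
```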

Let’s assume that all of the NCDC’s adjustments are correct.
What do they reveal? The center’s 2009 study concluded, “Overall,
the collective effect of changes in observation practice in the
U.S. HCN stations is the same order of magnitude as the background
climate signal (e.g., artificial bias in maximum temperatures is
about -0.04°C per decade compared to the background trend of about
0.06°C per decade). Consequently, bias adjustments are essential in
reducing the uncertainty in climate trends.” In other words, the
asserted bias is almost as big as the asserted trend. Even with the
best intentions in the world, how can the NCDC be sure that it has
accurately sorted the climate signal from the data noise such that
it has in fact reduced the uncertainty in climate trends?

Well, for one thing, other scientists have found a similar
trend. Another group of researchers, at Berkeley Earth, uses a different
statistical method in which any significant changes to the
temperature record of any station are treated as though a new
station had been created. They use eight times more data than the
NCDC does. Via email, Berkeley Earth researcher Zeke Hausfather
notes that Berkeley Earth’s breakpoint method finds “U.S.
temperature records nearly identical to the NCDC
ones (and quite different from the raw data), despite using
different methodologies and many more station records with no
infilling or dropouts in recent years.” He is also
quite critical
of Heller’s simple averaging of raw data.
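The core of that breakpoint idea can be sketched simply: rather than correcting a station record at a suspected break, cut the record there and treat each piece as its own station. The function and the hand-picked break index below are illustrative; Berkeley Earth detects breaks statistically and then averages the segments with its own weighting.

```python
def split_at_breaks(values, break_indices):
    """Return the segments of a series, cut at the given break points."""
    segments, start = [], 0
    for b in sorted(break_indices):
        segments.append(values[start:b])
        start = b
    segments.append(values[start:])
    return segments

# A hypothetical record with an abrupt jump at index 3 (say, a station move):
record = [11.0, 11.1, 11.2, 12.0, 12.1, 12.2]
print(split_at_breaks(record, [3]))
# -> [[11.0, 11.1, 11.2], [12.0, 12.1, 12.2]]
```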

The NCDC also notes that all the changes to the record have gone
through peer review and have been published in reputable journals.
The skeptics, in turn, claim that a pro-warming confirmation bias
is widespread among orthodox climate scientists, tainting the peer
review process. Via email, Anthony Watts—proprietor of Watts Up With That, a
website popular with climate change skeptics—tells me that he does
not think that NCDC researchers are intentionally distorting the
record. But he believes that the researchers have likely succumbed
to this confirmation bias in their temperature analyses. In other
words, he thinks the NCDC’s scientists do not question the results
of their adjustment procedures because they report the trend the
researchers expect to find. Watts wants the center’s algorithms,
computer coding, temperature records, and so forth to be checked by
researchers outside the climate science establishment.

Clearly, replication by independent researchers would add
confidence to the NCDC results. In the meantime, if the Heller episode
proves nothing else, it proves that we can continue to expect
confirmation bias to pervade nearly every aspect of the climate
change debate.

