Global monthly temperatures since 1850

Global monthly temperatures since 1850 -- instrumental, satellite and reanalysis


     (Refresh your browser for the latest plot)

This shows global monthly temperature anomalies since 1850 from nine sources spanning three different estimation approaches. The plotted values are temperature anomalies — the difference between a particular month’s temperature and the average for that month of the year across the reference interval, which I’ve adjusted here to a common 1881 – 1920 baseline for all the series (near to ‘pre-industrial’ conditions). The overlaid trend is a 4th order polynomial fit intended as a visual guide based on the data, but in no sense claimed to be representative of underlying processes.
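The re-baselining step amounts to subtracting each series’ own mean over the common interval. A minimal sketch in Python (the anomaly series itself is invented purely for illustration):

```python
import numpy as np

# Hypothetical monthly anomaly series (°C) on its provider's native baseline.
years = np.arange(1850, 2018, 1 / 12)
anoms = 0.00006 * (years - 1850) ** 2 - 0.3   # invented shape plus an offset

# Re-baseline: subtract the series' own 1881-1920 mean, so zero means
# "equal to the near-pre-industrial average" for every series plotted.
base = (years >= 1881) & (years < 1921)
rebased = anoms - anoms[base].mean()
```

Applying the same subtraction to every series puts them all on a common zero, regardless of the baseline each provider originally used.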

These are near-surface¹ air temperature estimates. The global values are area-weighted averages, meaning that the estimates attempt to give equal weight to each square kilometre of the earth’s surface when computing the global average. That is simple for some estimation approaches but challenging for others. The three main approaches to estimating global temperature represented in the plot are:
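For a regular latitude-longitude grid, area weighting reduces to weighting each cell by the cosine of its latitude, since cells shrink toward the poles. A sketch using an invented 5° grid with an artificial polar data hole:

```python
import numpy as np

# Hypothetical 5-degree grid of monthly anomalies (°C); NaN marks cells
# with no observations ("data holes").
lats = np.arange(-87.5, 90, 5.0)              # cell-centre latitudes
lons = np.arange(2.5, 360, 5.0)
grid = np.full((lats.size, lons.size), 0.5)
grid[-3:, :] = np.nan                         # e.g. an arctic data hole

# Cell area is proportional to cos(latitude), so weighting by cos(lat)
# gives every square kilometre equal influence on the global mean.
w = np.cos(np.radians(lats))[:, None] * np.ones_like(grid)

valid = ~np.isnan(grid)
global_mean = np.sum(grid[valid] * w[valid]) / np.sum(w[valid])
```

Dropping the NaN cells from the sums, as here, is exactly the “exclude the holes” choice that HadCRUT and JMA make; the alternative is to fill them first.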


1. Instrumental measurement

The obvious approach is to collect all the global weather station temperature records and average them. Unfortunately weather stations are nearly all on land, and land covers only about 30% of the earth’s surface, so more data are needed. Fortunately, seawater temperatures have been routinely recorded by ships at sea for well over a century, and that huge dataset provides a good estimate of near-surface air temperature for much of the other 70%. Buoy network measurements augment ship-based data in recent decades.

Ship-based sea surface temperature observation density (NOAA)

Current ARGO float oceanographic measurement network (Wikipedia)

The chief problems with the instrumental approach are of course with the 70%, not the 30% (despite denialist noise about land-based weather station data quality). They have to do with things like the method of seawater sampling (engine intake water vs bucket over the side vs special insulated buckets) and the means of correcting for those. The biggest issue lies with the sea ice covered areas in the arctic and antarctic, where the air is insulated from the ocean by the floating ice, so seawater temperature is pretty much useless; in any case, water temperature measurements there are sparse. Much attention has focused on the resulting arctic “data hole”, because that area is known to have been warming very rapidly. The instrumental series differ mostly in how they treat gaps in the data (the arctic, plus parts of the antarctic and north Africa).

a) NASA GISTEMP

The oldest of the instrumental estimates by the NASA Goddard Institute for Space Studies uses latitude-longitude grid-based global averaging with relatively crude extrapolation to estimate temperatures for grid squares in the data holes.

Data from: http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

b) HadCRUT

The most famous of the global temperature series is the one by the Climatic Research Unit at the University of East Anglia in cooperation with the Hadley Centre of the UK Met Office. It, too, uses grid-based averaging but with slightly different raw data corrections. In HadCRUT, the data holes are simply excluded from the average; because that omits the rapidly warming arctic, its recent warming estimates run slightly lower than some other series.

Data from: http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT4-gl.dat

c) HadCRUT + Cowtan & Way

In 2014 a chemist from the UK and a geographer from Canada, Kevin Cowtan and Robert Way, published revisions to the HadCRUT series that use a sophisticated interpolation scheme to fill the data holes (called kriging, a procedure originally developed for mineral ore estimation). The plotted trace shows a combination of their “long kriging” series, which just interpolates the gaps, and their “hybrid UAH” series, which cunningly incorporates satellite remote sensing data into the post-1979 interpolation to further improve gap filling.

Data: http://www-users.york.ac.uk/~kdc3/papers/coverage2013/had4_krig_v2_0_0.txt
http://www-users.york.ac.uk/~kdc3/papers/coverage2013/had4_short_uah_v2_0_0.txt
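Kriging itself is not exotic: it is a best-linear-unbiased interpolator whose weights come from an assumed spatial covariance model. A toy one-dimensional sketch of ordinary kriging — the Gaussian covariance, length scale, and data are my own illustrative choices, not Cowtan & Way’s actual model:

```python
import numpy as np

def ordinary_krige(x_obs, y_obs, x_new, length=10.0):
    """Minimal ordinary kriging: interpolate y at x_new from observations,
    assuming a Gaussian covariance with the given length scale."""
    def cov(a, b):
        d = np.subtract.outer(a, b)
        return np.exp(-(d / length) ** 2)

    n = x_obs.size
    # Kriging system: observation covariances, plus a Lagrange-multiplier
    # row/column enforcing that the weights sum to 1 (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(x_obs, x_obs)
    A[n, n] = 0.0
    b = np.ones((n + 1, x_new.size))
    b[:n, :] = cov(x_obs, x_new)
    weights = np.linalg.solve(A, b)[:n, :]
    return weights.T @ y_obs

# Filling a gap between observed values:
x_obs = np.array([0.0, 5.0, 20.0, 25.0])
y_obs = np.array([1.0, 1.2, 2.0, 2.1])
filled = ordinary_krige(x_obs, y_obs, np.array([10.0, 15.0]))
```

At an observation point the interpolator reproduces the observed value exactly; between points it blends neighbours according to the covariance model, which is what makes it attractive for data holes where simple extrapolation is crude.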

d) JMA

The Japan Meteorological Agency also computes a global mean temperature estimate series using grid-based averaging. Like HadCRUT, the data holes are excluded from the average, with the effective coverage being even smaller (about 85%). They use their own sea surface temperature analysis (“COBE-SST”), which may be the source of their slightly spikier monthly estimates, particularly early in the record.

Data from: http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/map/download.html

e) NOAA NCDC

The National Climatic Data Center of the US National Oceanic and Atmospheric Administration also computes a global mean temperature estimate series using a grid-based averaging scheme.

Data from: https://www1.ncdc.noaa.gov/pub/data/noaaglobaltemp/operational/timeseries/aravg.mon.land_ocean.90S.90N.v4.0.1.201705.asc

f) Berkeley Earth

Unlike the others, Berkeley Earth uses a full kriging approach to interpolation, allowing them to incorporate more data from more sources into the global average. I expect the Berkeley Earth products to eventually become the default, go-to estimates of global temperature, but unfortunately they are infrequently updated at present.

Data from: http://berkeleyearth.lbl.gov/auto/Global/Land_and_Ocean_complete.txt

2. Reanalysis estimates

In meteorology, analysis is the procedure by which available weather data (mainly from weather stations) is assimilated to estimate the current synoptic situation — the “weather map”, among other things. Until recently (still, in many places²) that was done by the duty meteorologist drawing fat pencil lines on a big map of current weather station data.

In the numerical weather modelling era the definition and method have subtly changed. “Analysis” is still the term used for the best estimate of the current atmospheric state, but much information about it comes directly from the previous run of the operational weather model. The previous run provides the first estimate of the current analysis (the previous run’s prediction for the current time). That is then adjusted by an automated data assimilation procedure based on weather station and satellite data, the result being the analysis that is used as the starting conditions for the next run of the computer weather model³.
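In caricature, the assimilation update for a single variable is just an error-variance-weighted blend of the model’s first guess with an observation. All the numbers below are invented, and real systems update millions of variables with far more elaborate machinery:

```python
# One-variable sketch of the analysis step: the previous model run supplies
# a first guess, and an observation nudges it toward the measured value.
first_guess = 15.0        # °C, the model's forecast for "now" (invented)
obs = 14.2                # °C, a station measurement (invented)
var_model, var_obs = 1.0, 0.25   # assumed error variances

# Kalman-style gain: how much to trust the observation over the model.
gain = var_model / (var_model + var_obs)
analysis = first_guess + gain * (obs - first_guess)
```

Here the observation is assumed four times more reliable than the forecast, so the analysis lands most of the way toward it; the analysis then becomes the starting state for the next model run.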

Once this approach was well established, it became obvious that it could also be applied in hindsight across the historical weather data record, producing a so-called reanalysis. By running a weather model across the historical record and re-assimilating the data at a suitable short time interval throughout (typically daily) one could produce an internally consistent, computationally satisfactory estimate of the state of the atmosphere for every hour of every day of the record. That has now been done by many groups at varying levels of sophistication.

The complete reanalysis of course provides a complete, full-coverage estimate of global temperature. The question is how far it can be trusted for that purpose. Potentially the approach should be able to provide excellent estimates: for example, it tracks “real” weather systems across the data holes throughout the record and calculates temperatures for those systems at all times. The problem is that most reanalyses were not intended to provide high quality estimates of global temperature, and the degree to which the adopted data assimilation procedures provide such estimates is not always obvious.

I plot just one reanalysis product:

a) NCEP NCAR Reanalysis

The US National Centers for Environmental Prediction / National Center for Atmospheric Research reanalysis is one of the oldest and best known products. The data plotted is from version 1.

Data⁴: http://www.esrl.noaa.gov/psd/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Air+Temperature&level=2000&lat1=-90&lat2=90&lon1=0&lon2=360&iseas=0&mon1=0&mon2=0&iarea=1&typeout=1&Submit=Create+Timeseries


3. Satellite remote sensing

A range of satellites have carried instruments that can estimate atmospheric temperature by measuring the microwave radiation emitted by oxygen molecules in the air. While this is superficially attractive, it has some serious drawbacks in practice:

  • The record is short, extending back to 1979
  • Multiple satellites with different instruments have been used over the years, and cross-calibration can be challenging
  • Instruments and satellite orbits change over time, again challenging calibration
  • Surface coverage is not quite 100%. (There are small data holes at the poles — much smaller than for the instrumental data.)
  • There are issues with interpretation in some regions, particularly near surface ice

Most importantly, atmospheric microwave emission does not provide a measurement of air temperature at the earth’s surface. The best that can be obtained is an estimate over the lower troposphere, about the bottom 5000 m of atmosphere. That temperature is, of course, much lower than the surface air temperature (about 25°C lower), and varies rather differently with climatic influences. For example, lower troposphere temperatures rose more sharply during the strong 1998 El Niño than did surface temperature.

Satellite data is not available for the 1881 – 1920 reference interval used for the graph. Instead, I adjust the satellite temperature estimates approximately to that reference interval by offsetting their mean anomaly over the 30-year interval 1981 – 2010 to equal the average of the anomalies from the six instrumental series across that interval.
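The offsetting step can be sketched as follows; the satellite series and the instrumental-mean value here are invented placeholders:

```python
import numpy as np

# Hypothetical monthly satellite series on a common time axis.
years = np.arange(1979, 2018, 1 / 12)
sat = np.linspace(-0.2, 0.6, years.size)   # satellite anomalies, own baseline
instr_mean_8110 = 0.35                     # mean of the six instrumental
                                           # series over 1981-2010 (invented)

# Shift the satellite series so its 1981-2010 mean matches the
# instrumental mean over the same window.
window = (years >= 1981) & (years < 2011)
offset = instr_mean_8110 - sat[window].mean()
sat_adjusted = sat + offset
```

The adjustment is a single constant shift, so it aligns the satellite series with the others without altering its month-to-month shape.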

I plot the two best known satellite lower troposphere temperature anomaly estimate series:

a) RSS TLT

Remote Sensing Systems’ “temperature lower troposphere” (TLT) estimate has been relatively consistent historically, requiring few error corrections over the years.

Data from: http://data.remss.com/msu/monthly_time_series/RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v04_0.txt

b) UAH TLT

The University of Alabama in Huntsville, USA, provides another popular interpretation.

Data: http://www.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt


Future predictions

For comparison, I also plot the Intergovernmental Panel on Climate Change’s Fifth Assessment Report central estimates for 2050 for representative concentration pathways RCP6.0 (mid range) and RCP8.5 (high, business-as-usual).

Extrapolation of fitted functions can be dubious without a clear physical basis, but it is interesting that simple extrapolation of the fitted polynomial bisects the IPCC projections for 2050:

Global monthly temperatures extrapolated
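The extrapolation is nothing more than evaluating the fitted polynomial beyond the data. A sketch with an invented series (centring the time axis keeps the fit numerically well conditioned):

```python
import numpy as np

# Invented anomaly series standing in for the real combined data.
rng = np.random.default_rng(1)
years = np.arange(1850, 2018, 1 / 12)
t = years - 1930                        # centred time axis for stability
anoms = 0.00006 * (years - 1850) ** 2 + rng.normal(0, 0.1, years.size)

# Fit the 4th-order guide polynomial, then evaluate it at 2050 for
# comparison with the IPCC central projections.
coeffs = np.polyfit(t, anoms, 4)
projection_2050 = np.polyval(coeffs, 2050 - 1930)
```

As the text notes, this carries no physical weight: a polynomial fitted to the record can do anything at all outside it, which is why the comparison with the IPCC projections is a curiosity rather than evidence.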


Notes

  1. Not all are “surface” air temperatures in the meteorological sense, typically defined as the temperature at 1.2 – 2 m above ground level measured in a standard meteorological screen. The satellite series represent average lower troposphere temperature, centred on about 2000 – 3000 m elevation.
     
  2. Offices of the Australian Bureau of Meteorology still do routine manual analysis.
     
  3. Modern implementations are often more complex, assimilating different elements of data at different stages in the overall modelling sequence.
     
  4. This ESRL server is often off-line, or else extremely slow and unresponsive.


Source

The spreadsheet that produced this graph can be downloaded here: http://gergs.net/?attachment_id=1246

