Global monthly temperatures since 1850 -- instrumental, satellite and reanalysis

     (Refresh your browser for the latest plot)

This shows global monthly temperature anomalies since 1850 from ten sources spanning three different estimation approaches. The plotted values are temperature anomalies: the difference between a particular month’s temperature and the average for that calendar month over a reference interval, typically 30 years. You might think a standard reference interval would be used by all, but no, these are human researchers, so of course they’re all over the place. For plotting I’ve adjusted the series to a common ‘pre-industrial’ baseline (roughly the late 19th century temperature, taken as 0.7 °C below the 1981-2010 average). The overlaid trend is a 4th order polynomial fit intended as a visual guide to the data; it is in no sense claimed to represent the underlying physical processes.
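For the curious, the trend overlay amounts to nothing more than this. The sketch below uses a made-up synthetic series standing in for the real monthly data:

```python
import numpy as np

# Illustrative only: a synthetic anomaly series standing in for the real
# monthly data (a gently accelerating warming plus noise).
years = np.arange(1850, 2020, 1 / 12) + 1 / 24          # monthly time axis
rng = np.random.default_rng(0)
anoms = 0.000045 * (years - 1850) ** 2 + rng.normal(0, 0.1, years.size)

# Fit and evaluate the 4th-order polynomial guide (a visual aid, not physics).
t = years - years.mean()            # centre the time axis for numerical stability
coeffs = np.polyfit(t, anoms, deg=4)
trend = np.polyval(coeffs, t)
print(f"fitted rise over the record: {trend[-1] - trend[0]:.2f} °C")
```

Centring the time axis before fitting avoids the poor conditioning that `polyfit` suffers when raw calendar years are raised to the fourth power.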

These are near-surface¹ air temperature estimates. The global values are area-weighted averages, meaning that the estimates attempt to give equal weight to each square kilometre of the earth’s surface when computing the global average. That is simple for some estimation approaches but challenging for others. The three main approaches to estimating global temperature represented in the plot are:
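As a sketch of what area weighting means in practice, here is the standard cos(latitude) weighting on a regular latitude-longitude grid (the grid resolution and temperatures are invented for illustration, not any agency’s real product):

```python
import numpy as np

# Assumed illustrative 5-degree grid with random temperatures.
lats = np.arange(-87.5, 90.0, 5.0)                    # grid cell centres
lons = np.arange(2.5, 360.0, 5.0)
temps = np.random.default_rng(1).normal(14.0, 5.0, (lats.size, lons.size))

# Each cell's weight is proportional to its surface area, i.e. cos(latitude),
# so a polar cell counts for far less than an equatorial one.
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((1, lons.size))
global_mean = np.average(temps, weights=weights)
print(f"area-weighted global mean: {global_mean:.2f} °C")
```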

1. Instrumental measurement

The obvious approach is to collect all the global weather station temperature records and average them. Unfortunately weather stations are nearly all on land, and land covers only about 30% of the earth’s surface, so more data is needed. Fortunately, seawater temperatures have been routinely recorded by ships at sea for well over a century, and that huge dataset provides a good estimate of near-surface air temperature for much of the other 70%. Buoy network measurements augment the ship-based data in recent decades.

Ship-based sea surface temperature observation density (NOAA)

Current ARGO float oceanographic measurement network (Wikipedia)

The chief problems with the instrumental approach are of course with the 70%, not the 30% (despite denialist noise about land-based weather station data quality). They have to do with things like the method of seawater sampling (engine intake water vs bucket over the side vs special insulated buckets) and the means of correcting for those. The biggest issue lies with the sea-ice-covered areas of the arctic and antarctic, where the air is insulated from the ocean by the floating ice, so seawater temperature is pretty much useless, and water temperature measurements are sparse there anyway. Much attention has focused on the resulting arctic “data hole”, because that area is known to have been warming very rapidly. The instrumental series differ mostly in how they treat gaps in the data (the arctic, plus parts of the antarctic and north Africa).


a) GISTEMP

The oldest of the instrumental estimates, GISTEMP by the NASA Goddard Institute for Space Studies, uses distance-weighted averaging within an equal-area grid, with relatively crude latitudinal interpolation to estimate temperatures across the data holes.

Data from:

b) HadCRUT

The most famous of the global temperature series is the one by the Climatic Research Unit at the University of East Anglia in cooperation with the Hadley Centre of the UK Met Office. It uses simple grid-based averaging but with different raw data correction. In HadCRUT, the data holes are just excluded from the average, resulting in slightly lower recent warming estimates than some other series because of exclusion of the rapidly warming arctic.
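The effect of simply dropping the data holes can be seen in a toy calculation (all numbers invented; the last cell stands in for a fast-warming arctic region):

```python
import numpy as np

# Five equal-area cells of anomaly data; the last is a fast-warming
# "arctic" cell that happens to have no observations.
anomaly = np.array([0.6, 0.6, 0.7, 0.8, 2.0])
covered = np.array([True, True, True, True, False])

full_mean = anomaly.mean()                 # what full coverage would give
hadcrut_style = anomaly[covered].mean()    # data hole simply excluded
print(full_mean, hadcrut_style)            # excluded-hole mean is lower
```

Excluding the hole is equivalent to assuming it warms at the average rate of the covered area, which understates the global trend when the hole is the fastest-warming region.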

Data from:

c) HadCRUT + Cowtan & Way

In 2014 a chemist from the UK and a geographer from Canada, Kevin Cowtan and Robert Way, published revisions to the HadCRUT series that use a sophisticated interpolation scheme to fill the data holes (called kriging, a procedure originally developed for mineral ore estimation). The plotted trace shows a combination of their “long kriging” series, which simply interpolates the gaps, and their “hybrid UAH” series, which cunningly incorporates satellite remote sensing data into the post-1979 interpolation to further improve gap filling.
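For a flavour of what kriging does, here is a minimal one-dimensional sketch: simple kriging with an assumed exponential covariance and made-up numbers. Real implementations work on the sphere and estimate the covariance structure from the data itself:

```python
import numpy as np

def cov(d, length=3.0):
    # Assumed exponential covariance: correlation decays with distance.
    return np.exp(-np.abs(d) / length)

x_obs = np.array([0.0, 2.0, 5.0, 9.0])    # observed locations
y_obs = np.array([0.5, 0.7, 0.4, 0.9])    # anomalies at those locations
x_new = np.array([3.5, 7.0])              # "data hole" locations to fill

K = cov(x_obs[:, None] - x_obs[None, :])  # obs-obs covariance matrix
k = cov(x_new[:, None] - x_obs[None, :])  # new-obs covariances
y_new = k @ np.linalg.solve(K, y_obs)     # kriging prediction at the holes
print(y_new)
```

Unlike simple distance weighting, the solve against `K` accounts for redundancy among clustered observations, which is why kriging handles irregular station networks well.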


d) JMA

The Japan Meteorological Agency also computes a global mean temperature estimate series using grid-based averaging. Like HadCRUT, the data holes are excluded from the average, with the effective coverage being even smaller (about 85%). They use their own sea surface temperature analysis (“COBE-SST“), which may be the source of their slightly spikier monthly estimates, particularly early in the record.

Data from:


e) NOAA

The National Centers for Environmental Information of the US National Oceanic and Atmospheric Administration also computes a global mean temperature estimate series, using a sophisticated grid-based averaging scheme.

Data from:

f) Berkeley Earth

Unlike the others, Berkeley Earth uses a full kriging approach to interpolation, allowing them to incorporate more data from more sources into the global average. I expect the Berkeley Earth products to eventually become the default, go-to estimates of global temperature.

Data from:

2. Reanalysis estimates

In meteorology, analysis is the procedure by which available weather data (mainly from weather stations) is assimilated to estimate the current synoptic situation — the “weather map”, among other things. Until recently (still, in many places²) that was done by the duty meteorologist drawing fat pencil lines on a big map of current weather station data.

In the numerical weather modelling era the definition and method have subtly changed. “Analysis” is still the term used for the best estimate of the current atmospheric state, but much information about it comes directly from the previous run of the operational weather model. The previous run provides the first estimate of the current analysis (the previous run’s prediction for the current time). That is then adjusted by an automated data assimilation procedure based on weather station and satellite data, the result being the analysis that is used as the starting conditions for the next run of the computer weather model³.
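The assimilation step can be illustrated with the simplest possible scalar case: blending a forecast and an observation, weighted by their assumed error variances. The numbers are invented; operational systems do this for millions of variables at once:

```python
# Toy one-variable analysis step. The previous model run supplies the
# first guess; a new observation corrects it, with the weight set by
# the assumed error variances of each.
forecast, forecast_var = 15.0, 1.0     # model first guess and its error variance
obs, obs_var = 16.0, 0.5               # observation and its error variance

gain = forecast_var / (forecast_var + obs_var)   # weight given to the observation
analysis = forecast + gain * (obs - forecast)
print(analysis)   # ~15.67: pulled toward the more certain observation
```

The resulting `analysis` would then serve as the starting conditions for the next model run, exactly as described above.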

Once this approach was well established, it became obvious that it could also be applied in hindsight across the historical weather data record, producing a so-called reanalysis. By running a weather model across the historical record and re-assimilating the data at a suitable short time interval throughout (typically daily) one could produce an internally consistent, computationally satisfactory estimate of the state of the atmosphere for every hour of every day of the record. That has now been done by many groups at varying levels of sophistication.

A complete reanalysis of course provides a complete, full-coverage estimate of global temperature; the question is how trustworthy that estimate is. Potentially the approach should provide excellent estimates. For example, it tracks “real” weather systems across the data holes throughout the record and calculates temperatures for those systems at all times. The problem is that most reanalyses were not intended to provide high quality estimates of global temperature, and the degree to which their data assimilation procedures provide such estimates is not always obvious.

I plot two reanalysis products:

a) NCEP NCAR Reanalysis

The US National Centers for Environmental Prediction / National Center for Atmospheric Research reanalysis is one of the oldest and best known products. The data plotted is from version 1.


b) Copernicus ERA-Interim

A product of ECMWF, the European Centre for Medium-Range Weather Forecasts, ERA-Interim is a fourth generation reanalysis product of substantially more sophistication than NCEP-NCAR v1. Unfortunately that means it requires satellite-era inputs, so it’s only available for 1979 onwards.

The ‘Copernicus ERA-Interim’ global temperature series based on it has attracted much attention of late as the first reanalysis-based series with convincing credentials. It’s arguably the best current estimate of the global atmospheric temperature field.


3. Satellite remote sensing

A range of satellites have carried instruments that can estimate atmospheric temperature by measuring the microwave radiation emitted by oxygen molecules in the air. While this is superficially attractive, it has some serious drawbacks in practice:

  • The record is short, extending back to 1979
  • Multiple satellites with different instruments have been used over the years, and cross-calibration can be challenging
  • Instruments and satellite orbits change over time, again challenging calibration
  • Surface coverage is not quite 100%. (There are small data holes at the poles — much smaller than for the instrumental data.)
  • There are issues with interpretation in some regions, particularly near surface ice

Most importantly, atmospheric microwave emission does not provide a measurement of air temperature at the earth’s surface. The best that can be obtained is an estimate over the lower troposphere, about the bottom 5000 m of atmosphere. That temperature is, of course, much lower than the surface air temperature (about 25°C lower), and varies rather differently with climatic influences. For example, lower troposphere temperatures rose more sharply during the strong 1998 El Niño than did surface temperature.

I plot the two best known satellite lower troposphere temperature anomaly estimate series:


a) RSS

Remote Sensing Systems’ “temperature lower troposphere” (TLT) estimate has been relatively consistent historically, with few errors and revisions over the years.

Data from:


b) UAH

The University of Alabama in Huntsville, USA, provides another popular interpretation.



Baselines and rebasing

These series use a variety of baselines to define temperature anomalies: 1981-2010 (Copernicus, UAH TLT), 1961-1990 (HadCRUT), 1951-1980 (GISTEMP), 1971-2000 (NOAA, JMA), etc. They have been rebased for plotting as follows:

  • If a series is not stated to be 1981-2010 based, it is rebased to that interval using the difference between the (generally 30-year) all-months mean across its baseline and its all-months mean across 1981-2010. (Rebasing is annual, not separately by calendar month, and is intrinsic: it does not depend at all on other series.)
  • All of the anomaly series are then rebased from 1981-2010 to ‘pre-industrial’ by adding a uniform 0.7 °C, following the plotting approach adopted by Copernicus, for which they cite support from Hawkins et al 2017 (5. below).
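The two rebasing steps amount to this, sketched on an invented annual series (the real plot works with monthly data):

```python
import numpy as np

# Invented annual-mean anomaly series, originally on some other baseline.
years = np.arange(1850, 2020)
rng = np.random.default_rng(2)
anoms = 0.008 * (years - 1935) + rng.normal(0, 0.1, years.size)

# Step 1: shift so the 1981-2010 mean is zero. This uses only the series
# itself (intrinsic), never the other series.
window = (years >= 1981) & (years <= 2010)
rebased = anoms - anoms[window].mean()

# Step 2: shift from 1981-2010-relative to 'pre-industrial' by adding
# the uniform 0.7 °C offset adopted from Copernicus.
preindustrial = rebased + 0.7
print(round(preindustrial[window].mean(), 3))   # 0.7 by construction
```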

Future predictions

For comparison, I also plot the Intergovernmental Panel on Climate Change’s Fifth Assessment Report central estimates for 2050 for representative concentration pathways RCP6.0 (mid range) and RCP8.5 (high, business-as-usual).

Extrapolation of fitted functions can be dubious without a clear physical basis, but it is interesting that simple extrapolation of the fitted polynomial bisects the IPCC projections for 2050:

Global monthly temperatures extrapolated

Notes and references

  1. Not all are “surface” air temperatures in the meteorological sense, typically defined as the temperature at 1.2 – 2 m above ground level measured in a standard meteorological screen. The satellite series represent average lower troposphere temperature, centred on about 2000 – 3000 m elevation.
  2. Offices of the Australian Bureau of Meteorology still do routine manual analysis.
  3. Modern implementations are often more complex, assimilating different elements of data at different stages in the overall modelling sequence.
  4. This ESRL server is often off-line, or else extremely slow and unresponsive.
  5. Hawkins, Ed, Pablo Ortega, Emma Suckling, Andrew Schurer, Gabi Hegerl, Phil Jones, Manoj Joshi et al. “Estimating changes in global temperature since the pre-industrial period.” Bulletin of the American Meteorological Society 2017.


The spreadsheet that produced this graph can be downloaded here:

Also see:

