1.5ºC: Geophysically impossible or not?

Guest commentary by Ben Sanderson

Millar et al’s recent paper in Nature Geoscience has provoked a lot of lively discussion, with the authors of the original paper releasing a statement to clarify that their paper did not suggest that “action to reduce greenhouse gas emissions is no longer urgent”, rather that 1.5ºC (above the pre-industrial) is not “geophysically impossible”.

Millar et al’s range of post-2014 allowable emissions for a 66% chance of not passing 1.5ºC, 200-240 GtC, implies that the planet would exceed the threshold after 2030 at current emission levels, whereas the AR5 analysis implies most likely exceedance before 2020. If the Millar numbers are correct, 1.5ºC changes from a fantasy to merely very difficult.
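As a sanity check on those dates, here is a back-of-envelope sketch (in Python), assuming emissions continue at a constant 10 GtC/yr – an assumed round figure close to recent Global Carbon Project estimates:

```python
# Exceedance dates implied by the Millar et al post-2014 budget, assuming
# a constant emission rate of ~10 GtC/yr (an assumption; close to recent
# Global Carbon Project estimates).
budget_low, budget_high = 200.0, 240.0   # GtC allowable after 2014
emissions_rate = 10.0                    # GtC per year, held constant

print(2014 + budget_low / emissions_rate)   # ~2034: earliest exceedance year
print(2014 + budget_high / emissions_rate)  # ~2038: latest exceedance year
```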

But is this statement overconfident? Last week’s post on RealClimate raised a couple of issues which imply that both the choice of observational dataset and the choice of pre-industrial baseline period can influence the conclusion of how much warming the Earth has experienced to date. Here, I consider three aspects of the analysis and assess how they influence the conclusions of the study.


Figure 1: (a) shows temperature change in the CMIP5 simulations relative to observed temperature products. Grey regions show model range under historical and RCP8.5 forcing relative to a 1900-1940 baseline. Right-hand axis shows temperatures relative to 1861-1880 (offset using HadCRUT4 temperature difference). (b) shows temperature change as a function of cumulative emissions. Black solid line shows the CMIP5 historical mean, and black dashed is the RCP8.5 projection. Colored lines represent regression reconstructions as in Otto (2015) using observational temperatures from HadCRUT4 and GISTEMP, with cumulative emissions from the Global Carbon Project. Colored points show individual years from observations.

The choice of temperature data

We can illustrate how these effects might influence the Millar analysis by repeating the calculation with alternative temperature data. Their approach requires an estimate of the forced global mean temperature in a given year (excluding any natural variability), which is derived from Otto et al (2015), who employ a regression approach to reconstruct global mean temperature as a function of anthropogenic and natural forcing agents. In Fig. 1(a), we apply the Otto approach to data from GISTEMP as well as the HadCRUT4 product used in the original paper – again using data up to 2014. Although the HadCRUT4 forced Otto-style reconstruction suggests 2014 temperatures were below the 25th percentile of the CMIP5 distribution, following the same procedure with GISTEMP yields a 2014 temperature of 1.08K – corresponding to the 58th percentile of the CMIP5 distribution.
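For readers who want the mechanics, here is a minimal sketch of an Otto-style regression, assuming annual arrays of observed temperature and forcing are already loaded (the names `gmst`, `f_anth` and `f_nat` are illustrative placeholders, not from the original code):

```python
import numpy as np

def otto_forced_estimate(gmst, f_anth, f_nat):
    """Regress observed global mean temperature on anthropogenic and
    natural forcing timeseries; the fitted values serve as the 'forced'
    temperature, with the residual treated as internal variability."""
    X = np.column_stack([f_anth, f_nat, np.ones_like(f_anth)])
    coeffs, _, _, _ = np.linalg.lstsq(X, gmst, rcond=None)
    return X @ coeffs  # forced temperature estimate, one value per year
```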

This calls into question the justification for changing the baseline for the cumulative emissions analysis: it quickly becomes apparent that the use of a different dataset can undermine the conclusion that present-day temperatures lie outside the model distribution. Fig. 1(b) shows that the anomaly between observations and the CMIP5 mean temperature response to cumulative emissions is halved by repeating the Millar analysis with the GISTEMP product instead of HadCRUT4.

The role of internal variability

There is also an important question of the degree to which internal variability can influence the attributable temperature change, given that the Millar result is contingent on knowing the forced temperature response of the system. We apply the approach to the CESM Large Ensemble, a 40-member initial-condition ensemble of historical and future climate simulations in which members differ only in their realization of natural variability. Although all members have identical forcing and model configuration, Fig. 2(a) shows that the estimated forced warming in 2014 derived from the Otto approach varies from 0.68-0.94K across the ensemble – a spread almost as large as that of the actual 2005-2014 decadal average temperature itself. Moreover, there is a strong correlation between the inferred forced warming in 2014 and global mean temperature in the preceding decade, suggesting that the Otto forced estimate can be strongly influenced by recent natural variability.
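The correlation map in Fig. 2(b), discussed below, can be computed along these lines – a sketch assuming `forced_2014` holds the 40 Otto-style estimates and `grid_temp` the matching 2005-2014 mean temperature fields (both names are illustrative):

```python
import numpy as np

def correlation_map(forced_2014, grid_temp):
    """Pearson correlation across ensemble members between the inferred
    forced warming (shape: (40,)) and the grid-point decadal-mean
    temperature (shape: (40, nlat, nlon))."""
    af = forced_2014 - forced_2014.mean()
    ag = grid_temp - grid_temp.mean(axis=0)
    cov = (af[:, None, None] * ag).mean(axis=0)
    return cov / (forced_2014.std() * grid_temp.std(axis=0))
```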


Figure 2: (a) Reconstructed global mean temperature in 2014 for the CESM Large Ensemble following the Otto (2015) regression methodology, plotted as a function of average global mean temperature in the years 2005-2014. (b) Correlation between mean grid-point temperatures in 2005-2014 and reconstructed global mean temperature in 2014; ellipses show the regions used for the Pacific Climate Index. (c) CESM Large Ensemble reconstructed global mean temperature in 2014 as a function of the Pacific Climate Index. Vertical lines show index values for observations in the period 2005-2014 (solid) and the historical period (dashed).
In order to assess how this potential bias might have been manifested in the historical record, we construct an index of this pattern using regions of strong positive and negative correlation with the inferred forced warming. Fig. 2(b) shows the correlation between the inferred forced 2014 temperature and 2005-2014 temperatures, a pattern reminiscent of the Interdecadal Pacific Oscillation, a leading mode of unforced variability. The warming estimate is positively correlated with central Pacific temperatures and negatively correlated with South Pacific temperatures. An index of the difference between these regions is shown in Fig. 2(c) for observations and models. Both HadCRUT4 and GISTEMP suggest strongly negative index values for the period 2005-2014, implying a potential cold bias in the warming estimate due to natural variability of 0.1˚C (with 5-95% values of 0.05-0.15˚C).

In short, irrespective of which observational dataset is used, it’s likely that an estimate of forced response made in 2014 would be biased cold, which on its own would translate to an overestimate of the available budget of about 40 GtC.
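In cumulative-emissions terms, that conversion is a division by the transient climate response to cumulative emissions (TCRE). A sketch, assuming a TCRE of 2.5ºC per 1000 GtC – an assumption at the upper end of the AR5 likely range, chosen here so the arithmetic reproduces the ~40 GtC figure:

```python
# Convert a cold bias in the forced-warming estimate into budget terms,
# assuming TCRE = 2.5 degC per 1000 GtC (an assumed value; the upper end
# of the AR5 likely range, picked to match the ~40 GtC quoted above).
tcre = 2.5 / 1000.0       # degC of warming per GtC emitted
cold_bias = 0.10          # degC, central estimate from the index analysis

print(cold_bias / tcre)   # ~40 GtC of spurious budget from the bias alone
```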

The low CMIP5 compatible emissions

Millar’s paper also points out that the discrepancy between the CMIP5 ensemble and the observations arises not only from temperature, but also because cumulative emissions were greater in the real world than in the mean CMIP5 model in 2014. But this only translates into a justifiable increase in the emissions budget if the real world is demonstrably off the CMIP5 cumulative emissions/temperature line. By some estimates, cumulative emissions in 2014 might be higher than in the models simply because emissions were consistently above the RCP range between 2005 and 2014. In other words, by 2014 we had used more of the carbon budget than any of the RCPs had anticipated; if we are not confident that the real world is cooler than the models at this level of cumulative emissions, the available emissions for 1.5 degrees should decrease proportionately.

A key point to note is that, by resetting the cumulative emissions baseline, the Millar et al available emissions budget is insensitive to the actual cumulative emissions to date. The forced temperature estimate is used as a proxy for what cumulative emissions should be given the current level of warming. This is only justified if we are confident that we know the current forced temperature more accurately than we know the current cumulative emissions. However, the combined evidence of the influence of natural variability on the forced temperature estimate, the disagreement between observational datasets on the level of warming, and the uncertainty introduced by an uncertain pre-industrial baseline means that we can’t be as confident as the Millar paper suggests about what the current level of warming is – and the balance of evidence suggests that the Otto warming estimate may be biased cold. If this is right, the Millar available cumulative emissions budget would be biased high.

So, is it appropriate to say that 1.5ºC is geophysically possible? Perhaps plausible would be a better word. Depending on which temperature dataset we choose, the threshold exceedance budget (TEB) for 1.5 degrees may already be exceeded. Although it would certainly be useful to know what the underlying climate attractor of the Earth system is, any estimate we produce is subject to error.

We ultimately face a question of what we trust more: our estimate of cumulative emissions to date combined with our knowledge of how much warming that might imply, or an estimate of how warm the system was in 2014 which is subject to error from observational uncertainty and natural variability. Changing the baseline for warming and cumulative emissions is effectively a bias correction – a statement that the models have simulated the past sufficiently poorly that they warrant correction, one which allows emissions to date to be swept under the carpet. Alternatively, we can trust the cumulative emissions number and treat the models as full proxies for reality, as was done in AR5 – which would tell us that emissions to date have already brought us to the brink of exceeding the 1.5 degree threshold.


Methods

For Figure 1, global mean temperatures are plotted from the HadCRUT4 and GISTEMP products relative to a 1900-1940 baseline, together with global mean temperatures from 81 available simulations in the CMIP5 archive (also relative to the 1900-1940 baseline), where all available ensemble members are taken for each model. In each year from 1900-2016, the 5th, 25th, 75th and 95th percentiles of the CMIP5 distribution are plotted. In Figure 1(b), the CMIP5 cumulative emissions and temperature data used are identical to those in AR5 for the historical and RCP8.5 trajectories. Observational data are from the GISTEMP and HadCRUT4 global mean products, and annual cumulative emissions data are replicated from the Global Carbon Project. Regression analyses are performed as in Otto (2015), using natural and anthropogenic forcing timeseries (historical and the RCP8.5 scenario), with a regression constructed using data from 1850-2016 (for HadCRUT4) and from 1880-2016 (for GISTEMP).
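A sketch of the percentile bands in Figure 1(a), assuming a `cmip5` array of shape (simulations, years) covering 1900-2016 has already been assembled from the archive (the variable name is illustrative):

```python
import numpy as np

years = np.arange(1900, 2017)
# Baseline each simulation to its own 1900-1940 mean, then take the
# across-ensemble percentiles plotted as the grey bands in Figure 1(a).
base = cmip5[:, (years >= 1900) & (years <= 1940)].mean(axis=1, keepdims=True)
anomalies = cmip5 - base
bands = np.percentile(anomalies, [5, 25, 75, 95], axis=0)  # (4, n_years)
```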

Figure 2 uses data from the CESM Large Ensemble, where the Otto (2015) analysis is applied to each ensemble member. Regressions are performed using data from the years 1850-2014 for each member of the archive, to replicate the years used in the Otto (2015) analysis. Figure 2(a) shows the regression-reconstructed temperature for 2014 plotted as a function of model temperatures in the preceding decade (2005-2014). Figure 2(b) shows the correlation pattern of 2005-2014 temperatures with the 2014 regression reconstruction in the CESM Large Ensemble. Ellipses are constructed to approximately highlight regions of high positive and negative correlation in the pattern: a central Pacific ellipse is centered on 2°N, 212°E, while a South Pacific ellipse is centered on 34°S, 220°E. Figure 2(c) employs a Pacific Climate Index specific to this analysis, constructed from the difference of mean annual temperatures in the years 2005-2014 in the two ellipses in Figure 2(b) (central Pacific minus South Pacific region), showing reconstructed 2014 regression temperatures as a function of the index for each member of the CESM Large Ensemble. A linear regression was computed to predict 2014 Otto (2015) forced temperatures as a function of the Pacific Climate Index; dashed lines show the 5th and 95th percentile uncertainty in the regression coefficients. The same index is then calculated for the 2005-2014 period and the historical 1880-2000 period in the HadCRUT4 and GISTEMP datasets.
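The index construction can be sketched as follows, assuming `temp` is a 2005-2014 mean temperature field with coordinate grids `lat2d` and `lon2d` in degrees (longitudes 0-360°E); the ellipse centers follow the text above, while the semi-axes are illustrative assumptions:

```python
import numpy as np

def ellipse_mask(lat2d, lon2d, lat0, lon0, semi_lat, semi_lon):
    """Boolean mask selecting grid points inside an ellipse centered on
    (lat0, lon0), with the given semi-axes in degrees."""
    return (((lat2d - lat0) / semi_lat) ** 2
            + ((lon2d - lon0) / semi_lon) ** 2) <= 1.0

def pacific_climate_index(temp, lat2d, lon2d):
    # Centers from the Methods text; semi-axes (15 deg lat, 40 deg lon)
    # are assumed here for illustration only.
    central = ellipse_mask(lat2d, lon2d, 2.0, 212.0, 15.0, 40.0)
    south = ellipse_mask(lat2d, lon2d, -34.0, 220.0, 15.0, 40.0)
    return temp[central].mean() - temp[south].mean()
```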
