O Say Can You CO2…

Guest Commentary by Scott Denning

The Orbiting Carbon Observatory (OCO-2) was launched in 2014 to make fine-scale measurements of the total column concentration of CO2 in the atmosphere. As luck would have it, the first couple of years of data from OCO-2 documented the fastest rate of CO2 increase ever measured, more than 3 ppm per year (Jacobson et al, 2016; Wang et al, 2017), during a huge El Niño event that also saw global temperatures spike to record levels.

As part of a series of OCO-2 papers being published this week, a new Science paper by Junjie Liu and colleagues used NASA’s comprehensive Carbon Monitoring System to analyze millions of measurements from OCO-2 and other satellites to map the impact of the 2015-16 El Niño on sources and sinks of CO2, providing insight into the mechanisms controlling carbon-climate feedback.

Uncertainty in Carbon-Climate Feedbacks is Important

We’ve known for decades (Rayner et al, 1999) that El Niño influences the productivity of tropical forests and therefore CO2, but because these forests are so remote we have had very few direct observations of the effects. Field experiments on the ground and aircraft profiling of CO2 over tropical forests have documented the impact of heat and drought on forest productivity, but they are few and far between. Vigorous convective mixing in the deep tropics also dilutes changes in near-surface CO2 much more than at higher latitudes, so low-altitude sampling contains relatively less information about carbon sources and sinks.

A subset of Earth System Models (ESMs) project that El Niño-like conditions will progressively increase in coming decades as sea-surface temperatures in the tropical Pacific warm, implying increased drought and forest dieback in the Amazon. The drought-induced decline of carbon-dense tropical forests and their replacement by lower-carbon savannas would release enormous amounts of CO2 to the atmosphere, amplifying global warming far beyond the effects of the CO2 released by burning fossil fuels alone. In the CMIP5 suite of ESMs summarized by the IPCC Fifth Assessment Report, models forced with identical fossil fuel emissions differed by as much as 350 ppm of CO2 in 2100 due to differences in feedback between climate and the carbon cycle (Hoffmann et al, 2014). The radiative forcing of climate in these ESMs differed by up to 1.5 W/m2, with much of the disparity being driven by interactions among warming oceans, atmospheric circulation, and tropical forests. The climate outcomes due to differences in carbon-climate feedback are as different as those arising from different future emission scenarios (RCP6 compared to RCP4.5) or from differences in clouds and aerosols in atmospheric models.

The NASA Carbon Monitoring System

NASA’s Carbon Monitoring System (CMS) combines mechanistic “forward” models and empirical “inverse” models of atmospheric CO2 and other variables using a technique called “data assimilation” that is closely analogous to operational weather forecasting (Bowman et al, 2017). The forward models include emissions of CO2 and carbon monoxide (CO) from fossil fuel burning and wildfires; air-sea gas exchange; and photosynthesis, respiration, and decomposition on land. These simulated emissions are then used as input to a model (GEOS-Chem) that uses high-resolution weather data to simulate the atmospheric transport of CO2 and CO by winds, clouds, and turbulence. The resulting 3D simulations are then sampled at the locations of OCO-2 observations to determine the error in the forward model of atmospheric variations. An “adjoint” of the atmospheric transport model is then run backward in time to quantify the contributions of errors in specified surface sources and sinks of CO2 and CO to the mismatches between the forward models and the satellite observations.
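The forward/adjoint logic is easier to see in a stripped-down linear setting. Here is a minimal toy sketch (my own illustration, not CMS or GEOS-Chem code): if transport is linear, column CO2 at the satellite sampling points is y = Hf for surface fluxes f, and the adjoint (here simply the transpose of H) maps observation-space mismatches back into flux space, giving the gradient used to correct the sources and sinks.

```python
# Toy flux inversion: forward model y = H @ f, adjoint gradient H.T @ residual.
# H stands in for atmospheric transport plus column sampling; all numbers
# are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_flux, n_obs = 50, 200                 # hypothetical flux regions, soundings

H = rng.random((n_obs, n_flux))         # stand-in transport/sampling operator
f_prior = np.ones(n_flux)               # prior (forward-model) fluxes
f_true = f_prior + rng.normal(0.0, 0.1, n_flux)
y_obs = H @ f_true + rng.normal(0.0, 0.05, n_obs)  # synthetic "OCO-2" columns

f = f_prior.copy()
for _ in range(200):                    # gradient descent on the mismatch
    residual = H @ f - y_obs            # forward run vs. observations
    f -= 1e-4 * (H.T @ residual)        # "adjoint" step: mismatch -> flux error

print("prior mismatch:    ", np.linalg.norm(H @ f_prior - y_obs))
print("posterior mismatch:", np.linalg.norm(H @ f - y_obs))
```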

OCO-2 has given us two revolutionary new ways to understand the effects of drought and heat on tropical forests. The instrument directly measures CO2 over these regions thousands of times every day (Crisp et al, 2004). These column-averaged concentration retrievals respond to the net amount of CO2 passing in and out of the atmosphere under the instrument. OCO-2 also senses the rate of photosynthesis by detecting the fluorescence of chlorophyll in the trees themselves (Frankenberg et al, 2014). Liu et al used observations of CO from the MOPITT instrument aboard NASA’s Terra satellite to identify CO2 released from upwind wildfires. They used solar-induced chlorophyll fluorescence (SIF) to quantify changes in plant photosynthesis (also called gross primary production, GPP). Their results include time-resolved maps of the sources and sinks of atmospheric CO2 that are optimally consistent with both mechanistic forward models and the CO2, CO, and SIF observed by the satellite instruments.



Fig. Extreme heat and drought impacted the carbon cycle in tropical forests differently in different regions, leading to the fastest growth rate of CO2 in at least 10,000 years. (NASA/JPL-Caltech).

Carbon-Climate Feedback During the 2015-16 El Niño

As previously reported based on in-situ data, the rate of increase in atmospheric CO2 during the strong El Niño in 2015-16 was about 3 ppm/yr compared with ~2 ppm/yr in recent decades. This is the fastest increase in CO2 ever observed, and plausibly the fastest since the end of deglaciation 10,000 years ago. Yet this rapid increase in CO2 occurred during a period when fossil fuel emissions were nearly flat (though still massively more than the biosphere and ocean can quickly absorb).

Liu et al found that 80% of the extra CO2 in the atmosphere during this period originated in tropical forests. Relative to a more normal year (2011), they found that tropical forests lost about 2.5 billion tons of carbon (GtC) in 2015-16. (1 Gt = 10^12 kg is the mass of 1 cubic km of water, and it takes about 2.12 GtC to raise CO2 in the air by 1 ppm.)
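A quick back-of-envelope check (my own arithmetic, using the 2.12 GtC-per-ppm conversion above) shows that these numbers hang together:

```python
# Sanity-check the unit conversion: ~2.12 GtC of emissions per 1 ppm of CO2.
GTC_PER_PPM = 2.12

extra_release_gtc = 2.5     # tropical-forest anomaly reported by Liu et al
print(f"{extra_release_gtc / GTC_PER_PPM:.2f} ppm")  # ~1.18 ppm of extra CO2

# Consistent with the observed growth-rate anomaly of roughly +1 ppm/yr
# (about 3 ppm/yr during the El Nino vs. ~2 ppm/yr in recent decades).
```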

During the huge El Niño, parts of the Amazon experienced the driest conditions in at least 30 years as well as unusually warm temperatures. Changes in column-averaged CO2 and in SIF showed that these hot, dry conditions suppressed gross primary production (GPP, photosynthesis), leading to a reduction of about 0.9 GtC/yr. Equatorial Africa also experienced extreme heat, but precipitation was near normal. Impacts on GPP were not significant, but respiration and decomposition were enhanced by about 0.8 GtC/yr. Hot, dry conditions in Indonesia during the period led to an increase in fires, including a large peat fire that burned huge amounts of stored carbon. Emissions due to these fires showed up in the observations as increases in both CO2 and CO, and were estimated at about 0.8 GtC/yr.

These results help us understand how drought and heat affect these forests, some of the most productive ecosystems on Earth. The Amazon has experienced three extreme droughts in the past 11 years, in 2005, 2010, and now 2015-16. These extreme events have occurred more frequently than they did in the previous century. Understanding how the tropical forest responds to big droughts and heat waves helps us to evaluate the strength of carbon-climate feedback in ESMs, allowing us to better understand and predict climate change over coming decades. The new results show that each of the major tropical forest regions experienced a different combination of heat and drought during the recent El Niño, so their carbon cycles responded in different ways, but the net result was increased emissions in all cases. Based on these results, further warming and drying of tropical forests is expected to result in less uptake and more release of carbon on land, unfortunately amplifying the warming driven by fossil fuel emissions.

References


  1. J. Wang, N. Zeng, M. Wang, F. Jiang, H. Wang, and Z. Jiang, “Contrasting terrestrial carbon cycle responses to the two strongest El Niño events: 1997–98 and 2015–16 El Niños”, Earth System Dynamics Discussions, pp. 1-32, 2017. http://dx.doi.org/10.5194/esd-2017-46


  2. J. Liu, K.W. Bowman, D.S. Schimel, N.C. Parazoo, Z. Jiang, M. Lee, A.A. Bloom, D. Wunch, C. Frankenberg, Y. Sun, C.W. O’Dell, K.R. Gurney, D. Menemenlis, M. Gierach, D. Crisp, and A. Eldering, “Contrasting carbon cycle responses of the tropical continents to the 2015–2016 El Niño”, Science, vol. 358, pp. eaam5690, 2017. http://dx.doi.org/10.1126/science.aam5690


  3. P.J. Rayner, R.M. Law, and R. Dargaville, “The relationship between tropical CO2 fluxes and the El Niño-Southern Oscillation”, Geophysical Research Letters, vol. 26, pp. 493-496, 1999. http://dx.doi.org/10.1029/1999GL900008


  4. IPCC, “Climate Change 2013 – The Physical Science Basis”, Cambridge University Press, 2014. http://dx.doi.org/10.1017/CBO9781107415324


  5. K.W. Bowman, J. Liu, A.A. Bloom, N.C. Parazoo, M. Lee, Z. Jiang, D. Menemenlis, M.M. Gierach, G.J. Collatz, K.R. Gurney, and D. Wunch, “Global and Brazilian carbon response to El Niño Modoki 2011-2010”, Earth and Space Science, 2017. http://dx.doi.org/10.1002/2016EA000204


  6. D. Crisp, R. Atlas, F. Breon, L. Brown, J. Burrows, P. Ciais, B. Connor, S. Doney, I. Fung, D. Jacob, C. Miller, D. O’Brien, S. Pawson, J. Randerson, P. Rayner, R. Salawitch, S. Sander, B. Sen, G. Stephens, P. Tans, G. Toon, P. Wennberg, S. Wofsy, Y. Yung, Z. Kuang, B. Chudasama, G. Sprague, B. Weiss, R. Pollock, D. Kenyon, and S. Schroll, “The Orbiting Carbon Observatory (OCO) mission”, Advances in Space Research, vol. 34, pp. 700-709, 2004. http://dx.doi.org/10.1016/j.asr.2003.08.062


  7. C. Frankenberg, C. O’Dell, J. Berry, L. Guanter, J. Joiner, P. Köhler, R. Pollock, and T.E. Taylor, “Prospects for chlorophyll fluorescence remote sensing from the Orbiting Carbon Observatory-2”, Remote Sensing of Environment, vol. 147, pp. 1-12, 2014. http://dx.doi.org/10.1016/j.rse.2014.02.007

Forest Digest: Week of October 2, 2017



Credit: Lip Kee, used under CC BY-SA 2.0

Find out the latest in forest news!

In a Body Farm for Trees, Scientists Root Out the Killers – Scientific American

Researchers in Sequoia National Park have been studying trees for decades, trying to establish a pattern in what causes certain redwoods to die and others to survive. Since 1982, scientists have studied pre-marked plots containing 30,000 trees, marking their conditions every year, measuring their diameter every five years and conducting autopsies when they die. “It’s a detective game,” says U.S. Geological Survey ecologist Nate Stephenson. Researchers believe the data being collected will prove invaluable as changing global conditions create uncertain futures for many forests.

Study points to win-win for spotted owls and forest management – University of Washington

A new study from the University of Washington, the University of California, Davis and the U.S. Forest Service Pacific Southwest Research Station suggests that protecting the habitat of the spotted owl may be much easier than previously thought. For the past 25 years, the strategy surrounding habitat management has focused on preserving dense upper canopy cover, which put forests at a much higher risk of wildfire decimation. However, using new technology, researchers were able to determine that it is actually only the canopy cover of tall trees that is required, rather than total canopy cover. This will allow for better forest management that can benefit both spotted owls and the trees they inhabit.

Tropical Forests Now Emit More Carbon Than They Soak Up — PBS

New research on the density of tropical forests has concluded that they are releasing more carbon dioxide (CO2) than they are absorbing, rather than the opposite, as was previously thought. The new study found that previous models of tropical forests overestimated the density of trees, mostly because partial logging by humans and forest degradation had removed many trees while leaving the canopy intact, making satellite measurements inaccurate. Trees absorb CO2, but fallen and dead trees also release it into the atmosphere. By measuring smaller-scale tree loss, the study found that tropical forests are now a net source of carbon due to human activity.

Hurricane Irma damaged largest gumbo limbo tree in U.S. – Bradenton Herald

After Hurricane Irma swept through South Florida, arborists discovered two new cracks in the 45-foot-tall champion gumbo limbo (Bursera simaruba), located in De Soto National Memorial, Fla. The tree has been certified as an American Forests Champion Tree since 2007, meaning it is the largest-known tree of its kind in the country. The 80-year-old tree did not lose any limbs, but any amount of damage is worrisome, considering that wire cables have been needed to support the tree for the past three years as a result of previous damage. The tree’s final prognosis is unknown, as arborists are still determining whether the damage will prove fatal.

Companies’ ‘zero deforestation’ pledges: everything you need to know – The Guardian

Individual companies pledging to engage in “zero deforestation” can be a good first step, but it doesn’t necessarily mean what you think. For example, a company can clear forests and still claim “zero net deforestation” so long as it plants an equivalent area of trees. Other limitations include the difficulty of tracking the forestry management practices of the smaller, third-party distributors that work with a corporation. There is even disagreement on the technical classification of a “forest,” as the legal definition differs from country to country.

Video: How to Photograph Trees, Mushrooms and Rivers | Woodland Photography Tips — NatureTTL

Nature photographer Ross Hoddinott shares his tips for taking pictures of rivers, trees and mushrooms.


Hurricanes and Trees: It’s Complicated



By Ian Leahy, Director of Urban Forest Programs

When I took a city government job caring for trees after having run my own business, I thought life in government would be a walk in the park, so to speak. No more payroll to meet, or sleepless nights about finding new clients.

Then the first major storm hit while I was on 24-hour call.

Working on behalf of the citizens of Washington, D.C., I found myself racing through nearly empty city streets at 3:00 a.m., scanning for any trees that might be coming down on us from above in the wet soil and wind. It was a long and stressful night, feeling the weight of the city’s safety on my shoulders.

This was not a one-time occurrence. We were in all-hands-on-deck mode after hurricanes, derechos and ice storms, working day and night to get trees out of streets, off of smashed cars, and sometimes even out of bedrooms. At every step, we scanned intently for downed live power lines.

As the city became functional again, we would inevitably be inundated with calls from residents to remove perfectly healthy street trees that had survived the storm. They saw the damage trees could do and, perhaps understandably, wanted no part of it.

So where do trees fit as a public safety measure in the face of extreme weather?

Trees and Community Resilience

The relationship between trees and storms is a complicated one, as evidenced by images from Puerto Rico and other Caribbean islands of entire forests stripped of their leaves overnight.

At first glance, a community could impulsively conclude that trees aren’t worth the hassle, at least outside of parks. But such a community would be stripping itself of a vast array of benefits, from improved air quality and a reduced urban heat island effect to improved academic performance in children, shorter hospital recovery times and, yes, even — perhaps especially — stormwater management.

When the trees that filter stormwater are removed, massive flooding that otherwise would not have happened on such a scale is unleashed time and again, causing crippling damage to people’s lives and the economy.

Lessons Learned about Wind and Trees

So how do we balance these very real risks with such prolific benefits? The key is actually to double down on and enhance a city’s tree canopy. Scientists at the University of Florida tracked the impacts of 10 hurricanes on the urban forests where they hit, from Andrew, which devastated South Florida in 1992, to the infamous Katrina along the Gulf Coast in 2005.

While they found that increased wind speed did increase the likelihood that trees would fail, other factors significantly impacted the degree of damage to a city’s tree canopy during a hurricane:

  • Trees in groups survive wind better than individuals
  • Some species resist wind better than others
  • Trees that lose their leaves during a hurricane are not necessarily dead
  • Better and deeper soils mean fewer tree failures
  • Native trees survive better
  • Older and unhealthy trees are more likely to be damaged
  • Well-pruned trees survive hurricanes better

Tree damage in South Florida from Hurricane Irma in 2017. Credit: Jim Mullhaupt

All of this demonstrates the life-, infrastructure- and economy-saving importance of cities investing adequately in an urban forestry program. This includes hiring technical expertise and giving forestry a seat at the planning table when decisions are made about every aspect of the city’s built environment.

While risk of tree failure can never be completely eliminated, going all-in with a truly comprehensive urban forestry program would reduce risk significantly by:

  • Developing and implementing a comprehensive urban forestry plan
  • Conducting structural pruning for both young and mature trees
  • Planting more wind- and salt-resistant species
  • Selecting the right species and designing the right place, with adequate soil volume
  • Planting high-quality trees with central leaders and good structure
  • Assembling an urban forestry strike team to deploy in the wake of disasters

Vibrant Cities Lab

To help communities of any size build such a comprehensive urban forestry program, American Forests, the National Association of Regional Councils (NARC) and the U.S. Forest Service recently launched the Vibrant Cities Lab. This free online hub is a unique portal of urban forestry research and expertise to help city managers and other professionals integrate trees into their decision-making processes.

The Vibrant Cities Lab includes a step-by-step guide so that any community can assess where they currently are and access the technical resources necessary to enhance their urban forestry capacity. The site synthesizes the latest research showing impacts trees have, provides best practices from communities of all sizes, and curates nearly 500 resources, such as technical guides, ordinances and sample urban forestry plans.

Disaster ReLeaf Funds in Miami and Houston

American Forests has also been working for several years in Miami and Houston, two cities recently devastated by hurricanes, to help develop just this kind of holistic urban forestry capacity that can build community resilience. With the support of Bank of America, Alliance Data, Coca-Cola, Bacardi and the U.S. Forest Service, among others, our award-winning Community ReLeaf program works in metro areas nationwide to increase local capacity through a comprehensive theory-of-change model.

While these partnerships in Houston and Miami have aimed to build a strong and expanded tree canopy for the future, focused on underserved areas, the damage inflicted on each city’s trees by Hurricanes Harvey and Irma has created new and urgent work that complicates the way forward. We need to update our data and planning on local tree canopy, identify repair and restoration needs, and incorporate lessons learned from these storms to identify how trees can be used to prepare for future events.

To help fund this new work with our partners, American Forests has launched a Disaster ReLeaf Fund for each city. We are also expanding our planting and maintenance efforts so both Miami and Houston can recover their beneficial tree canopy as quickly as possible.

Whether in a large metro area or a small town, proactive management can help your community’s trees not only better withstand hurricanes, but also become an asset for filtering water and reducing impervious surfaces at a time when those functions are most desperately needed.


1.5ºC: Geophysically impossible or not?

Guest commentary by Ben Sanderson

Millar et al’s recent paper in Nature Geoscience has provoked a lot of lively discussion, with the authors of the original paper releasing a statement to clarify that their paper did not suggest that “action to reduce greenhouse gas emissions is no longer urgent“, rather that 1.5ºC (above the pre-industrial) is not “geophysically impossible”.

The range of post-2014 allowable emissions for a 66% chance of not passing 1.5ºC in Millar et al of 200-240 GtC implies that the planet would exceed the threshold after 2030 at current emissions levels, compared with the AR5 analysis which would imply most likely exceedance before 2020. Assuming the Millar numbers are correct changes 1.5ºC from fantasy to merely very difficult.
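The timing quoted there follows from simple arithmetic. A sketch (my numbers: I assume roughly 10 GtC/yr of ongoing fossil-fuel emissions held flat, which is not a figure taken from the post):

```python
# When would the Millar et al post-2014 budget be exhausted at flat emissions?
budget_gtc = (200, 240)      # allowable post-2014 emissions, 66% chance <1.5C
emissions_gtc_per_yr = 10.0  # assumed current fossil-fuel emissions, held flat

for b in budget_gtc:
    print(f"{b} GtC used up around {2014 + b / emissions_gtc_per_yr:.0f}")
# -> around 2034 and 2038, i.e. "after 2030 at current emissions levels"
```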

But is this statement overconfident? Last week’s post on Realclimate raised a couple of issues which imply that both the choice of observational dataset and the chosen pre-industrial baseline period can influence the conclusion of how much warming the Earth has experienced to date. Here, I consider three aspects of the analysis – and assess how they influence the conclusions of the study.


Figure 1: (a) shows temperature change in the CMIP5 simulations relative to observed temperature products. Grey regions show model range under historical and RCP8.5 forcing relative to a 1900-1940 baseline. Right-hand axis shows temperatures relative to 1861-1880 (offset using HadCRUT4 temperature difference). (b) shows temperature change as a function of cumulative emissions. Black solid line shows the CMIP5 historical mean, and black dashed is the RCP8.5 projection. Colored lines represent regression reconstructions as in Otto (2015) using observational temperatures from HadCRUT4 and GISTEMP, with cumulative emissions from the Global Carbon Project. Colored points show individual years from observations.

The choice of temperature data

We can illustrate how these effects might influence the Millar analysis by repeating the calculation with alternative temperature data. Their approach requires an estimate of the forced global mean temperature in a given year (excluding any natural variability), which is derived from Otto et al (2015), who employ a regression approach to reconstruct a prediction of global mean temperatures as a function of anthropogenic and natural forcing agents. In Fig. 1(a), we apply the Otto approach to data from GISTEMP as well as the HadCRUT4 product used in the original paper – again using data up to 2014. Although the forced Otto-style reconstruction based on HadCRUT4 suggests 2014 temperatures were less than the 25th percentile of the CMIP5 distribution, following the same procedure with GISTEMP yields 2014 temperatures of 1.08K – corresponding to the 58th percentile of the CMIP5 distribution.
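For readers who want the mechanics, here is a schematic of that regression step (a minimal sketch under my own assumptions about variable names; the actual Otto et al (2015) method has additional details):

```python
# Regress observed global-mean temperature on anthropogenic + natural forcing;
# the fitted reconstruction is the "forced" temperature with internal
# variability regressed out. T, F_anth, F_nat are assumed to be annual series
# on the same years (placeholder names, not from the post).
import numpy as np

def forced_temperature(T, F_anth, F_nat):
    X = np.column_stack([F_anth, F_nat, np.ones_like(F_anth)])
    coeffs, *_ = np.linalg.lstsq(X, T, rcond=None)  # least-squares fit
    return X @ coeffs                               # forced reconstruction

# e.g.: forced = forced_temperature(T, F_anth, F_nat); forced[years == 2014]
```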

This calls into question the justification for changing the baseline for the cumulative emissions analysis: it quickly becomes apparent that the use of a different dataset can undermine the conclusion that present day temperatures lie outside of the model distribution. Fig. 1(b) shows that the anomaly between observations and the CMIP5 mean temperature response to cumulative emissions is halved by repeating the Millar analysis with the GISTEMP product instead of HadCRUT4.

The role of internal variability

There is also an important question of the degree to which internal variability can influence the attributable temperature change, given that the Millar result is contingent on knowing what the forced temperature response of the system is. We apply the approach to the CESM Large Ensemble, a 40-member initial condition ensemble of historical and future climate simulations in which ensemble members differ only in their realization of natural variability. Although all members have identical forcing and model configuration, Fig 2(a) shows that the forced warming in 2014 estimated by the Otto approach varies from 0.68-0.94K across the ensemble (a range almost as large as that of the actual 2005-2014 decadal-average temperatures in the ensemble). Moreover, there is a strong correlation between the inferred forced warming in 2014 and global mean temperatures in the preceding decade, suggesting that the forced estimate in the Otto approach can be strongly influenced by unforced temperature variations in the preceding decade.


Figure 2: (a) Temperatures of reconstructed global mean temperature in 2014 for the CESM large ensemble following the Otto (2015) regression methodology, plotted as a function of average global mean temperature in the years 2005-2014. (b) correlation between mean grid-point temperatures in 2005-2014 and reconstructed global mean temperature in 2014, ellipses show regions proposed for Pacific Climate Index. (c) CESM large ensemble reconstructed global mean temperature in 2014 as a function of Pacific Climate Index. Vertical lines show index values for observations in period 2005-2014 (solid) and historical (dashed).

In order to assess how this potential bias might have been manifested in the historical record, we construct an index of this pattern using regions of strong positive and negative correlation with the inferred forced warming. Fig 2(b) shows the correlation between the inferred forced 2014 temperature and 2005-2014 temperatures, a pattern reminiscent of the Interdecadal Pacific Oscillation, a leading mode of unforced variability. The warming estimate is positively correlated with central Pacific temperatures, and negatively correlated with South Pacific temperatures. An index of the difference between these regions is shown in Fig. 2(c) for observations and models. Both HadCRUT4 and GISTEMP suggest strongly negative index values for the period 2005-2014, suggesting a potential cold bias in the warming estimate due to natural variability of about 0.1˚C (with 5-95% values of 0.05-0.15˚C).

In short, irrespective of which observational dataset is used, it’s likely that an estimate of the forced response made in 2014 would be biased cold, which on its own would translate to an overestimate of the available budget of about 40 GtC.
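One way to see where a number like 40 GtC comes from (my own reading, not a calculation given in the post) is to divide the temperature bias by a transient climate response to cumulative emissions (TCRE); the ~2.5°C per 1000 GtC used below is an assumed high-end value of the kind implicit in a 66%-chance budget:

```python
# Convert a suspected ~0.1 degC cold bias into a carbon-budget offset via an
# assumed TCRE (2.5 degC per 1000 GtC here is my assumption, not the post's).
bias_degC = 0.1
tcre_degC_per_GtC = 2.5 / 1000.0

print(f"{bias_degC / tcre_degC_per_GtC:.0f} GtC")  # -> 40 GtC
```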

The low CMIP5 compatible emissions

Millar’s paper also points out that the discrepancy between the CMIP5 ensemble and the observations arises not only due to temperature, but also because cumulative emissions were greater in the real world than in the mean CMIP5 model in 2014. But this only translates to a justifiable increase in the emissions budget if the real world is demonstrably off the CMIP5 cumulative emissions/temperature line. By some estimates, cumulative emissions in 2014 might be higher than in the models simply because emissions were consistently above the RCP range between 2005 and 2014. In other words, by 2014 we’d used more of the carbon budget than any of the RCPs had anticipated; if we are not confident that the real world is cooler than the models at this level of cumulative emissions, then the available emissions for 1.5 degrees should decrease proportionately.

A key point to note is that, by resetting the cumulative emissions baseline, the Millar et al available emissions budget is insensitive to the actual cumulative emissions to date. The forced temperature estimate is used as a proxy for what cumulative emissions should be given the current level of warming. This is only justified if we are confident that we know the current forced temperature more accurately than we know the current cumulative emissions. However, the combined evidence of the influence of natural variability on the forced temperature estimate, the disagreement between different observational datasets on the warming level, and the uncertainty introduced by an uncertain pre-industrial temperature baseline means that we can’t be as confident as the Millar paper suggests about what the current level of warming is, and the balance of evidence suggests that the Otto warming estimate may be biased cold. If this is right, the Millar available cumulative emissions budget would be biased high.

So, is it appropriate to say that 1.5ºC is geophysically possible? Perhaps plausible would be a better word. Depending on which temperature dataset we choose, the threshold exceedance budget (TEB) for 1.5 degrees may already be exceeded. Although it would certainly be useful to know what the underlying climate attractor of the Earth system is, any estimate we produce is subject to error.

We ultimately face a question of what we trust more: our estimate of cumulative emissions to date combined with our full knowledge of how much warming that might imply, or an estimate of how warm the system was in 2014, which is subject to error due to observational uncertainty and natural variability. Changing the baseline for warming and cumulative emissions is effectively a bias correction: a statement that the models have simulated the past sufficiently poorly that they warrant correction, which allows emissions to date to be swept under the carpet. Alternatively, we trust the cumulative emissions number and treat the models as full proxies for reality, as was done in AR5, which would tell us that emissions to date have already brought us to the brink of exceeding the 1.5 degree threshold.


Methods


For Figure 1, global mean temperatures are plotted from the HadCRUT4 and GISTEMP products relative to a 1900-1940 baseline, together with global mean temperatures from 81 available simulations in the CMIP5 archive, also relative to the 1900-1940 baseline, where all available ensemble members are taken for each model. In each year from 1900-2016, the 5th, 25th, 75th and 95th percentiles of the CMIP5 distribution are plotted. In Figure 1(b), the CMIP5 cumulative emissions and temperature data used are identical to those in AR5 for the historical and RCP8.5 trajectories. Observational data are from the GISTEMP and HadCRUT4 global mean products, and annual cumulative emission data are replicated from the Global Carbon Project. Regression analyses are performed as in Otto (2015), using natural and anthropogenic forcing timeseries (historical and the RCP8.5 scenario) with a regression constructed using data from 1850-2016 (for HadCRUT4), and from 1880-2016 (for GISTEMP).
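The baseline-and-percentile bookkeeping behind Figure 1(a) is straightforward; a minimal sketch (array names `gmst` and `years` are my placeholders, not from the post):

```python
# Anomalies relative to a 1900-1940 baseline, plus the percentile envelope
# plotted for the CMIP5 distribution. gmst: (n_simulations, n_years) array.
import numpy as np

def anomalies_and_envelope(gmst, years, base=(1900, 1940)):
    in_base = (years >= base[0]) & (years <= base[1])
    anom = gmst - gmst[:, in_base].mean(axis=1, keepdims=True)
    envelope = np.percentile(anom, [5, 25, 75, 95], axis=0)  # per-year spread
    return anom, envelope
```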

Figure 2 uses data from the CESM large ensemble, where the Otto (2015) analysis is applied to each ensemble member. Regressions are performed using data from years 1850-2014 for each member of the archive, to replicate the years used in the Otto (2015) analysis. Figure 2(a) shows the regression reconstructed temperature for 2014 plotted as a function of model temperatures in the preceding decade (2005-2014). Figure 2(b) shows the correlation pattern of 2005-2014 temperatures with the 2014 regression reconstruction in the CESM Large Ensemble. Ellipses are constructed to approximately highlight regions of high positive and negative correlation in the pattern. A central Pacific ellipse is centered on 2N,212E, while a South Pacific ellipse is centered on 34S,220E. Figure 2(c) employs a Pacific Climate Index specific to this analysis, constructed using the difference of mean annual temperatures in the years 2005-2014 in the two ellipses in Figure 2(b) (central Pacific minus south Pacific region), showing reconstructed 2014 regression temperatures as a function of the index for each member of the CESM Large Ensemble. A linear regression was computed to predict 2014 Otto (2015) forced temperatures as a function of the Pacific Climate Index. Dashed lines show the 5th and 95th percentile uncertainty in the regression coefficients. The same index is then calculated for the 2005-2014 period and the historical 1880-2000 period in the HadCRUT4 and GISTEMP datasets.
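A sketch of the Pacific Climate Index construction described above (the ellipse centers are the ones given in the text; the half-widths and variable names are my assumptions):

```python
# Index = mean 2005-2014 temperature in a central-Pacific ellipse (2N, 212E)
# minus that in a South-Pacific ellipse (34S, 220E). decadal_mean is assumed
# to be a (lat, lon) field; dlat/dlon half-widths are illustrative.
import numpy as np

def ellipse_mean(field, lats, lons, lat0, lon0, dlat=8.0, dlon=25.0):
    LAT, LON = np.meshgrid(lats, lons, indexing="ij")
    inside = ((LAT - lat0) / dlat) ** 2 + ((LON - lon0) / dlon) ** 2 <= 1.0
    return field[inside].mean()

def pacific_climate_index(decadal_mean, lats, lons):
    central = ellipse_mean(decadal_mean, lats, lons, lat0=2.0, lon0=212.0)
    south = ellipse_mean(decadal_mean, lats, lons, lat0=-34.0, lon0=220.0)
    return central - south
```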

…the Harde they fall.

Back in February we highlighted an obviously wrong paper by Harde which purported to scrutinize the carbon cycle. Well, thanks to a crowd-sourced effort which we helped instigate, a comprehensive scrutiny of those claims has just been published. Led by Peter Köhler, this included scientists from multiple disciplines working together to clearly report on the mistaken assumptions in the Harde paper.

The comment is excellent, and so should be well regarded, but the fact that it is a comment means that the effort will likely be sorely underappreciated. Part of the problem is the long time the process takes (almost 8 months), which means that the nonsense is mostly forgotten about by the time the comments are published. We’ve discussed trying to speed up and improve the process by having a specialized journal for comments and replications, but really the problem here is the low quality of peer review and editorial supervision that allows these pre-rebunked papers to appear in the first place.

GPC is not the only (nor the worst) culprit for this kind of nonsense – indeed we just noticed a bunch of astrology papers in the International Journal of Heat and Technology (by Nicola Scafetta [natch]). It does seem to demonstrate that you truly can publish anything somewhere.

References


  1. P. Köhler, J. Hauck, C. Völker, D.A. Wolf-Gladrow, M. Butzin, J.B. Halpern, K. Rice, and R.E. Zeebe, “Comment on “Scrutinizing the carbon cycle and CO2 residence time in the atmosphere” by H. Harde”, Global and Planetary Change, 2017. http://dx.doi.org/10.1016/j.gloplacha.2017.09.015