Models & Statistics: A Toxic Combination

In 2013 a group of climate researchers published a study using statistics and the output of the latest crop of climate models. Their purpose was to show when surface temperatures could be expected to depart permanently from their historical ranges. Such an event is called an expulsion. Camilo Mora et al. presented precise projections for when these unprecedented regional climates would emerge. Now a second group of researchers argues that the Mora methodology produces artificially early dates at which specific regions will permanently experience unprecedented climates, and artificially low uncertainty in those dates everywhere. This is an example of what happens when untrustworthy model outputs are combined with specious statistical methods. The resulting predictions are scary enough to be published in a major journal, yet so false that even other climate scientists are moved to protest.

In a paper published in the journal Nature in October of 2013, Camilo Mora et al. estimated when permanent departures—or expulsions—from the historical range of variability will start for various geographic regions. That paper, titled “The projected timing of climate departure from recent variability,” was an attempt to nail down just when global warming would finally cause “unprecedented” and “irreversible” temperatures around the world. In other words, a prediction of when the dire warnings of the climate catastrophist crowd would actually start to come true. Here is how the authors framed the study in the paper's abstract:

Ecological and societal disruptions by modern climate change are critically determined by the time frame over which climates shift beyond historical analogues. Here we present a new index of the year when the projected mean climate of a given location moves to a state continuously outside the bounds of historical variability under alternative greenhouse gas emissions scenarios. Using 1860 to 2005 as the historical period, this index has a global mean of 2069 (±18 years s.d.) for near-surface air temperature under an emissions stabilization scenario and 2047 (±14 years s.d.) under a ‘business-as-usual’ scenario. Unprecedented climates will occur earliest in the tropics and among low-income countries, highlighting the vulnerability of global biodiversity and the limited governmental capacity to respond to the impacts of climate change. Our findings shed light on the urgency of mitigating greenhouse gas emissions if climates potentially harmful to biodiversity and society are to be prevented.

The authors asserted that, although several studies have documented the areas on Earth where greenhouse gas emissions will cause things to go to hell in a hand-basket, climate science still lacked a precise indication of when the long-predicted ravages of climate change would assert themselves. According to the researchers, changing climate would affect the following: “human welfare, through changes in the supply of food and water; human health, through wider spread of infectious vector-borne diseases, through heat stress and through mental illness; the economy, through changes in goods and services; and national security as a result of population shifts, heightened competition for natural resources, violent conflict and geopolitical instability.”

The chosen time for declaring the alarmists right is when the climate of a given location shifts wholly outside the range of historical precedents. To generate the actual doomsday predictions, they developed an index that identifies the year when the values of a given climatic variable exceed their historical bounds. Mora et al. analyzed five climate variables for the atmosphere and two for the oceans, but settled on mean annual near-surface air temperature as the primary indicator of climate. A News & Views article, “Climate science: Expulsion from history” by Scott B. Power, appearing in Nature online, describes the expulsion concept this way:

In this idealized example, temperature at a particular site (dark blue line, measured in °C) is plotted for the years 1860 to 2100. This graph represents the sum of natural temperature variations (light blue line) and of underlying warming associated with human activities (red line). Grey dashed lines indicate the range of variability over the historical period (1860–2005). The temperature remains permanently outside the historical range after time tE (the expulsion time; black arrow). The red arrow indicates risks associated with climate change; the depth of shading represents the number and severity of risks, and correlates with the underlying warming. Mora et al. reported that expulsions for various climate variables can occur for many locations under 'business-as-usual' scenarios of greenhouse-gas emissions. Hawkins et al. now provide better estimates of tE.
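To make the index concrete, here is a minimal sketch in Python of how such an expulsion year can be computed. This is my own simplification, not Mora et al.'s actual procedure: it uses a single annual temperature series and simple min/max historical bounds, and the function and variable names are invented for illustration.

    import numpy as np

    def expulsion_year(temps, years, hist_end=2005):
        # Historical bounds: min and max over the reference period.
        temps = np.asarray(temps, dtype=float)
        years = np.asarray(years)
        hist = temps[years <= hist_end]
        lo, hi = hist.min(), hist.max()
        outside = (temps < lo) | (temps > hi)
        # A "permanent" departure requires the series to still be outside
        # the bounds when the simulation ends.
        if not outside[-1]:
            return None  # no emergence within the simulated period
        # Walk back to the start of the final unbroken out-of-range run.
        idx = len(outside) - 1
        while idx > 0 and outside[idx - 1]:
            idx -= 1
        return int(years[idx])

    # Toy usage: noise around a flat baseline, then a steady warming trend.
    rng = np.random.default_rng(0)
    years = np.arange(1860, 2101)
    temps = 0.02 * np.clip(years - 1980, 0, None) + rng.normal(0, 0.2, years.size)
    print(expulsion_year(temps, years))

Note that a series which is still out-of-range in the final simulated year may only be pseudo-emerged: had the run continued past 2100, it might have dropped back inside the bounds. That caveat is at the heart of the dispute described below.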

As alluded to in the News & Views article, Ed Hawkins and colleagues now dispute these estimates in a Brief Communications Arising published on Nature's website, and question the magnitude of the associated uncertainty provided by Mora and co-workers. Below is the summary from the BCA:

The question of when the signal of climate change will emerge from the background noise of climate variability—the ‘time of emergence’—is potentially important for adaptation planning. Mora et al. presented precise projections of the time of emergence of unprecedented regional climates. However, their methodology produces artificially early dates at which specific regions will permanently experience unprecedented climates and artificially low uncertainty in those dates everywhere. This overconfidence could impair the effectiveness of climate risk management decisions.

The seminal point is worth underlining: the estimates are wrong. This is pretty strong stuff for a scientific discussion being waged in the pages of a respected and somewhat staid scientific journal like Nature. According to Hawkins et al., several methodological oversights contribute to the erroneous uncertainty quantification. The major points are listed below, taken verbatim from the communication; a toy numerical illustration of the second point follows the list:

  • First, Mora et al. ignore the possibility that emergence dates before the end of the simulations are not permanent deviations from the historical range (ref. 6; termed ‘pseudo-emergence’). In many regions where emergence has not occurred by the year 2100, Mora et al. artificially set the emergence date to equal 2100. This oversight produces several effects, including: (1) early and overconfident estimates of regional temperature emergence; and (2) implausible emergence dates for precipitation of exactly 2100 with zero uncertainty almost everywhere.

  • Second, Mora et al. estimate precision of regional emergence timing using the standard error of the ensemble mean (σ/√N), where N (= 39) is the number of simulations and σ is their standard deviation. While the estimate of the ensemble-mean becomes more precise with larger ensemble size, natural fluctuations of the climate (such as El Niño) dictate that the future evolution of climate will not behave like the ensemble mean, but as a single realization from a range of outcomes. The use of σ/√N greatly underestimates this irreducible uncertainty, as well as the climate-response uncertainty given by the inter-model spread, and is therefore inappropriate for use in emergence estimates.

  • In this ensemble, 61% of the planet exhibits the possibility of post-2100 emergence, thwarting the calculation of mean emergence and biological impacts in these regions (including Amazonia and the Southern Ocean which are in the biodiversity hotspots of ref. 1). In addition, the standard errors (as used by Mora et al.) are less than 6 years everywhere, whereas the irreducible 16–84% uncertainty range is more than 6 years everywhere, and 75% of the planet has a 16–84% range of more than 20 years.
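Hawkins' second point is easy to demonstrate numerically. The toy sketch below is my own, not their code: it fabricates N = 39 synthetic "ensemble member" emergence dates and compares the standard error of the ensemble mean with the 16–84% spread of the individual members. The first shrinks as more simulations are added; the second, which is what the single climate trajectory we actually live through samples from, does not.

    import numpy as np

    rng = np.random.default_rng(42)

    N = 39        # ensemble size used by Mora et al.
    sigma = 12.0  # assumed spread (in years) of emergence dates; toy value
    dates = rng.normal(2050, sigma, N)  # synthetic emergence dates

    std_err = dates.std(ddof=1) / np.sqrt(N)   # sigma / sqrt(N)
    p16, p84 = np.percentile(dates, [16, 84])  # member-to-member spread

    print(f"standard error of ensemble mean: +/- {std_err:.1f} yr")
    print(f"16-84% range of single members:  {p84 - p16:.1f} yr")

The standard error (roughly 2 years here) measures how well the ensemble-mean date is pinned down; the 16–84% range (roughly 24 years) measures where any single realization may fall. Quoting the former as the uncertainty in when a real region will emerge is exactly the overconfidence Hawkins et al. object to.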

The authors also note that there is a substantial body of previous work on this subject, none of which is cited in the Mora et al. paper. To further illustrate their point, Hawkins et al. provide the following graph:

Lower panel: the cumulative fraction of the planet that has emerged by any particular year for 13 different GCMs when restricting the simulations to end in 2100 (solid lines) and in 2300 (dashed lines). The grey shaded region highlights that the end-of-simulation pseudo-emergence effect probably also affects the post-2100 emergence dates after about 2250. For CSIRO Mk3.6 (black curve), spatial variations in the grid-point emergence values are given by the global mean ± 1σ (black circle and bar) and a more appropriate 16–84% range of emergence times in which 68% of the grid points lie (black bar and star). The grey shading around the black curve represents the range in coverage for the period 2000–2100 amongst the 30 CSIRO Mk3.6 simulations analyzed in other illustrations. Upper panel: the year of global emergence using means and data up to 2100 (colored circles), medians and data up to 2100 (colored squares), and medians and data up to 2300 (colored stars), showing a substantial delay for several models which do not show median emergence until well after 2100.
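The mean-versus-median distinction in the upper panel deserves a moment. If grid points that have not emerged by the end of a simulation are simply clamped to 2100, the mean is dragged artificially early, while the median remains valid so long as fewer than half of the points are censored. A toy illustration, with made-up dates of my own:

    import numpy as np

    # Hypothetical grid-point emergence dates; np.inf marks points that
    # have not emerged by the end of the simulation (censored values).
    true_dates = np.array([2040, 2055, 2070, 2090, np.inf, np.inf, np.inf])

    clamped = np.where(np.isfinite(true_dates), true_dates, 2100)
    print("mean of clamped dates:  ", clamped.mean())      # biased early
    print("median of clamped dates:", np.median(clamped))  # 2090, still exact

The mean of the clamped dates (about 2079) is guaranteed to be earlier than the true mean, whatever the censored dates turn out to be; the median (2090) is unaffected, because the censored values all lie above it.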

Mora et al. concede the first point in their response to Hawkins and colleagues' arguments (also published on Nature's website), but not the second. “In the accompanying Comment, Hawkins et al. suggest that our index of the projected timing of climate departure from recent variability is biased to occur too early and is given with overestimated confidence,” they state. “We contest their assertions and maintain that our findings are conservative and remain unaltered in light of their analysis.”

What I find interesting is that they are arguing over dates, however derived, that originate with models that have not correctly predicted the last 17 years of climate change. The recent past has not been a time of temperature rise, but decline. Over the entire period used in the baseline analysis the computer models wildly overestimate the temperature rise. The analyses by both teams are demonstrations of GIGO: garbage in, garbage out. Effectively, they are doing the scientific equivalent of rearranging the deck chairs on the Titanic. The doomed, sinking ship is the theory of CO2-induced anthropogenic global warming.

This entire brouhaha, still raging in the pages of Nature, shows what happens when science strays too far from reality. Neither these papers nor the commentary article mean a thing, because they are all based on meaningless data. Erroneous models and obfuscating statistical assumptions about their useless output provide a toxic combination that has poisoned the minds of so many climate scientists—they squabble over the finer points of analyzing crap.

Be safe, enjoy the interglacial and stay skeptical.

The Stagell Interglacial (MIS 11)

You may be interested in this article.

https://markgelbart.wordpress.com/2012/11/12/the-stagell-interglacial-mi...

The earth is now in a prolonged period of nearly circular orbit around the sun. This complicates climate scientists' efforts to discern the difference between natural forcing and anthropogenic influence on earth's climate.

I could find no mention of the Stagell interglacial on your blog.

MIS 11

That is what the search box is for. For example, as I said more than three years ago:

Based on the lengths of previous interglacials, the Holocene warming may have anywhere from 5,000 to 40,000 years left to go. The presence of similar orbital configurations and comparable atmospheric greenhouse gas concentrations has led a number of scientists to suggest that Marine Isotope Stage 11 (MIS 11) is the previous interglacial that most closely matches the Holocene. This means that we could, indeed, see a 10 or 20 meter rise in ocean levels—but this would be due primarily to natural causes, and there isn't a damned thing we can do to stop it. There will be aberrations, but nature's schedule cannot be altered by our pitiful efforts; climate change will unfold in the fullness of time with or without our participation.

see: Ask me for anything but time

Models & Statistics: A Toxic Combination

A very interesting article indeed. Well done.