Sense and Sensitivity

One of the main problems with the “theory” of anthropogenic global warming is its reliance on rising atmospheric CO2 levels to force a global rise in temperature. Climate change proponents predict this by running large, complex computer models that imperfectly simulate the physics of Earth's biosphere: ocean, land and atmosphere. Central to tuning these general circulation models (GCMs) is a parameter called climate sensitivity, a value that purports to capture in a single number the response of global climate to a doubling of atmospheric carbon dioxide. But it has long been known that the Earth system is constantly changing—interactions shifting and factors waxing and waning—so how can a simple linear approximation capture the response of nature? The answer is, it cannot, as a new perspective article in the journal Science reports.
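To make the linear approximation at issue concrete, here is an illustrative sketch, not any model's actual code. It uses the commonly cited simplified logarithmic forcing formula (dF = 5.35 ln(C/C0), which gives roughly 3.7 W/m² per doubling) and assumes the 3°C-per-doubling central sensitivity estimate discussed later:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) from a CO2 change, per the common
    simplified logarithmic formula dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def warming(forcing_wm2, sensitivity_per_doubling=3.0):
    """Linear response: scale the forcing by the assumed sensitivity
    per 3.7 W/m^2 (one CO2 doubling)."""
    return sensitivity_per_doubling * forcing_wm2 / 3.7

dF = co2_forcing(560.0)          # a doubling from the preindustrial 280 ppm
print(round(dF, 2))              # ~3.71 W/m^2
print(round(warming(dF), 2))     # ~3.01 C with a 3 C-per-doubling sensitivity
```

Note how the entire climate response collapses to a single multiplication; that is precisely the simplification the Science perspective calls into question.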

Occasionally, someone within the mainstream of climate science manages to publish a report that reveals what is widely known but hardly ever talked about: the story presented to the public regarding the cause of global warming is a gross over-simplification, more useful for misleading the uninitiated than for accurately predicting future climate change. In “A Long View on Climate Sensitivity,” Luke Skinner, of the Godwin Laboratory for Palaeoclimate Research at the University of Cambridge, has made one of these rare admissions of reality. To be sure, Dr. Skinner is an orthodox climate scientist and adheres to the articles of AGW faith. Here is the first paragraph of the perspective:

Humanity is engaged in an unprecedented climate experiment, the outcome of which is often framed in terms of an equilibrium “climate sensitivity.” This parameter encapsulates the amount of global warming that may be expected as a result of a doubling of the atmospheric carbon dioxide (CO2) concentration, which is equivalent to an additional 3.7 W m−2 of energy available to warm Earth's surface. The current best estimate of climate sensitivity is similar to the earliest estimates by Arrhenius and Callendar, ranging from 2° to 4.5°C. Constraints on the lower limit of this range are much tighter than they are on the upper limit, with small but finite probabilities for very large climate sensitivities. Although the geological record provides strong support for climate sensitivities in this range, it also reminds us that a single value of climate sensitivity is unlikely to provide a complete picture of the climate system's response to forcing.

The admonition to climate scientists cannot get much clearer than that last sentence—using a single value for sensitivity is unlikely to capture how climate responds to CO2. Of course, climate scientists have been roundly criticized for the misapplication of statistical techniques in the past (see “Climate Science's Dirtiest Secret”). A prime example of slipshod statistical methodology is climate sensitivity and the linear model it implies.

Climate scientists have looked at past climate intervals—including the Last Glacial Maximum, late Pleistocene glacial-interglacial cycles, Pliocene, late Eocene, Paleocene-Eocene Thermal Maximum, and Cretaceous—and inferred a linear scaling between global radiative forcing and temperature response. Trouble is, in each of these cases the value of the scaling factor, the sensitivity, is different. The paleoclimate sensitivities implied by these various climate intervals range from <1°C to >7°C per 3.7 W m−2. This is illustrated by the plot below.


Estimates of past global average radiative forcing.

As can be seen, the sensitivity estimates shown here vary widely, between 0.6°C and 6.5°C, but taken together imply a climate sensitivity of slightly less than 3°C. The solid black line is a linear regression on all the data, forced through the origin, with 95% confidence limits (dotted black lines). The author draws some comfort from the fact that ∼3°C is in fairly close agreement with estimates from numerical models, though that agreement comes with caveats. “However, this agreement may mask evidence for nonlinear feedbacks and abrupt climatic transitions that are not captured in the climate sensitivity as commonly defined,” he states.
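For the curious, a zero-intercept regression of this kind reduces to a one-line formula. The (forcing, temperature) pairs below are invented for illustration; they are not the data points from the paper's figure:

```python
def origin_slope(xs, ys):
    """Least-squares slope for y = b*x with the intercept forced to zero:
    b = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Hypothetical paleoclimate (forcing, temperature) estimates:
forcings = [1.0, 2.5, 4.0, 6.0, 8.0]   # W/m^2
temps    = [0.6, 2.2, 3.1, 5.0, 6.3]   # degrees C

b = origin_slope(forcings, temps)   # degrees C per W/m^2
sensitivity = b * 3.7               # degrees C per CO2 doubling
print(round(sensitivity, 2))        # lands near 3 C for these invented points
```

Forcing the line through the origin bakes in the assumption that zero forcing means zero response, which is itself part of the linear picture being criticized.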

With all of this data yielding an average value in close agreement with what the GCM modelers are using, where is the problem? First off, one should remember that an actual measurement is practically never equal to its average—that is why rainfall is always above or below the seasonal average, as is the temperature. Second, climate data is what statisticians call nonstationary (see “Econometrics vs Climate Science”). In plain terms, the system being modeled is constantly changing, which means simple linear regressions do not apply and may be misleading. Though they don't talk about it much, many climate scientists recognize this fact, including the author:

The above difficulties apply especially to paleoclimate sensitivity studies, in which it can be difficult to distinguish between independent radiative forcings and dependent radiative feedbacks. Furthermore, many forcings and feedbacks (such as clouds) can be difficult or impossible to reconstruct, the time scale of sampling or averaging can range from decades to millennia, and the initial climate state and therefore the available feedbacks and climate-adjustment mechanisms will vary greatly.

...

The reduction or elimination of some of the major uncertainties that can affect paleoclimate sensitivity studies, such as temporal and spatial sampling resolution or proxy accuracy, will continue to progress. However, these uncertainties may not be the main challenge to the project of generalizing the concept of climate sensitivity. Rather, as illustrated, for example, by multimodel simulations of past “equilibrium” climate states, the main challenge may lie in the context dependence (nonlinearity) of many climate feedbacks and their varying response times.
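The nonstationarity objection can be made concrete with a toy example: if the forcing-response relationship itself drifts over time, a single pooled regression slope describes no actual era. All of the numbers below are fabricated purely for illustration:

```python
def slope(xs, ys):
    """Zero-intercept least-squares slope, b = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

forcing = [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0]   # W/m^2
# The early "era" responds at 0.5 C per W/m^2, the later one at 1.5:
response = [0.5, 1.0, 1.5, 2.0, 1.5, 3.0, 4.5, 6.0]  # degrees C

early  = slope(forcing[:4], response[:4])   # 0.5
late   = slope(forcing[4:], response[4:])   # 1.5
pooled = slope(forcing, response)           # 1.0, describes neither era
```

The pooled slope is a tidy average that matches neither regime, which is exactly the trap of quoting a single paleoclimate sensitivity.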

Skinner goes on to provide an example of how the complexity of the system can overwhelm the naive use of simple linear models. “The long-standing mystery of the late Pleistocene ice ages illustrates how a constant/universal linearization of global temperature versus radiative forcing can provide an overly narrow view of climate-adjustment processes,” the author states. “These climate cycles were clearly paced by a slowly evolving insolation field that did not produce significant global average radiative anomalies (at least under the usual assumption of a globally uniform response time).”

The Pleistocene mystery arises because scientists cannot identify a strong enough “forcing” that could drive the change in glacial-interglacial timing. The only forcing they have is varying solar irradiance, and this forcing can only be effective because of strong radiative feedbacks. The author, in a nod to consensus science, includes changes in atmospheric CO2 levels as a major feedback mechanism, but this may not be the case. Recently, a group of scientists concluded that large changes in solar UV radiation can, indeed, affect climate by inducing atmospheric changes (see “Earth's Climate Follows The Sun's UV Groove”).

“The emergence of these feedbacks appears to have been strongly conditional on the prevailing climate state and may have also depended on the occurrence of abrupt ("irreversible") transitions in regional climate and the ocean circulation,” adds Skinner. Note that the quotation marks around irreversible are the author's, not mine. It is interesting that climate alarmists, when pressed for a definition of irreversible, admit they mean anything that would take more than 30 years to reverse. One should always be careful translating statements by climate scientists; the vernacular of global warming alarmists is not the same as everyday English.

Here is another example: “Thus, if the sum of all positive and negative feedbacks—the climate gain—can vary as a function of climate, with different impacts on different time scales, then the calibration of climate response to radiative forcing becomes extremely challenging, especially when attempting to generalize across the full range of possible climate states, past and future.”


IPCC sensitivity zones and estimates from the distant past.

What this means, simply, is that since scientists have little idea what mechanisms are at work regulating global climate, they have no idea what value of sensitivity should be applied at the present. And because Earth's climate system is always changing, science cannot simply look to the recent past for an answer (all those pesky nonlinearities and “tipping points”). Yet modelers are constantly setting a value for sensitivity and then “validating” their models against climate data from the recent past—a process called backcasting.

Skinner states that, if our goal is to predict the future climate 50 or 100 years out, then “arguably we must do so within a conceptual framework that augments the notion of climate sensitivity as a straightforward linear calibration of climate gain, with the possibility of nonlinear feedbacks and irreversible transitions in the climate system.” He concludes: “The latter ability, or Earth system "resilience," would be viewed as an emergent and evolving property of the climate system, rather than as a constant.” Sensitivity is beginning to look like that old engineer's joke: Fudge's Variable Constant.

When it comes to Earth's climate, new factors are constantly being discovered. A new, as yet unidentified chemical substance is involved in driving sulfuric acid formation over forests, making existing cloud formation models obsolete. A more direct example of how a changing climate can alter the climate system's response is the discovery that absorption of CO2 has more than doubled over the past half century. These and other discoveries indicate that a more general measure is required of the Earth system's ability to maintain its prevailing state when subjected to forcings.

As we said in The Resilient Earth, the complexity of Earth's climate system far exceeds present-day climate science's ability to understand it. Basing predictions of future climate on CO2 by trying to capture the planet's response in a single value called “sensitivity” is a feeble attempt by an immature science to explain climate change. What is needed is more sense about climate sensitivity, for clearly, trying to find a single value to explain climate change is a fool's game.

Be safe, enjoy the interglacial and stay skeptical.

Sensitivity from MIT

Here is how MIT defines, or rather explains, climate sensitivity in an article on their website:

    Specifically, the term is defined as how much the average global surface temperature will increase if there is a doubling of greenhouse gases (expressed as carbon dioxide equivalents) in the air, once the planet has had a chance to settle into a new equilibrium after the increase occurs. In other words, it’s a direct measure of how the Earth’s climate will respond to that doubling.

    That value, according to the most recent IPCC report, is 3 degrees Celsius, with a range of uncertainty from 2 to 4.5 degrees.

    This sensitivity depends primarily on all the different feedback effects, both positive and negative, that either amplify or diminish the greenhouse effect. There are three primary feedback effects — clouds, sea ice and water vapor; these, combined with other feedback effects, produce the greatest uncertainties in predicting the planet’s future climate.

Are you saying that this is not true? Most of the scientists in the world believe this to be the case; are they all wrong?

Just describing something doesn't make it so

The definition of climate sensitivity given in the MIT article is an accurate portrayal of the IPCC definition. The problem is that the definition itself is horribly inexact and the value it tries to quantify is overly simplistic and inaccurate. You should have quoted the next paragraph from the MIT article:

With no feedback effects at all, the change would be just 1 degree Celsius, climate scientists agree. Virtually all of the controversies over climate science hinge on just how strong the various feedbacks may be — and on whether scientists may have failed to account for some of them.

Based on straightforward physics, you get 1°C for a doubling of carbon dioxide; all the additional increase used to inflate the global warming crisis is supposition—they are guessing, making things up. The article goes on to say that the uncertainties are large and that “feedback is what's driving things.” There are a number of problems with this “definition” of how the climate will respond to a doubling of CO2:

  • Not all of the “feedbacks” are known — if you have followed the postings on this site for the past four years you will know that new feedback mechanisms are discovered almost weekly, some positive and some negative.

  • Known feedbacks are not well understood — the MIT article uses the example of clouds but this also applies to aerosols, solar irradiance (UV in particular), land use changes, etc.

  • The feedbacks are constantly changing — the Earth climate system is not a fixed, static machine that can be easily measured. For every change in the environment there can be a number of changes in the feedback loops in response—not just in the feedbacks themselves, but in the characteristics of the feedback mechanisms. Ocean and atmospheric circulation patterns change, biological systems adjust their uptake of carbon, etc.
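For reference, the standard feedback-gain arithmetic behind the MIT passage (a roughly 1°C no-feedback response amplified by a combined feedback factor f) can be sketched as below; the particular f values are illustrative, not measured quantities:

```python
def amplified_warming(dt0_c=1.0, feedback_factor=0.0):
    """System gain: net warming after feedbacks, dT = dT0 / (1 - f).
    f = 0 gives the bare no-feedback response; f -> 1 diverges."""
    if feedback_factor >= 1.0:
        raise ValueError("f >= 1 implies a runaway response; "
                         "the linear gain formula breaks down")
    return dt0_c / (1.0 - feedback_factor)

print(amplified_warming(1.0, 0.0))              # 1.0 C: no feedbacks at all
print(round(amplified_warming(1.0, 0.5), 2))    # 2.0 C: net-positive feedbacks double it
print(round(amplified_warming(1.0, 2 / 3), 2))  # 3.0 C: a 3 C estimate implies f near 2/3
```

The formula makes the dispute plain: everything above the bare 1°C rides on the assumed value of f, and small changes in f near 1 produce enormous changes in the answer.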

This is why I find the concept of climate sensitivity so tragically farcical: we do not know all of the internal mechanisms; the ones we do know are not accurately quantifiable; and the ones we think we understand can change without notice. The whole idea that the response of Earth's climate can be captured in a single, linear relationship based on a single “forcing” (i.e., CO2) is ludicrous. Are all the world's scientists wrong? If they think that climate sensitivity, as defined by the IPCC, is meaningful then they are not just wrong but delusional. Worse, they are deluding the public.

How lucid

How lucid! What piercing insight! Do go on.