Climate Models Irreducibly Imprecise

A number of recent papers analyzing the nature of climate models have yielded a stunning result little known outside of mathematical circles—climate models like the ones relied on by the IPCC contain “irreducible imprecision.” According to one researcher, all interesting solutions for atmospheric and oceanic simulation (AOS) models are chaotic, hence almost certainly structurally unstable. Furthermore, this instability is an intrinsic mathematical property of the models which cannot be eliminated. The analysis suggests that models should only be used to study processes and phenomena, not for precise comparisons with nature.

The ability to predict the future state of Earth's climate system, given its present state and the forcings acting upon it, is the holy grail of climate science. What most people do not fully appreciate is that any prediction of that system's evolution is severely limited by the fact that we know neither the evolution equations nor the initial conditions with arbitrary accuracy. By necessity, climate models work with a finite number of equations, from initial data determined with finite resolution from a finite set of observations. These limitations are further exacerbated by the addition of structural instability due to finite mesh discretization errors (the real world isn't divided into boxes 10s or 100s of kilometers on a side; the impact of changing mesh size has been well documented in a number of recent studies).
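
To see in miniature what mesh discretization does, here is a toy sketch of my own (not code from any climate model or from the papers discussed here): a one-dimensional advection equation solved with a first-order upwind scheme on grids of different resolution. The true solution simply carries the initial pulse along unchanged; the discretized solution smears it, and the error shrinks only as the grid is refined.

```python
# Toy illustration of discretization error (not an AOS model): solve the
# 1-D advection equation u_t + c*u_x = 0 with a first-order upwind scheme
# and compare against the exact, translated solution on coarse and fine grids.
import numpy as np

def advect_error(nx, c=1.0, t_final=0.5):
    """Return the RMS error of the upwind solution on a periodic grid of nx cells."""
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = 1.0 / nx
    dt = 0.4 * dx / c                                   # CFL-stable time step
    pulse = lambda s: np.exp(-200.0 * (s - 0.25) ** 2)  # initial Gaussian pulse
    u, t = pulse(x), 0.0
    while t < t_final:
        step = min(dt, t_final - t)
        u = u - c * step / dx * (u - np.roll(u, 1))     # upwind difference
        t += step
    exact = pulse((x - c * t_final) % 1.0)              # pulse just moves with speed c
    return np.sqrt(np.mean((u - exact) ** 2))

for nx in (50, 100, 400):
    print(f"grid cells: {nx:4d}   RMS error vs exact solution: {advect_error(nx):.4f}")
```

Real AOS models are enormously more complicated, but the principle is the same: the grid leaves its fingerprints on the answer, and no finite grid removes them entirely.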

In a 2007 paper, James C. McWilliams, of the Department of Atmospheric & Oceanic Sciences at UCLA, termed the impact of errors in AOS models (the difference between the probability density functions, or PDFs, of the modeled climate equilibrium and the true PDFs found in nature) “irreducible imprecision.” The main hypothesis advanced by McWilliams is that structural instability is the primary source of irreducible imprecision for climate change science. In other words, small changes in AOS model parameters or formulation result in significant differences in the longtime PDFs, or the phase-space attractor, and these can affect climate change projections. Virtually all physical systems have structural instability, according to a paper in PNAS by Andrew J. Majda, Rafail Abramov, and Boris Gershgorin:

Climate change science focuses on predicting the coarse-grained, planetary scale, longtime changes in the climate system due to either changes in external forcing or internal variability, such as the impact of increased carbon dioxide. For several decades, the predictions of climate change science have been carried out with some skill through comprehensive computational, atmospheric, and oceanic simulation (AOS) models that are designed to mimic the complex, physical, and spatio-temporal patterns in nature. Such AOS models, either through lack of resolution due to current computing power or through inadequate observation of nature, necessarily parameterize the impact of many features of the climate system such as clouds, mesoscale and submesoscale ocean eddies, sea ice cover, etc. There are intrinsic errors in the AOS models for the climate system and a central scientific issue is the effect of such model errors on predicting the coarse-grained, large-scale, longtime quantities of interest in climate change science.

What is at issue here is the fundamental behavior of turbulent, chaotic dynamical systems, which have been the subject of study for more than a century, beginning with early work on Brownian motion. To understand the true impact of these statements some background information is needed—such as just what a probability density function is. In 1827, Scottish botanist Robert Brown noticed that pollen grains suspended in water jiggled about under the lens of a microscope, describing seemingly random zigzag paths. Even pollen grains that had been stored for a century moved in the same way. The puzzle was why the pollen didn't eventually settle to the bottom of the jar. As explained by Desaulx in 1877, the phenomenon is a result of thermal molecular motion in the liquid environment: a suspended particle is constantly and randomly bombarded from all sides by molecules of the liquid.
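
That random-bombardment picture is easy to mimic numerically. The sketch below is purely illustrative (a bare-bones random walk with made-up step sizes, not a physical simulation): a particle kicked at random in two dimensions traces exactly the kind of zigzag path Brown saw, and, averaged over many such particles, the mean-square displacement grows linearly with time, as Einstein showed it must for Brownian motion.

```python
# A bare-bones random walk standing in for Brownian motion: each particle
# receives independent random kicks in x and y, and the ensemble-averaged
# mean-square displacement grows linearly with the number of kicks.
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps = 2_000, 1_000
kicks = rng.normal(0.0, 1.0, size=(n_walkers, n_steps, 2))  # unit-variance kicks
paths = np.cumsum(kicks, axis=1)                            # positions over time

for t in (10, 100, 1_000):
    msd = np.mean(np.sum(paths[:, t - 1, :] ** 2, axis=1))
    print(f"after {t:5d} kicks: mean-square displacement = {msd:8.1f}   (theory: {2 * t})")
```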

A number of the equations found in climate models come from studies of fluid flow. Where things become really dicey is when the flow becomes turbulent—chaotic in the mathematical sense. This touches on work by Edward Lorenz in the early 1960s, some of which was discussed in The Resilient Earth. Again, to understand the math presented in these papers some background in fluid flow and chaos theory is needed. A fairly accessible source of that background is Matthew Carriuolo's “The Lorenz Attractor, Chaos, and Fluid Flow,” available on the web; it was his undergraduate thesis at Brown University, written in 2005.
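
For readers who want to see sensitive dependence for themselves, here is a minimal sketch (my own toy demonstration, not taken from Carriuolo's thesis or the papers above) that integrates Lorenz's 1963 equations with the standard textbook parameters. Two trajectories that start a hundred-millionth of a unit apart end up on entirely different parts of the attractor.

```python
# Lorenz's 1963 system with the standard parameters (sigma=10, rho=28, beta=8/3),
# integrated with a fixed-step fourth-order Runge-Kutta scheme.  Two nearly
# identical initial conditions diverge until they are as far apart as the
# attractor itself -- sensitive dependence on initial conditions.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps = 0.01, 3_000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])          # perturbed by one part in 10^8
for step in range(1, n_steps + 1):
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
    if step % 500 == 0:
        print(f"t = {step * dt:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```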

In a smoothly flowing fluid, a laminar flow, it is possible to trace the trajectory of a particle or molecule through the system. Unfortunately, in many, if not most, complex natural systems fluid does not flow smoothly. Instead it exhibits swirling, tumbling patterns of turbulence—chaotic flow. Under these conditions the trajectories followed by individual particles are unpredictable: two particles that start next to each other may follow wildly different paths through the system. Instead of trying to predict particle trajectories exactly, scientists turn to a statistical measure of where the particles are likely to be—this is the probability density function. An example of a PDF overlain by an individual particle's trajectory is shown in the figure below, taken from Carriuolo's thesis.

The green region is a representation of the probability density function for the Rössler attractor; the cyan dotted path is an actual phase-space trajectory. From Carriuolo, 2005.
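
The same idea can be demonstrated with something even simpler than the Rössler system. The sketch below (my own illustration, not from the thesis) iterates the chaotic logistic map x → 4x(1−x) for a long time and histograms where the orbit goes: the individual iterates bounce around unpredictably, but the histogram converges on the map's known invariant density, 1/(π√(x(1−x))).

```python
# Estimating a PDF from a chaotic trajectory: iterate the logistic map at r=4,
# histogram the orbit, and compare with the exact invariant density
# p(x) = 1 / (pi * sqrt(x * (1 - x))).
import numpy as np

rng = np.random.default_rng(1)
x = rng.random()                         # arbitrary starting point in (0, 1)
for _ in range(1_000):                   # discard the initial transient
    x = 4.0 * x * (1.0 - x)

n_iter = 200_000
orbit = np.empty(n_iter)
for i in range(n_iter):
    x = 4.0 * x * (1.0 - x)
    orbit[i] = x

hist, edges = np.histogram(orbit, bins=20, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
exact = 1.0 / (np.pi * np.sqrt(centers * (1.0 - centers)))
for c, h, e in zip(centers[::4], hist[::4], exact[::4]):
    print(f"x = {c:.3f}   empirical density = {h:5.2f}   exact density = {e:5.2f}")
```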

Brownian motion follows the Langevin equation, which can be solved directly using numerical methods such as Monte Carlo simulation. This approach, however, can be quite expensive computationally. The main method of solution is by use of the Fokker-Planck equation, which provides a deterministic equation satisfied by the time-dependent probability density. Other techniques, such as path integration, have also been used, drawing on the analogy between statistical physics and quantum mechanics. For physics fans, the Fokker-Planck equation can be transformed into the Schrödinger equation by rescaling a few variables. Unfortunately, being a partial differential equation, the Fokker–Planck equation can be solved analytically only in special cases—generally numerical methods must be used.
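
For reference, these are the standard textbook forms of the two equations (generic notation of my choosing, not the particular formulations used in the papers discussed here): the Langevin equation for a particle of mass m subject to viscous drag and random thermal kicks, and the one-dimensional Fokker–Planck equation for the PDF p(x, t) of a process with drift A(x) and noise strength B(x).

$$ m\,\frac{dv}{dt} = -\gamma v + \eta(t), \qquad \langle \eta(t)\,\eta(t') \rangle = 2\gamma k_B T\,\delta(t - t') $$

$$ \frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\Bigl[A(x)\,p(x,t)\Bigr] + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\Bigl[B^2(x)\,p(x,t)\Bigr] $$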

What is important in this application is that the Fokker–Planck equation can be used for computing the probability densities of stochastic differential equations. The Fokker–Planck equation describes the time evolution of the PDF of the position of a particle, or of some other observable of interest. It is named after Adriaan Fokker and Max Planck, and was first used for the statistical description of (surprise!) the Brownian motion of a particle in a fluid.
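
Here is a minimal sketch of that connection (illustrative only, with made-up parameter values): simulate a simple stochastic differential equation, the Ornstein–Uhlenbeck process dx = −θx dt + σ dW, with many Monte Carlo sample paths, and compare the spread of the simulated positions with the stationary variance σ²/(2θ) predicted by the corresponding Fokker–Planck equation.

```python
# Monte Carlo (Euler-Maruyama) simulation of an Ornstein-Uhlenbeck process,
# dx = -theta*x dt + sigma dW, compared with the stationary PDF predicted by
# the Fokker-Planck equation: a Gaussian with variance sigma^2 / (2*theta).
import numpy as np

rng = np.random.default_rng(2)
theta, sigma = 1.0, 0.5
dt, n_steps, n_paths = 0.01, 5_000, 20_000

x = np.zeros(n_paths)                    # all sample paths start at x = 0
for _ in range(n_steps):                 # Euler-Maruyama time stepping
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

print(f"sample variance of simulated paths : {x.var():.4f}")
print(f"Fokker-Planck stationary variance  : {sigma ** 2 / (2 * theta):.4f}")
```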

In his PNAS paper, “Irreducible imprecision in atmospheric and oceanic simulations,” McWilliams identifies two types of endemic modeling error—sensitive dependence and structural instability. As a result of these errors, “there is a persistent degree of irreproducibility in results among plausibly formulated AOS models. I believe this is best understood as an intrinsic, irreducible level of imprecision in their ability to simulate nature.”

Generic behaviors for chaotic dynamical systems with dependent variables ξ(t) and η(t): (Left) sensitive dependence, small changes in initial or boundary conditions imply limited predictability with (Lyapunov) exponential growth in phase differences, and (Right) structural instability, small changes in model formulation alter the long-time probability distribution function, PDF (i.e., the attractor).
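
Structural instability is easiest to see in a toy system rather than a full AOS model. In the sketch below (my own illustration, not from McWilliams' paper), the logistic map's parameter is changed by less than half a percent, moving it from broadband chaos into a stable period-3 window; the long-time distribution collapses from a broad PDF to three spikes. It is only a caricature, but it captures the point: a small change in formulation, a very different long-time PDF.

```python
# Toy structural instability: a tiny change in the logistic map's parameter r
# moves it from chaos (r = 3.82) into a stable period-3 window (r = 3.835),
# and the long-time distribution of iterates changes completely.
import numpy as np

def long_time_histogram(r, n_burn=10_000, n_keep=100_000, bins=10):
    """Fraction of long-run iterates landing in each of `bins` equal bins of [0, 1]."""
    x = 0.4
    for _ in range(n_burn):                       # discard the transient
        x = r * x * (1.0 - x)
    counts = np.zeros(bins)
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        counts[min(int(x * bins), bins - 1)] += 1
    return counts / n_keep

for r in (3.82, 3.835):                           # nearly identical parameter values
    h = long_time_histogram(r)
    print(f"r = {r}: occupancy of 10 bins = {np.round(h, 3)}")
    print(f"          bins visited in the long run = {int(np.sum(h > 0))}")
```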

For climate models, McWilliams states, “their solutions are rarely demonstrated to be quantitatively accurate compared with nature.” What's more, “their partial inaccuracies occur even after deliberate tuning of discretionary parameters to force model accuracy in a few particular measures.” McWilliams attributes this to differences between the model's predicted long-term, steady state solution and the steady state conditions of the natural system. The way these differences are determined is by comparing PDFs of the model and the natural environment.

The last item of math-speak that you need to know to understand the McWilliams and Majda et al. papers is the “Lyapunov characteristic time.” When you have a system of differential equations that meets the necessary mathematical restrictions discussed above, the Lyapunov exponent, or Lyapunov characteristic exponent, can be computed (there is actually a whole spectrum of these exponents, with as many exponents as there are dimensions in the phase space). The largest exponent characterizes the rate of separation of infinitesimally close trajectories in phase space and can determine the predictability of the system in question. The inverse of the largest Lyapunov exponent is sometimes referred to in the literature as the Lyapunov time. In simple terms, it can provide a time limit on the validity of a model's future predictions (a toy estimate for the Lorenz system is sketched below, after the quotation). Given that, here is what Majda et al. have to say about the current crop of GCM climate models:

Contemporary climate models are typically characterized by a set of fast “weather” variables that describe small-scale interactions on a short time scale of a few hours, nonlinearly coupled with the large-scale slow “climate” variables. This setup causes the largest Lyapunov exponents and, consequently, the characteristic Lyapunov time to be extremely short and associated with the fast variables, whereas the response of the mean climate state is tied to the decorrelation times of the slow-climate variables. Therefore, it is likely that the typical time of climate response development will be much longer than the Lyapunov characteristic time, and the irreducible imprecision noted above may potentially have a remarkable impact.
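
To make the Lyapunov time concrete, here is the toy estimate promised above, for the Lorenz system rather than any climate model: follow a reference orbit and a slightly perturbed orbit, measure how fast they separate, and renormalize the separation at every step so it stays small. For the standard parameters the accepted value of the largest exponent is roughly 0.9, giving a Lyapunov time of about one model time unit.

```python
# Estimate the largest Lyapunov exponent of the Lorenz system with the classic
# two-trajectory method: track the separation of a reference and a perturbed
# orbit, renormalizing it at every step, and average the logarithmic growth.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1); k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps, d0 = 0.01, 50_000, 1e-8
a = np.array([1.0, 1.0, 1.0])
for _ in range(1_000):                    # let the orbit settle onto the attractor
    a = rk4_step(lorenz, a, dt)
b = a + np.array([d0, 0.0, 0.0])

log_growth = 0.0
for _ in range(n_steps):
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
    d = np.linalg.norm(b - a)
    log_growth += np.log(d / d0)
    b = a + (b - a) * (d0 / d)            # rescale the separation back to d0
lyap = log_growth / (n_steps * dt)
print(f"largest Lyapunov exponent ~ {lyap:.2f}   Lyapunov time ~ {1.0 / lyap:.2f}")
```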

If there is any doubt that such imprecision leads to a wide range of variability in model predictions, look at the figure below, which shows a number of models' predictions of globally averaged surface air temperature change in response to emissions scenario A2 of the IPCC Special Report on Emissions Scenarios. Note that atmospheric CO2 levels are double present concentrations by year 2100.

As can be seen, a large disparity exists among various climate models in their prediction of change in global mean surface air temperature. The predicted temperature rise for 2100 ranges from a low of ~1°C to a high approaching 6°C. Although each climate model has been optimized to reproduce observational means, each model contains slightly different choices of model parameter values as well as different parametrizations of under-resolved physics. This is why I have repeatedly stated that climate modeling is no substitute for real climate science. Sadly, the IPCC's climate scientists have known about their models' weaknesses from the start.

Majda et al. wrote their paper to suggest alternate ways of modeling climate systems. Whether being able to solve more accurately for the long-term climate trend would prove sufficient is an open question. If the best you can do is say that it is going to get warmer for a while and then, within say 10,000 years, Earth will start the slow descent into another glacial period, most people, politicians and the media in particular, will show little interest. In addition to Majda et al.'s fluctuation-dissipation theorem approach, other types of model have recently been suggested. Regardless, given today's models, the predictions climate change alarmists base their case on cannot be trusted. To quote McWilliams: “Such simulations provide fuller depictions than those provided by deductive mathematical analysis and measurement (because of limitations in technique and instrumental-sampling capability, respectively), albeit with less certainty about their truth.”

Scientists are currently arguing about temperature changes of tenths of degrees per decade or even per century. Given the state of GCMs and available computer resources, valid predictions of climate changes of these magnitudes simply cannot be accurately calculated. This is not a matter of opinion, it is a statement of fact based on mathematical analysis of climate models by multiple scholars. To base the future of the world's economy and possibly the course of human civilization on climate model predictions is insanity.

Be safe, enjoy the interglacial and stay skeptical.

You might be interested …

Your post brought to mind this article by Jeffrey A. Glassman, "The Cause of the Earth's Climate Change is the Sun," http://www.rocketscientistsjournal.com/2010/03/sgw.html, in the following respect (quoting from the article):

"This model hypothesis that the natural responses of Earth to solar radiation produce a selecting mechanism. The model exploits evidence that the ocean dominates Earth's surface temperature, as it does the atmospheric CO2 concentration, through a set of delays in the accumulation and release of heat caused by three dimensional ocean currents. The ocean thus behaves like a tapped delay line, a well-known filtering device found in other fields, such as electronics and acoustics, to amplify or suppress source variations at certain intervals on the scale of decades to centuries. A search with running trend lines, which are first-order, finite-time filters, produced a family of representations of TSI as might be favored by Earth's natural responses. One of these, the 134-year running trend line, bore a strong resemblance to the complete record of instrumented surface temperature, the signal called S134."

A nicely formatted pdf of this article is here: http://journal.crossfit.com/2010/04/glassman-sgw.tpl#featureArticleTitle

Best Regards,

If the right knowledge base had been consulted

this issue could've been resolved from the start. The first paragraph of this post is saying the same thing I've been saying since I first heard several years ago that climate researchers were using computer models to make predictions. My computer science professors from 20 years ago could've told people what "a number of recent papers" said. They told us about the unreliability of trying to simulate chaotic non-linear systems. Heck, even my professor for freshman economics could've told people this. Economists have used chaotic non-linear models for decades. What I was told years ago was that everyone knew they were not good enough for prediction, just for study. Judging from what Alan Greenspan said in the aftermath of the economic crash of 2008, that may have changed. Not that the economic models had gotten any better, but the attitude towards them in the financial community probably did.

What's the point in developing knowledge to try to help humanity when the implementors of systems remain blissfully ignorant? I was telling a climate modeler who believes in catastrophe about this, this past summer. I was like Charlie Brown's teacher. All he heard was "Blah blah blah, blabbity-blah." I guess he had no incentive to listen, since I was telling him his job was not as significant (though not totally useless) as he thought it was. I imagine being on the other end of that kind of message, and taking it to heart, would probably be depressing. Best avoid thinking about it, move on, and keep getting one's paycheck.