Modeling Flaws Plague Climate Science

Despite a decade and a half without temperature rise, climate scientists still stubbornly stick to their predictions of steadily increasing global temperatures. These predictions are all based on GCMs, computer programs that model the circulation of Earth's atmosphere and oceans, along with a myriad of other factors, in an attempt to simulate our planet's climate system. The problem is that the computer models are severely flawed, and flawed at such a fundamental level that two climate modelers have called for a reassessment of all computer models currently in use. Sadly, a number of the flaws they point out have been known to scientists for decades, yet mainstream climate science continues to rely on these broken models, hoping to get lucky with predictions made for the wrong reasons.

Most people have never heard of Joseph Smagorinsky, an American meteorologist and the first director of the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory. A half century ago, the late Dr. Smagorinsky was one of the first climate scientists to attempt to model the Earth system using computers, creating the first general circulation model, or GCM. In his doctoral dissertation, Smagorinsky developed a new theory of how heat sources and sinks in midlatitudes, created by the thermal contrast between land and oceans, disturbed the path of the jet stream. In 1955, at the instigation of computer visionary John von Neumann, the U.S. Weather Bureau created a General Circulation Research Section under Smagorinsky's direction.

Now, a recent paper published in the journal Science, by Bjorn Stevens and Sandrine Bony, of the Max Planck Institute for Meteorology and the Laboratoire de Météorologie Dynamique–Institut Pierre Simon Laplace, respectively, examines the progress in computer climate modeling. In the article, titled “What Are Climate Models Missing?,” the authors return to basics to ask what is wrong with today's complex and cumbersome numerical models, and their judgment is harsh. Their findings are captured in the article's abstract:

Fifty years ago, Joseph Smagorinsky published a landmark paper describing numerical experiments using the primitive equations (a set of fluid equations that describe global atmospheric flows). In so doing, he introduced what later became known as a General Circulation Model (GCM). GCMs have come to provide a compelling framework for coupling the atmospheric circulation to a great variety of processes. Although early GCMs could only consider a small subset of these processes, it was widely appreciated that a more comprehensive treatment was necessary to adequately represent the drivers of the circulation. But how comprehensive this treatment must be was unclear and, as Smagorinsky realized, could only be determined through numerical experimentation. These types of experiments have since shown that an adequate description of basic processes like cloud formation, moist convection, and mixing is what climate models miss most.

The authors return to Smagorinsky's fundamental question: what level of process detail is necessary to understand the general circulation? The paper's conclusion: “There is now ample evidence that an inadequate representation of clouds and moist convection, or more generally the coupling between atmospheric water and circulation, is the main limitation in current representations of the climate system.” This conclusion comes as little surprise to most climate scientists.

In a GCM the most fundamental interaction is that between the atmosphere and the ocean; that is where the action is in terms of energy transfer. Part and parcel of this energy transfer is the behavior of water, particularly as water vapor. Evaporation drives convection and cloud formation, which in turn shape radiative transfer and precipitation patterns. If that part of a climate model gets nature wrong, then all later additions amount to dressing up a corpse.
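To get a feel for the magnitudes involved, consider the standard bulk formula used to parameterize evaporative energy transfer from the sea surface. The sketch below uses illustrative open-ocean values of my own choosing, not numbers from the paper; the point is simply that even a modest fractional error in this one parameterized flux is large compared with the forcings the models are trying to resolve.

```python
# Back-of-envelope bulk aerodynamic formula for air-sea latent heat flux.
# All input values are illustrative, typical of a warm open-ocean surface.
rho_air = 1.2      # near-surface air density, kg/m^3
L_v = 2.5e6        # latent heat of vaporization of water, J/kg
C_E = 1.3e-3       # bulk transfer coefficient for moisture (dimensionless)
wind = 7.0         # 10 m wind speed, m/s
q_sea = 0.020      # saturation specific humidity at the sea surface, kg/kg
q_air = 0.015      # specific humidity of the overlying air, kg/kg

# Latent heat flux: rho * L_v * C_E * U * (q_sea - q_air), in W/m^2
latent_flux = rho_air * L_v * C_E * wind * (q_sea - q_air)
print(f"latent heat flux ~ {latent_flux:.0f} W/m^2")   # about 136 W/m^2

# A 10% error in the transfer coefficient alone shifts the flux by ~14 W/m^2,
# several times the ~3.7 W/m^2 forcing attributed to a doubling of CO2.
print(f"10% coefficient error ~ {0.1 * latent_flux:.0f} W/m^2")
```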

To demonstrate that this limitation constitutes a major roadblock to progress in climate science, the authors performed a number of simple numerical experiments. Using idealized simulations of a waterworld that “neglect complex interactions among land surface, cryosphere, biosphere, and aerosol and chemical processes,” they show that key uncertainties—those associated with the response of clouds and precipitation to global warming—are as large as they are in current comprehensive Earth System Models. The results of several of these simulations are shown in the figure below.


[Figure: Output from simplified circulation models.]

Shown are changes in the radiative effects of clouds and in precipitation accompanying a uniform warming of 4°C, as predicted by four models from Phase 5 of the Coupled Model Intercomparison Project (CMIP5) for a water planet with prescribed surface temperatures. As can be seen, the response patterns of clouds and precipitation to warming vary dramatically from model to model. The question naturally arises: which one should we believe? The answer is that none of them gets it right.
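To see what “vary dramatically” means in practice, one can line the model responses up and ask where they even agree on the sign of the change. The sketch below does exactly that, using invented numbers for illustration only; the figure's actual values are not reproduced here.

```python
import numpy as np

# Hypothetical zonal-mean changes in cloud radiative effect (W/m^2) for four
# models across four latitude bands -- invented numbers for illustration only.
responses = np.array([
    [ 2.0,  1.0, -0.5, -1.0],   # model A
    [-1.5,  0.5,  1.0, -2.0],   # model B
    [ 3.0, -1.0,  0.5,  0.5],   # model C
    [-0.5,  2.0, -1.5,  1.0],   # model D
])
bands = ["tropics", "subtropics", "midlatitudes", "polar"]

for i, band in enumerate(bands):
    col = responses[:, i]
    agree = np.all(np.sign(col) == np.sign(col[0]))  # do all models match model A's sign?
    print(f"{band:12s} mean {col.mean():+.1f}, spread {col.std():.1f}, "
          f"models agree on sign: {agree}")
```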

Stevens and Bony go on to discuss the key uncertainties that cripple modeling accuracy. They point to later efforts to improve the models by increasing their complexity as having obscured the underlying reasons for their failure. In their own words:

The increase in complexity has greatly expanded the scope of questions to which GCMs can be applied. Yet, it has had relatively little impact on key uncertainties that emerged in early studies with less comprehensive models. These uncertainties include the equilibrium climate sensitivity (that is, the global warming associated with a doubling of atmospheric carbon dioxide), arctic amplification of temperature changes, and regional precipitation responses. Rather than reducing biases stemming from an inadequate representation of basic processes, additional complexity has multiplied the ways in which these biases introduce uncertainties in climate simulations.

In other words, all the recent efforts to fix the models by adding more and more factors amount to little more than slapping band-aids on an infected wound. In fact, they make matters worse. Yet still the modelers churn out graphs showing rising temperatures with 90% confidence bounds, and the scientifically ignorant and mathematically naive accept their predictions as fact. The truth is that climate modelers have absolutely no basis in reality to make such confidence claims.
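It is easy to see why the equilibrium climate sensitivity the authors mention refuses to narrow. Here is a minimal zero-dimensional energy-balance sketch; the feedback values below are illustrative round numbers of my own, not figures from Stevens and Bony. Equilibrium warming is the doubled-CO2 forcing divided by the net feedback parameter, and the poorly constrained cloud term alone swings the answer by nearly a factor of two.

```python
import math

# Radiative forcing from doubling CO2, using the widely cited logarithmic
# approximation dF = 5.35 * ln(C/C0) W/m^2 (Myhre et al., 1998).
F_2x = 5.35 * math.log(2)   # ~3.7 W/m^2

planck = 3.2             # stabilizing Planck response, W/m^2 per K (well constrained)
water_vapor_lapse = 1.1  # amplifying water vapor + lapse-rate feedback (illustrative)
surface_albedo = 0.3     # amplifying ice/snow albedo feedback (illustrative)

# Sweep a plausible range for the cloud feedback, the least constrained term.
for cloud in (-0.2, 0.3, 0.8):
    lam = planck - water_vapor_lapse - surface_albedo - cloud  # net feedback parameter
    print(f"cloud feedback {cloud:+.1f} W/m^2/K -> "
          f"equilibrium sensitivity {F_2x / lam:.1f} K")
```

Running this gives roughly 1.9 K, 2.5 K, and 3.7 K of warming for the same forcing, which is about the spread the models have exhibited for decades.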

In the excellent and approachable book "The Signal and the Noise: Why So Many Predictions Fail—but Some Don't," statistician Nate Silver performs an interesting analysis of many of the predictions we encounter in our modern lives. Silver notes that when the Weather Service says there is a 1 in 10 chance of rain, it really does rain about 10% of the time. On any particular day random chance may trip up the forecaster; blame it on capricious nature. Still, such predictions prove valid over the long run, because forecasters have many decades of historical data supporting their predictive models. In other cases this is not so. As Silver puts it:

This explanation becomes less credible, however, when the forecaster does not have a history of successful predictions and when the magnitude of his error is larger. In these cases, it is much more likely that the fault lies with the forecaster's model of the world and not with the world itself.
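Silver's calibration test is simple enough to write down in a few lines. The sketch below uses an invented forecast archive, not his data; it buckets forecasts by their stated probability and compares each bucket's claim with what actually happened. The catch, of course, is that the check only means something when the archive spans many years.

```python
from collections import defaultdict

# Hypothetical forecast archive: (stated probability of rain, did it rain?)
archive = [
    (0.1, False), (0.1, False), (0.1, True),  (0.1, False), (0.1, False),
    (0.1, False), (0.1, False), (0.1, False), (0.1, False), (0.1, False),
    (0.7, True),  (0.7, True),  (0.7, False), (0.7, True),  (0.7, True),
]

# Group outcomes by the probability the forecaster stated.
buckets = defaultdict(list)
for prob, rained in archive:
    buckets[prob].append(rained)

# A well-calibrated forecaster's 10% days rain about 10% of the time.
for prob, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"forecast {prob:.0%}: rained {observed:.0%} of {len(outcomes)} days")
```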

So far, GCMs have little history to fall back on, and for much of that history their predictions have been wrong. Could this be because, as Stevens and Bony assert, the models are wrong in their fundamental assumptions? Or could it be, as others claim, that if we can just make the models complicated enough to capture all the nuances of nature, they will generate correct forecasts? No one really knows, but we do know this: the current crop of models is not accurate and should not be treated as scientifically valid.

Be safe, enjoy the interglacial and stay skeptical.