Climate Models' “Basic Physics” Falls Short of Real Science

Recently, a PR offensive has been mounted by the minions of climate alarmism, attempting to rehabilitate the soiled reputation of climate models. Most everyone has by now heard of the 18+ year halt in global temperature increase, dubbed the "pause" or "hiatus" by climate change advocates. This hiatus, occurring in the face of ever-rising atmospheric CO2 levels, has caused even the most die-hard climate alarmists to doubt the veracity of climate science's digital oracles. The latest phrase being test-marketed in the green stooge press is the claim that climate models are just "Basic Physics", implying that they are in some way scientifically accurate. Nothing could be further from the truth.

Computer models are used extensively in widely divergent fields of human endeavor: governments use them to predict economic trends and outbreaks of communicable disease; engineers use them to simulate the safety limits of structures and the aerodynamic properties of new aircraft designs; and models are invaluable in many industries, where they greatly reduce the cost of prototyping new gadgets. So if we trust computers to design bridges and airliners, why not trust them to predict future climate change? Well, for a number of reasons.

The first reason is that even simple structural models (finite element analysis) are only used to get a general idea of the basic integrity of a design. Engineers always add generous safety margins to ensure the soundness of buildings and other structures. When they don't, the result is often collapse, death, and lawsuits. The same is true for designing aircraft using virtual wind tunnels to model airflow. Even after extensive design work and modeling, the actual plane must be built, tested, and flown. Often modifications must be made to correct for things missed in the virtual world.

Note that the equations for fluid flow have been known for nearly two centuries. The Navier–Stokes equations are used extensively in video games to model a wide variety of natural phenomena, including small-scale gaseous flows such as fire and smoke.


Incompressible Navier–Stokes equation.
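Written out, one common form of these equations (for velocity v, pressure p, constant density ρ, viscosity μ and body force f) is:

$$\rho\left(\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{v} + \mathbf{f}, \qquad \nabla\cdot\mathbf{v} = 0 .$$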

The fundamental equations governing statics and dynamics date back at least as far, though it was only in the 1960s that computers made calculating numerical solutions to networks of partial differential equations practical. Known as the finite element method (FEM), this computer-based numerical technique finds approximate solutions to boundary value problems for partial differential equations. It is widely available in Computer Aided Design (CAD) packages and can be applied to a wide range of problems.
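To give a flavor of what this looks like in practice, here is a minimal one-dimensional finite element sketch; the problem, mesh size and variable names are illustrative choices, not taken from any particular CAD package. It solves the boundary value problem -u''(x) = 1 on [0,1] with u(0) = u(1) = 0 using piecewise-linear elements, which reduces to a small tridiagonal system of equations:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Solve -u''(x) = 1 on [0,1] with u(0) = u(1) = 0 using piecewise-linear
# finite elements on a uniform mesh of $N elements. The exact solution
# is u(x) = x(1-x)/2, so the printed values can be checked by hand.
my $N = 10;            # number of elements (chosen for illustration)
my $h = 1.0 / $N;      # element size

# Assemble the tridiagonal stiffness matrix (2/h on the diagonal,
# -1/h off the diagonal) and the load vector (h) for the N-1 interior nodes.
my ( @diag, @lower, @upper, @rhs );
for my $i ( 0 .. $N - 2 ) {
    $diag[$i]  = 2.0 / $h;
    $lower[$i] = -1.0 / $h;    # sub-diagonal entry (unused in the first row)
    $upper[$i] = -1.0 / $h;    # super-diagonal entry (unused in the last row)
    $rhs[$i]   = $h;
}

# Thomas algorithm: forward elimination followed by back substitution.
for my $i ( 1 .. $N - 2 ) {
    my $m = $lower[$i] / $diag[ $i - 1 ];
    $diag[$i] -= $m * $upper[ $i - 1 ];
    $rhs[$i]  -= $m * $rhs[ $i - 1 ];
}
my @u;
$u[ $N - 2 ] = $rhs[ $N - 2 ] / $diag[ $N - 2 ];
for ( my $i = $N - 3; $i >= 0; $i-- ) {
    $u[$i] = ( $rhs[$i] - $upper[$i] * $u[ $i + 1 ] ) / $diag[$i];
}

# Print the finite element solution next to the exact answer at each node.
for my $i ( 0 .. $N - 2 ) {
    my $x = ( $i + 1 ) * $h;
    printf "x = %.2f   FEM u = %.6f   exact u = %.6f\n",
        $x, $u[$i], $x * ( 1 - $x ) / 2;
}
```

For this simple problem the printed nodal values should match the exact solution to within roundoff, which is exactly the kind of verification against known answers that engineers rely on before trusting a model.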

It should come as no surprise that General Circulation Models (GCMs), the basis for more comprehensive computer climate models, are based on differential equations, as are weather forecasting models and hurricane path prediction models. As we all know, weather forecasts are not very accurate, giving only a general idea of conditions a few days out, and hurricane models generally cannot predict the point of landfall until just before a storm comes ashore. But GCMs are different from weather programs even though they use some of the same equations, or so goes a refrain often repeated by supercilious climate modelers. It is true that climate models also include extra factors like sea ice models and "parametrization" for things like clouds. Unfortunately for them, their models are not immune to the laws of computation that make their short-term cousins grow more and more inaccurate over time.

The figure above shows how GCMs have diverged from reality over the past two decades. The only reason the models maintained a decent correlation with observed temperatures in the past is that they were adjusted to match the historical record up until recent times. The image above is actually a cleaned-up simplification of the situation, because there are many different computer climate models, and each time they are run they give different answers based on slight changes in their input parameters and settings. Reality looks more like the plot below.

How, you might ask, can a climate model generate so many different (and wrong) outcomes? If they are "basic physics" they must be based on the same equations, right? And therein lies the first major problem with climate models: Earth's climate is far too complex, and operates on far too large a scale, for any computer model to make an accurate guess at how it works. Consider for a moment what the models are trying to represent.

Every molecule in Earth's atmosphere interacts with its neighbors: many different types of gas molecules, plus dust particles, ice crystals and other particulates. They also interact with the surface of the planet, solid surfaces made up of rock, dirt and soil. Some areas are covered with vegetation ranging from grasslands to Arctic tundra to forests. Elsewhere the air interacts with water: the surfaces of oceans frozen and tropical, pack ice and wind-blown tempests. The interface between air and Earth's surface is a boundary zone and must be accurately modeled to get heat transfer right. Different surface areas are at different temperatures and can emit gases, primarily H2O, which change the composition of the air above them. Evaporation absorbs heat and carries it into the atmosphere, where it is released later when the water condenses into ice crystals, clouds, and precipitation.

Similarly, water in the ocean flows around the planet, rising and sinking according to its density. Driven by Earth's rotation and the wind, and diverted by the continents and the topography of the ocean floor, ocean circulation is as complex as the atmosphere's. Fresh water from the continents, precipitation and melting ice bring changes in salinity, which changes water density and so affects circulation. Water temperature also changes as the ocean radiates heat, absorbs sunlight, and exchanges energy with the atmosphere.

There are roughly 1.09×10⁴⁴ molecules in Earth's atmosphere and 4.4×10⁴⁶ in the oceans, many times more than there are stars in the observable Universe. Each one is interacting with its fellows, exchanging kinetic energy. Others are busy absorbing and re-emitting radiation, helping keep Earth at habitable temperatures. Throw in the Sun, the changing seasons and the occasional volcanic eruption, and it is easy to see why climate science has failed to accurately model Earth's climate: there is no computer in the world capable of calculating more than a crude approximation of the climate machine that surrounds us. The figure below shows how models carve our planet into a coarse grid in order to make simulating it a tractable problem. The bottom line is that basic physics alone is not sufficient to solve this problem, so most of the detail is left out or approximated.
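To put those numbers in perspective, suppose, purely for illustration, that a model divides the atmosphere into cells roughly 2° of latitude by 2° of longitude with 30 vertical layers (a resolution typical of models of this class, though the exact figures vary). Then

$$90 \times 180 \times 30 \approx 4.9\times10^{5}\ \text{cells}, \qquad \frac{1.09\times10^{44}\ \text{molecules}}{4.9\times10^{5}\ \text{cells}} \approx 2\times10^{38}\ \text{molecules per cell},$$

so each grid cell is represented by a handful of averaged quantities standing in for the behavior of roughly 2×10³⁸ molecules.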

So, by necessity, the models are at best inexact representations of the real world. But that is not the only reason why they fail. When solving differential equations it is important to correctly set the initial conditions, the state of the climate at the beginning of the simulation. This means you must know, at a minimum, the temperature everywhere around the world, in the various layers of the atmosphere and in the depths of the sea. The extent of sea ice and vegetation also varies over time, which changes Earth's albedo, the fraction of radiation from the Sun that gets reflected from the surface of the planet.
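The sensitivity to those initial values can be demonstrated with something far simpler than a climate model. The sketch below uses the logistic map, a standard textbook example of a chaotic iteration; it illustrates the mathematical point only, and the parameter value and starting numbers are arbitrary choices, not anything drawn from climate data:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustration only: the logistic map x -> r*x*(1 - x) is not a climate
# model, but it shows how two runs of the same nonlinear iteration,
# started from values that differ by one part in a billion, soon bear
# no resemblance to one another.
my $r  = 3.9;             # a parameter value in the map's chaotic regime
my $xa = 0.400000000;     # first initial condition
my $xb = 0.400000001;     # second initial condition, off by 1e-9

printf "%4s  %12s  %12s  %12s\n", "step", "run A", "run B", "difference";
for my $step ( 1 .. 50 ) {
    $xa = $r * $xa * ( 1 - $xa );
    $xb = $r * $xb * ( 1 - $xb );
    printf "%4d  %12.6f  %12.6f  %12.2e\n", $step, $xa, $xb, abs( $xa - $xb )
        if $step % 10 == 0;
}
```

Two runs that start out agreeing to nine decimal places end up bearing no resemblance to each other after a few dozen steps, which is why getting the initial conditions right matters so much.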

What's more, none of the initial data are known exactly; all such measurements come with significant uncertainty bounds. Even the incredibly meaningless global temperature figures put out by various government agencies have uncertainty. Here is an explanation from NOAA's National Climatic Data Center (NCDC).

Evaluating the temperature of the entire planet has an inherent level of uncertainty. Because of this, NCDC provides values that describe the range of this uncertainty, or simply “range”, of each month's, season's or year's global temperature anomaly. These values are provided as plus/minus values. For example, a month's temperature anomaly may be reported as “0.54°C above the 20th Century average, plus or minus 0.08°C.” This may be written in shorthand as “+0.54°C +/- 0.08°C.” Scientists, statisticians and mathematicians have several terms for this concept, such as “precision”, “margin of error” or “confidence interval”.

That explanation is a bit misleading, since the "global" temperature is a composite of many other temperature readings. All measurements have some uncertainty, or error, associated with them, and historical temperature records are very spotty indeed. A number of analyses of temperature data have appeared in the literature; the conclusion from a paper by Roger Pielke Sr. et al. is representative.

A major conclusion of our analysis is that there are large uncertainties associated with the surface temperature trends from the poorly sited stations. Moreover, rather than providing additional independent information, the use of the data from poorly sited stations provides a false sense of confidence in the robustness of the surface temperature trend assessments.

Indeed, the illustration below, taken from "Uncertainty In The Global Average Surface Air Temperature Index: A Representative Lower Limit," by Patrick Frank, shows significantly larger error bounds than those claimed by the NCDC above. The plotted data are from the global surface air temperature anomaly series through 2009, as updated on 18 February 2010, from NASA GISS. The grey error bars show the annual anomaly lower-limit uncertainty of ±0.46°C.

This is the second factor limiting the accuracy of climate models: climate scientists do not have complete data with which to set the initial conditions of their models, and the data they do have contain significant sensor errors. But that is just the beginning of what happens to uncertain data in a computer model simulation.

The third factor that helps render climate models wildly inaccurate is computational error. Most modelers are not computer scientists, for if they were they would not be so sanguine regarding the prognostications of their binary friends. There are many types of error that plague computer calculations. Of these I will mention three: representational error, roundoff error and propagated error. As presented in The Resilient Earth:

Even if the data used to feed a model were totally accurate, error would still arise. This is because of the nature of computers themselves. Computers represent real numbers by approximations called floating-point numbers. In nature, there are no artificial restrictions on the values of quantities, but in computers a value is represented by a limited number of digits. This causes two types of error: representational error and roundoff error.

Representational error can be readily demonstrated with a pocket calculator. The value 1/3 cannot be accurately represented by the finite number of digits available to a calculator. Entering a 1 and dividing by 3 will yield a result of 0.33333... to some number of digits. This is because there is no exact representation for 1/3 using the decimal, or base 10, number system. This problem is not unique to base 10; all number systems have representational problems.
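The same demonstration can be run in the binary arithmetic computers actually use. A two-line Perl experiment makes the point; the digits shown in the comments assume ordinary IEEE-754 double precision and may differ slightly on other systems:

```perl
#!/usr/bin/perl
# Print more digits than a double-precision value actually holds.
# Neither 1/3 nor 0.1 has an exact binary representation, so the
# stored values are very close to, but not exactly, the intended ones.
printf "1/3 is stored as %.20f\n", 1 / 3;   # roughly 0.33333333333333331483
printf "0.1 is stored as %.20f\n", 0.1;     # roughly 0.10000000000000000555
```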

The other type of error, roundoff, is familiar from daily life in the form of sales tax. If a locale has a 7% sales tax and a purchase totals $0.50, the added tax should be $0.035 but, since there is no coin for half a cent, the tax added is only $0.03, dropping the lowest digit. The tax value was truncated, or rounded down, in order to fit the available number of digits. In computers, multiplication and division often produce more digits than the machine's limited representation can hold. Arithmetic operations can be ordered to minimize the error introduced by roundoff, but some error will still creep into the calculations.
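Inside a computer the same thing happens with binary digits instead of pennies. In the sketch below (again assuming ordinary double-precision arithmetic), adding one tenth to itself ten times does not quite produce one, because each addition must round its result to fit the available bits:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Add 0.1 ten times and compare the result with 1.0. Each addition
# rounds to the nearest representable double, and the tiny errors
# left over do not cancel out exactly.
my $sum = 0;
$sum += 0.1 for 1 .. 10;
printf "sum          = %.20f\n", $sum;
printf "sum == 1.0 ?   %s\n", $sum == 1.0 ? "yes" : "no";
```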

After the uncertainties present in data are combined with errors introduced by representation and roundoff, a third type of computational error arises—propagated error. What this means is that errors present in values are passed on through calculations, propagated into the resulting values. In this book, values have often been presented as a number followed by another number prefaced with the symbol “±” which is read “plus or minus.” This notation is used to express uncertainty or error present in a value: 10±2 represents a range of values from 8 to 12, centered on a value of 10.

When numbers containing error ranges are used in calculations, rules exist that describe how the error they contain is passed along. The simplest rule is for addition: the two error ranges are added to yield the error range for the result. For example, adding 10±2 to 8±3 gives a result of 18±5. There are other, more complicated rules for multiplication and division, but the concept is the same. When dealing with complicated equations and functions, like sines and cosines, how error propagates is determined using partial derivatives. The mathematics of error propagation rapidly becomes very complex and, as seen in the MD example related above, errors can build up until the model is overwhelmed.
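Written out, the worst-case rules used here look like this: for sums and differences the absolute uncertainties add, for products and quotients the relative uncertainties add, and for a general function the first-order uncertainty follows from its partial derivatives:

$$
\begin{aligned}
z = x + y \ \text{or}\ z = x - y:&\qquad \delta z = \delta x + \delta y,\\
z = x\,y \ \text{or}\ z = x/y:&\qquad \frac{\delta z}{|z|} \approx \frac{\delta x}{|x|} + \frac{\delta y}{|y|},\\
z = f(x,y):&\qquad \delta z \approx \left|\frac{\partial f}{\partial x}\right|\delta x + \left|\frac{\partial f}{\partial y}\right|\delta y.
\end{aligned}
$$

Statisticians often combine independent uncertainties in quadrature instead, which yields somewhat smaller ranges; the worst-case rules above match the 10±2 plus 8±3 equals 18±5 convention used in the text.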

To demonstrate how these sources of computational error can overwhelm the results of even simple calculations, consider the following example. This example cannot be explained without using equations, but understanding them is not essential—only understanding the final result is important. Having said that, let n be a positive integer and let x(n)=1/n. Instructing a computer to compute x=(n+1)*x-1 should not change the value of x. Source code for this program, written in the Perl language, is given in Text 1.
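A sketch of that program, reconstructed here from the description that follows (the original Text 1 listing may differ in formatting and variable names), looks like this:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# For each n from 1 to 10, start with x = 1/n and repeatedly apply
# x = (n+1)*x - 1. With exact arithmetic this would leave x unchanged,
# since (n+1)/n - 1 = 1/n.
printf "%3s  %12s  %16s  %20s\n", "n", "x = 1/n", "after 10 iters", "after 30 iters";
for my $n ( 1 .. 10 ) {
    my $x0 = 1 / $n;

    my $x10 = $x0;
    $x10 = ( $n + 1 ) * $x10 - 1 for 1 .. 10;

    my $x30 = $x0;
    $x30 = ( $n + 1 ) * $x30 - 1 for 1 .. 30;

    printf "%3d  %12.8f  %16.8f  %20.3f\n", $n, $x0, $x10, $x30;
}
```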

This program will generate a table of numbers, with n varying from 1 to 10. The value x = 1/n is calculated for each value of n. Then the equation x = (n+1)*x - 1 is computed, first ten and then thirty times. Each time the equation is computed, the newly calculated value of x replaces the old value, so that the new value of x from each step is used as the input value of x for the next step. This process is called iteration. If the computer's internal representation of x were exact, the values produced by the second equation would not change.

The output from running the example program is shown in Text 2. The value of n is given in the first column and the initial value of x is in the second column. The values of x after 10 and 30 iterations are given in columns three and four, respectively. Notice that for some values of n the computed values of x do remain unchanged, but for others the results diverge—slightly after 10 iterations, then wildly after 30.

The reason that the x values for 1, 2, 4, and 8 do not diverge is that computers use binary arithmetic, representing numbers in base 2. The rows that did not diverge are those where n is an integer power of 2, so that the starting value x = 1/n (1, 0.5, 0.25, 0.125) has an exact binary representation, and the iterative computation does not change the resulting values of x. The other starting values cannot be represented exactly, so the tiny representation error is amplified until the computation blows up. The same thing happens in any computer program that performs iterative computations, climate simulation models included.
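The rate of the blow-up can be written down directly. If the stored value of x begins as 1/n plus a small representation error ε, each pass through the formula multiplies that error by n+1:

$$x_0 = \frac{1}{n} + \varepsilon \;\Longrightarrow\; x_1 = (n+1)x_0 - 1 = \frac{1}{n} + (n+1)\varepsilon \;\Longrightarrow\; x_k = \frac{1}{n} + (n+1)^{k}\varepsilon .$$

For n = 10, an initial error near 10⁻¹⁶ is multiplied by 11³⁰, roughly 10³¹, after thirty iterations, which is why the last column becomes nonsense; for n = 1, 2, 4 and 8 the error ε is zero to begin with, so there is nothing to amplify.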

If all of these complicating factors (data errors, incomplete and erroneous models, non-linear model response, roundoff and representational error, and error propagation) are not daunting enough, different computer hardware can introduce different amounts of error for different arithmetic operations. This means that running a model on a Sun computer can yield different results than running it on an Intel-based computer or an SGI machine. To say the least, computer modeling is not an exact science.

What does all this mean for climate modeling? In an interview last year with the German news magazine Der Spiegel, the well-known German climatologist Hans von Storch said, "If things continue as they have been, in five years, at the latest, we will need to acknowledge that something is fundamentally wrong with our climate models."

Along similar lines, climatologist Judith Curry observed, "If the 20-year threshold is reached for the pause, this will lead inescapably to the conclusion that the climate model sensitivity to CO2 is too large."

Ross McKitrick, in an article in the Financial Times, puts the pause and its implications for climate modeling into perspective:

We will reach the 20 year mark with no trend in the satellite data at the end of 2015, and in the surface data at the end of 2017. With CO2 levels continuing to rise, it will at that point be impossible to reconcile climate models with reality and the mainstream consensus on how the climate system responds to greenhouse gases will begin breaking apart.

Most of the case for climate alarmism is based on the predictions of climate models. As we have seen, they are not good science, being at best poor representations of the natural climate system. They are plagued by inaccurate input data and doomed by error propagation to diverge wildly, even from the make-believe worlds they represent. Sadly, even ardent global warming advocates know the models are fatally flawed.

When asked in testimony before the US Congress, EPA Administrator Gina McCarthy could not answer this question: “If you take the average of the models predicting how fast the temperature would increase, is the temperature in fact increasing less than that or more than that?”

Her response was, "I cannot answer that question specifically." In other words, a woman who has declared CO2, a molecule essential to life on Earth, to be "carbon pollution" does not trust the models on which the whole climate catastrophe charade is based. If the head of the EPA does not believe the models, should you?

Newton's laws of motion fail when faced with objects traveling near the speed of light, or when taken to very small scales. These simple laws need the added complexity of Einstein's theory of relativity and quantum mechanics to do the job properly. So it is that the "basic physics" of climate models is found wanting. It simply is not sufficient to the task at hand.

It's not the models' fault; they were put to uses they never should have been, asked to generate answers that remain beyond their capabilities. The people to blame here are the climate scientists who seized upon modeling as an easy way to figure out one of the most complex problems ever to confront humanity. Instead of gaining knowledge, they have created fiction. Instead of uncovering truth, they have lied to the public. Instead of saving the planet, they have misled the world and themselves.

Be safe, enjoy the interglacial and stay skeptical.

More model failure

Max Planck Institute Confirms Warming Pause! …Scrambles To Explain Widespread Model Failure. See: http://notrickszone.com/2015/06/16/max-planck-institute-confirms-warming...

Western US Warming Data Found Wanting

From the "how crappy is your data?" department comes this report. In a recent study, University of Montana and Montana Climate Office researcher Jared Oyler found that while the western U.S. has warmed, recently observed warming in its mountains is likely not as large as previously supposed. His results, published January 9 in the journal Geophysical Research Letters, show that sensor changes have significantly biased temperature observations from the Snowpack Telemetry (SNOTEL) station network. From the paper's abstract:

Here we critically evaluate this network's temperature observations and show that extreme warming observed at higher elevations is the result of systematic artifacts and not climatic conditions. With artifacts removed, the network's 1991–2012 minimum temperature trend decreases from +1.16°C decade⁻¹ to +0.106°C decade⁻¹ and is statistically indistinguishable from lower elevation trends. Moreover, longer-term widely used gridded climate products propagate the spurious temperature trend, thereby amplifying 1981–2012 western U.S. elevation-dependent warming by +217 to +562%. In the context of a warming climate, this artificial amplification of mountain climate trends has likely compromised our ability to accurately attribute climate change impacts across the mountainous western U.S.

More here.

On the Incident Solar Radiation in CMIP5 Models

A new paper published in Geophysical Research Letters finds astonishingly large errors in the most widely used ‘state of the art’ climate models due to incorrect calculation of solar radiation and the solar zenith angle at the top of the atmosphere. To quote the authors:

Annual incident solar radiation at the top of atmosphere (TOA) should be independent of longitudes. However, in many Coupled Model Intercomparison Project phase 5 (CMIP5) models, we find that the incident radiation exhibited zonal oscillations, with up to 30 W/m2 of spurious variations. This feature can affect the interpretation of regional climate and diurnal variation of CMIP5 results.

The paper adds to hundreds of others demonstrating major errors of basic physics inherent in the so-called ‘state of the art’ climate models. Analysis on WUWT here. Paper abstract here.

Climate Models

The key word is model.
Change the input and change the result.
As long as you are forecasting into the future you can make the model show whatever you want it to show.
As far as Earth's climate is concerned, the sun will dictate our climate.
Currently, the sun has moved into a dormancy reminiscent of the Maunder Minimum.
It will be globally cooler for the next few years, CO2 levels notwithstanding.
The models are not even close to reality.

Actual temperatures

Doug,

What temperature is used to go into these models? The high for the day? The average of the high and low for the day? I live in the mountains, where the sun does not come up until almost an hour after official "sunrise" and goes down before official "sundown." A simple average would not give the true average unless it were the average over every second of the 24-hour day. Do the satellites record that way, or some other way? I'll bet few Earth stations are equipped to compute an average for each hour, let alone each second. So which is it, any or none?

Mike