Can You Vague That Up For Me?

Acknowledging that today's supercomputers lack the computational power to successfully model Earth's climate system, a climate modeler suggests that climate models would benefit from running on computers whose calculations are less exact. “In designing the next generation of supercomputers, we must embrace inexactness if that allows a more efficient use of energy and thereby increases the accuracy and reliability of our simulations,” says Tim Palmer, a Royal Society research professor of climate physics and co-director of the Oxford Martin Programme on Modelling and Predicting Climate at the University of Oxford, UK. This is nothing more than grasping for excuses to explain the dismal performance of the current crop of climate model simulations. There is an old saying: a good craftsman never blames his tools for a bad result. Evidently climate modelers are not even close to being craftsmen.

Anyone who has been paying attention to the climate change follies for the past several decades could not help but notice that climate models suck. The computerized monstrosities are seemingly incapable of coming up with a temperature forecast that comes anywhere close to reality. Billions of dollars have been spent on developing these software chimeras, initially based on a combination of atmospheric and ocean circulation models. Since their inception, climate modelers have added more factors to the models like a student stuffing dirty clothes into a bag before returning home on holiday. As can be seen from the graph below, this has all been to no avail.

The frustration among those who work on such models seems to be running high these days and one such modeling maven, Dr. Tim Palmer, a physicist at the University of Oxford, has had a brainstorm – he wants to make computers less exact. “We must move beyond the idea of a computer as a fast but otherwise traditional 'Turing machine', churning through calculations bit by bit in a sequential, precise and reproducible manner,” he writes in a recent article in the journal Nature, “Modelling: Build imprecise supercomputers.” Dr. Palmer seems to have forgotten that people use computers to crunch numbers in a precise and reproducible manner primarily because humans can't do so. No matter, he seems to think what climate models need is a good dose of imprecision.

In particular, we should question whether all scientific computations need to be performed deterministically — that is, always producing the same output given the same input — and with the same high level of precision. I argue that for many applications they do not.

Energy-efficient hybrid supercomputers with a range of processor accuracies need to be developed. These would combine conventional energy-intensive processors with low-energy, non-deterministic processors, able to analyse data at variable levels of precision. The demand for such machines could be substantial, across diverse sectors of the scientific community.

“[I]t is time for researchers to reconsider the basic concept of the computer,” he gushes. I hate to tell the doctor this, but imprecision leads to error propagation, and error propagation leads to divergence in models. This is like saying “We are getting really crappy answers from our computer. I know, let's make it less precise! That will fix it!” Wrong.

As noted in the article, computer climate models predict Earth's future climate by solving partial differential equations (PDEs) for fluid flow in the atmosphere and oceans. In mathematical terms, a PDE is any equation involving a function of more than one independent variable and at least one partial derivative of that function. Given a function u = u(x,y), a PDE in u is classified as linear if all of the terms involving u and any of its derivatives can be expressed as a linear combination in which the coefficients of the terms are independent of u. In other words, in a linear PDE the coefficients can depend at most on the independent variables. Throw in some other terms or complications and the equation is called nonlinear. For example, the first equation below is linear, the second nonlinear:
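
Two standard textbook examples serve for illustration here: the one-dimensional heat equation and the inviscid Burgers' equation.

$$\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^2 u}{\partial x^2} \qquad \text{(linear)}$$

$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = 0 \qquad \text{(nonlinear)}$$

In the first, the coefficient α is independent of u; in the second, the coefficient multiplying ∂u/∂x is u itself, which is exactly what makes it nonlinear.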

Mathematical technicalities aside, the thing to note is that nonlinear equations are a real pain. While linear PDEs can often be solved exactly using techniques such as separation of variables, superposition, Fourier series, and Laplace, Fourier, or other integral transforms, the same is not true of nonlinear PDEs. They generally require a numerical approach, and so it is with the equations in climate models. An overview of PDEs and how they are approximated by difference equations can be found in “A Review of Numerical Methods for Nonlinear Partial Differential Equations.”
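
To show what “a numerical approach” looks like in practice, here is a minimal sketch (a toy, not climate-model code) of an explicit upwind finite-difference scheme for the inviscid Burgers' equation above; the grid spacing and time step are illustrative choices only.

```python
import numpy as np

# Minimal explicit finite-difference sketch for the inviscid Burgers' equation
# du/dt + u du/dx = 0 on a periodic domain. Grid spacing and time step are
# illustrative choices only, not values any climate model would use.
nx, nt = 200, 400
dx = 2.0 * np.pi / nx
dt = 0.002
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
u = np.sin(x) + 1.5                  # smooth initial condition with u > 0 everywhere

for _ in range(nt):
    dudx = (u - np.roll(u, 1)) / dx  # upwind difference, valid here because u > 0
    u = u - dt * u * dudx            # forward-Euler update of the nonlinear term

print("mean:", round(float(u.mean()), 4), "max:", round(float(u.max()), 4))
```

Note the tradeoff baked into schemes like this: halving the grid spacing doubles the number of points and, through the stability (CFL) limit, roughly halves the usable time step, so the cost grows much faster than the resolution.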

Suffice it to say that the less accurate your computer's calculations, the faster your answers diverge from reality. In a 3D simulation like a climate model, this depends not just on the computer's internal representation of numbers but also on the size of the grid the world is chopped up into. The larger the chunks, the less accurate the simulation; the smaller the chunks, the more calculations needed for each time step. It's a tradeoff between resolution and speed. More simply put, with a big grid you miss the small things, like mountains and clouds, allowing you to more quickly compute the wrong answer.
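
A toy illustration of that divergence, assuming nothing fancier than NumPy: iterate a chaotic recurrence (the logistic map, standing in here for a real model) in single and double precision and watch the two answers part company.

```python
import numpy as np

# Toy demonstration of precision-driven divergence in a chaotic recurrence
# (the logistic map with r = 4). Not a climate model, just an illustration
# that tiny rounding differences grow until the two trajectories decorrelate.
r = 4.0
x64 = np.float64(0.2)
x32 = np.float32(0.2)

for step in range(1, 61):
    x64 = np.float64(r) * x64 * (np.float64(1.0) - x64)
    x32 = np.float32(r) * x32 * (np.float32(1.0) - x32)
    if step % 10 == 0:
        print(f"step {step:2d}: float64 = {x64:.6f}  float32 = {float(x32):.6f}  "
              f"difference = {abs(x64 - np.float64(x32)):.2e}")
```

After a few dozen iterations the single- and double-precision trajectories bear no relation to each other.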

Current climate simulators – typically with grid cells 100 kilometers in width – can resolve large weather systems typical of mid-latitudes, but not individual clouds. Yet it has been discovered that modeling cloud systems accurately is crucial for reliable calculation of future global temperature (at least, that is what modelers are currently blaming their lack of success on). Even then, clouds on scales smaller than a grid cell will still have to be approximated, or parametrized, using simplified equations. “Errors introduced by such parametrizations proliferate and infect calculations on larger scales,” Palmer laments.
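
To make “parametrized” concrete, here is a sketch of one widely cited textbook-style scheme, a Sundqvist-type cloud fraction, with an illustrative critical humidity that is not taken from any particular model: cloud cover in a grid cell is estimated from nothing more than the cell-mean relative humidity.

```python
import numpy as np

# Sketch of what a "parametrization" is: a simplified formula standing in for
# physics the grid cannot resolve. This is a Sundqvist-type cloud-fraction
# scheme; the critical humidity is an illustrative value, not any model's.
def cloud_fraction(rh, rh_crit=0.8):
    rh = np.clip(rh, 0.0, 1.0)
    c = 1.0 - np.sqrt((1.0 - rh) / (1.0 - rh_crit))
    return np.clip(c, 0.0, 1.0)

for rh in (0.7, 0.8, 0.9, 0.95, 1.0):
    print(f"relative humidity {rh:.2f} -> cloud fraction {cloud_fraction(rh):.2f}")
```

Simplifications of this kind are what Palmer means by parametrization, and the errors they introduce are the ones he says “proliferate and infect calculations on larger scales.”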

Since it is estimated that supercomputers won't become fast enough to get remotely accurate simulation results for another decade, it is unsurprising that modelers are casting about for something to do in the meantime. Palmer has hit upon the idea that dispensing with the need for all that pesky accuracy would make his job easier. To this end he envisions a new type of supercomputer:

Like current ones, such a machine would be massively parallel, comprising many millions of individual processing units. A fraction of these would enable conventional, energy-intensive and deterministic high-precision computation. Unlike conventional computers, the remaining processors would be designed to take on low-energy probabilistic computation with lower-precision arithmetic, the degree of imprecision and inexactness being variable.

Such a computer would be able to do a better job of nonlinear simulations and use less power to boot, the author claims. There are a few wrinkles in Dr. Palmer's scheme, however.

The first is that what he is calling noise has another name – error, and error is the bane of all computation. When he says he wishes to reduce the accuracy of the calculations in some areas, he is simply asking the hardware to do something the model writers could do today, albeit with a massive rewrite of their code. This is akin to not liking the balance in your bank account and deciding to keep less accurate records in an attempt to solve the problem.
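
For what it's worth, reduced precision can already be emulated in software today, no exotic hardware required; a minimal sketch, assuming half precision (float16) stands in for the “inexact” arithmetic:

```python
import numpy as np

# Minimal sketch of emulating reduced-precision arithmetic in software:
# round every intermediate result to half precision (float16) and compare
# the accumulated total against ordinary double precision.
def half_precision_sum(values):
    total = np.float16(0.0)
    for v in values:
        total = np.float16(total + np.float16(v))  # round after every operation
    return float(total)

values = np.full(10_000, 0.1)
print("float64 sum:", values.sum())                # close to the true value, 1000
print("float16 sum:", half_precision_sum(values))  # falls far short of 1000
```

The half-precision total stalls far short of the correct answer because, once the running sum is large enough, each added 0.1 rounds away to nothing. That is exactly the kind of error propagation the proposed hardware would have to keep under control.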

Second, his goal is to make the simulation results non-deterministic, which is to say, not reproducible. How do you know when you've gotten the magic right answer when the computation yields a different result each time? Considering that reproducibility is one of the founding principles of experimental science, this idea is the height of lunacy.

Today, because the results of individual models and parameter sets are so undependable, climate modelers have adopted the convention of forming “ensembles.” These are collections of many model runs, averaged together in the vain hope that adding up several hundred wrong results will somehow yield a correct one (it doesn't).

I know from experience that building a custom computer, which is what such a variable-accuracy, massively parallel computer would be, takes four or five years from conception to realization. In that amount of time, the next two generations of “normal” computer hardware will have been delivered. Palmer frets that coming supercomputers will use entirely too much power – rest assured, the engineers are working on that. Before this pipe dream could be realized, the world will have moved on.

The bottom line: this nut-ball idea is a ridiculous suggestion. It is simply an expression of exasperation by a researcher who finds his tools lacking. Palmer wishes for a magic imprecise, probabilistic miracle machine that, by being less accurate, would somehow yield the accurate results that have eluded climate modelers. Things are evidently getting desperate out there in climate alarmist cuckoo land.

Be safe, enjoy the interglacial and stay accurate.

Weather != climate

Couple points:

"The first is that what he is calling noise has another name – error, and error is the bane of all computation. When he says he wishes to reduce the accuracy of the calculations in some areas he is simply asking the hardware to do something the model writers could do today, abet with a massive rewrite of their code. This is akin to not liking the balance in your bank account and deciding to keep less accurate records in an attempt to solve the problem."

Much of the hardware that Prof. Palmer is advocating for doesn't exist yet. At least, not off the shelf anyway.

"Second, his goal is to make the simulation results non-deterministic, which is to say, not reproducible. How do you know when you've gotten the magic right answer when the computation yields a different result each time? Considering that reproducibility is one of the founding principles of experimental science this idea is the height of lunacy."

You're talking about a different kind of reproducibility from the one Prof. Palmer means. Bit-reproducibility between climate models is simply not possible, because different models have different resolutions, choices of parameters, etc. Even for a single model, you might get different results from the same initial conditions if it uses random numbers somewhere in the code. What matters is reproducibility of the climate statistics. Prof. Palmer is suggesting that less precise arithmetic will allow higher-resolution models that will, perhaps unintuitively, actually improve the representation of climate statistics through better resolution of clouds and so on.
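
A sketch of that distinction, using a toy AR(1) process in place of a climate model (the process and its parameters are made up purely for illustration): two runs with different random seeds are not bit-reproducible, yet their long-run statistics agree.

```python
import numpy as np

# Toy stand-in for "reproducibility of climate statistics": two runs of a
# noisy AR(1) process with different random seeds. Individual values differ
# run to run, but the long-run mean and variability are effectively the same.
def run_member(seed, n=50_000, phi=0.9):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

a = run_member(seed=1)
b = run_member(seed=2)
print("first values differ:", a[:3], b[:3])
print("means agree:        ", round(a.mean(), 2), round(b.mean(), 2))  # both near 0
print("std devs agree:     ", round(a.std(), 2), round(b.std(), 2))    # both near 2.3
```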

"Today, because the results of individual models and parameter sets are so undependable, climate modelers have adopted the convention of forming “ensembles.” These are collections of many model runs, averaged together in the vain hope that adding up several hundred wrong results will some how yield a correct one (it doesn't)."

The primary purpose of ensembles is to provide a measure of the uncertainty of a forecast. If 900 out of 1000 model simulations say there will be a tropical cyclone over Florida in 4 days' time, there will probably be one. That the ensemble average is sometimes more accurate than a single simulation is a side effect.
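
A minimal sketch of that use of an ensemble, with entirely made-up numbers: each member produces a forecast, and the fraction of members exceeding a threshold is read off as a probability.

```python
import numpy as np

# Toy ensemble-as-uncertainty sketch with made-up numbers: each "member" is a
# forecast of some quantity (say, peak wind speed in m/s); the fraction of
# members exceeding a threshold is reported as an event probability.
rng = np.random.default_rng(42)
members = rng.normal(loc=30.0, scale=6.0, size=1000)   # 1000 hypothetical forecasts
threshold = 25.0                                        # e.g. a "damaging wind" threshold

prob = np.mean(members > threshold)
print(f"ensemble probability of exceeding {threshold} m/s: {prob:.0%}")
print(f"ensemble mean: {members.mean():.1f} m/s, spread (std): {members.std():.1f} m/s")
```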

Basically, Palmer is actually admitting the great uncertainty surrounding the science, and suggesting that we don't need all this precision - it's simply wasted. After all, if you don't know the value of some atmospheric parameter to more than 2 decimal places, why store it in a variable with 12?
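
To put rough numbers on that, here is a quick look (pure illustration, using NumPy's type metadata) at how many decimal digits each common floating-point width actually carries and what each value costs to store.

```python
import numpy as np

# Decimal digits carried by each IEEE floating-point width, and the storage
# cost per value. A parameter known to ~2 decimal places arguably does not
# need a 15-digit container.
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__:>8}: ~{info.precision:2d} decimal digits, "
          f"{info.bits // 8} bytes per value")
```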

Bloomberg News Develops Mann Hockey Stick II

An example of blatant dishonesty in using graphics

I commend you, Doug, for coming up with another great post just before the Paris climate talks.

The link below is to a graphics report by Bloomberg Business News, sent to all CEOs, claiming proof that should quell all doubters.

http://www.bloomberg.com/graphics/2015-whats-warming-the-world/?utm_sour...

In 2003, Professor Weaver of the University of Victoria climatology department chaired a quasi-public meeting in Ottawa presenting the results of 24 model projections from around the world. He was the Canadian rep on the UN IPCC (Intergovernmental Panel on Climate Change). The graphs were all over the board, like in Doug's illustration above. He said he discarded 8 of the outliers. I asked, "On what basis can you discard the 8 outliers? They are supposed to be derived from a deterministic analysis. How do you know that the one showing the least rise in temperature is not the correct one?" It just so happens that it turned out to be the most accurate, though it still overstated the temperature rise.

In response to Bloomberg’s assessment that fewer climate stations are better, I reference

Dr. Judith Curry, mentioned many times in this blog, who presented the following report to the congressional science committee. Dr. Curry is an esteemed climatologist who is respected throughout the world. She is an objective scientist who is not aligned with any particular group, and she thoroughly reviews the reliability of any climate data used in analysis. The graphs presented, using data from a number of organizations, illustrate a hiatus in global warming during the last 21 years even with increasing CO2 emissions. She does not like the moves made by NOAA to eliminate climatic ground stations, especially any coverage of the Arctic regions. A number of climatologists are concluding that the satellite sounding temperature records are the most reliable.

Many scientists signed a petition led by this blog for Obama to name Dr. Curry as the US nominee to replace the discredited chair of the UN IPCC (Intergovernmental Panel on Climate Change). Obama, instead, nominated an environmentalist.

http://judithcurry.com/2015/11/06/hiatus-controversy-show-me-the-data/#m...

Don Farley, Gatineau, Quebec, Canada

On Navier-Stokes and Initial Conditions

The CMIP effort clearly relies on two elements which seem to be conveniently swept under the carpet:

1. The Navier-Stokes equations, which describe the fluid dynamics and thermodynamics across the boundary layer (i.e., our atmosphere), can only be approximated using numerical methods – they cannot be 'solved', as they are non-linear PDEs as you describe – and the accuracy of the approximation is VERY dependent upon both the grid cell size and the time steps used for the calculation. The global circulation models are 8 orders of magnitude out in terms of grid cell size, and another several orders of magnitude out in terms of time step, when compared to common engineering use of CFD to correctly approximate this fluid dynamics and thermodynamics problem.
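
A back-of-the-envelope calculation (illustrative numbers only, not the commenter's exact figures) of how the work grows as the grid is refined, which is where such orders of magnitude come from:

```python
# Back-of-the-envelope scaling, illustrative numbers only. Refining a global
# grid from 100 km to 1 km cells multiplies the horizontal cell count by
# 100^2 and, because an explicit scheme's stable time step shrinks with the
# grid spacing (the CFL condition), the number of time steps by ~100, for
# roughly a factor of 10^6 more work per simulated year.
EARTH_SURFACE_KM2 = 5.1e8   # approximate surface area of the Earth

for dx_km in (100.0, 10.0, 1.0):
    cells = EARTH_SURFACE_KM2 / dx_km**2      # horizontal cells only
    step_factor = 100.0 / dx_km               # time steps relative to a 100 km grid
    print(f"dx = {dx_km:5.0f} km: ~{cells:.1e} horizontal cells, "
          f"~{step_factor:.0f}x the time steps of the 100 km grid")
```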

2. Even with the correct scale of grid cell and time step, the climate is a chaotic and closely coupled non-linear system, and so projections into the future are wholly dependent upon the initial conditions (IC) of the system – which we do not have. A single change to an IC state in one grid cell can result in a completely different outcome as integration errors drift and interact with the system dynamics over time. In advanced engineering, the simulation's sensitivity to these ICs is closely monitored and used to interpret the resulting calculation – it would inspire confidence if the CMIP effort were to publish the same.
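
The standard toy demonstration of that sensitivity is the Lorenz (1963) system, sketched below with a simple fixed-step RK4 integrator (a teaching example, not a claim about any CMIP model): perturb one initial value in the tenth decimal place and the two trajectories eventually bear no resemblance to each other.

```python
import numpy as np

# Classic sensitivity-to-initial-conditions demo: the Lorenz (1963) system,
# integrated with a simple fixed-step RK4 scheme. A perturbation of 1e-10 in
# one initial coordinate grows until the two trajectories decorrelate.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt):
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-10, 0.0, 0.0])   # tiny perturbation to one initial value

for step in range(1, 4001):           # 40 model time units
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 1000 == 0:
        print(f"t = {step * dt:4.0f}: separation = {np.linalg.norm(a - b):.3e}")
```

At early times the separation is still microscopic; by the end of the run it is as large as the attractor itself, which is the commenter's point about initial conditions.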

Thank you, Doug

Doug

Thank you for confirming for me what garbage this notion is. I am nowhere near qualified to assess the particulars of something like this (higher maths, computer modeling, etc.), but this really set off alarm bells for me on a common-sense basis:

"In particular, we should question whether all scientific computations need to be performed deterministically — that is, always producing the same output given the same input — and with the same high level of precision."

What a load of crap. It's like some post-modern philosophy that is just absurd.

So it's good to hear from someone qualified to assess this on a technical basis. So often we see this kind of crap from so-called experts who just ramble off a bunch of nonsense using academic jargon meant to buffalo the public, who just shrug and say, "well, it must be right, the scientist said so."

Computer Models

The supercomputers used for climate modelling ... they're super expensive ... they're complex ... they make 1,000 billion calculations a second ... they have a carbon footprint the size of a small town.

The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry, and the biology of fields and farms and forests. They do not begin to describe the real world that we live in.

In short, climate models are prone to the well-known problem of "garbage in, garbage out".

Garbage models

It doesn't matter that the input is garbage when the models themselves are garbage.

Garbage Models

Yes ... quite right that the models are garbage. Imagine what it says about all those people who actually believe in these garbage models!