Extinction, Climate Change & Modeling Mayhem

Climate and environmental scientists have become dependent on computer models in recent decades. The scientific literature and the popular press are filled with strident warnings of impending natural disasters, all predicated on the output of computer programs. The IPCC has solemnly predicted that climate change will drive thousands of species to extinction if anthropogenic global warming is not reined in. The coprophagous press has uncritically swallowed these computer-generated droppings, reporting conjecture as fact and possibilities as certainties. Even though the climate change faithful continue to blindly believe the IPCC predictions, at least some researchers are aware of the glaring flaws in their computer models.

In a perspective article in the November 6, 2009, issue of the journal Science, Kathy J. Willis and Shonil A. Bhagwat, both from the Oxford University Centre for the Environment, take a look at what they term some “novel conclusions” drawn from several recent modeling studies. Novel here is obfuscated science-speak for “not in line with the consensus view.” The studies in question all attempted to address the impact of climate change on biodiversity—the number and variety of species Earth's environment can sustain. Here is how the authors described their report:

Over the past decade, several models have been developed to predict the impact of climate change on biodiversity. Results from these models have suggested some alarming consequences of climate change for biodiversity, predicting, for example, that in the next century many plants and animals will go extinct and there could be a large-scale dieback of tropical rainforests. However, caution may be required in interpreting results from these models, not least because their coarse spatial scales fail to capture topography or "microclimatic buffering" and they often do not consider the full acclimation capacity of plants and animals. Several recent studies indicate that taking these factors into consideration can seriously alter the model predictions.

M. Luoto and R. K. Heikkinen, in their study of the predictive accuracy of bioclimatic envelope models, concluded that the topographic scale incorporated into a model's calculations had a significant impact on the model's accuracy. The study, “Disregarding topographical heterogeneity biases species turnover assessments based on bioclimatic models,” assessed models based on the relation between current climate variables and present-day species distributions, specifically models intended to predict the future distribution of 100 European butterfly species. They found that a model that included climate along with the range of elevations and other topographical variations predicted only half of the species losses in mountainous areas for the period from 2051 to 2080, compared with a climate-only model. The inclusion of elevation range as a factor increased the predictive accuracy for 86 of the 100 species.
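To make the point concrete, here is a toy sketch in Python of a bioclimatic envelope model fitted with and without within-cell elevation range as a predictor. Every number in it (temperatures, relief, thresholds) is synthetic and invented purely for illustration; this is not the Luoto and Heikkinen model, only the flavor of their comparison.

    # Toy bioclimatic envelope comparison: climate-only predictors versus climate
    # plus within-cell elevation range. All data are synthetic and illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_cells = 2000

    mean_temp = rng.normal(8.0, 3.0, n_cells)      # cell-mean temperature, deg C
    elev_range = rng.gamma(2.0, 150.0, n_cells)    # metres of relief within the cell

    # Assumed "truth" for the toy data: relief provides cooler pockets, so it
    # effectively lowers the temperature that limits the species.
    presence = ((mean_temp - 0.004 * elev_range) < 9.0).astype(int)

    m_climate = LogisticRegression(max_iter=1000).fit(mean_temp.reshape(-1, 1), presence)
    m_topo = LogisticRegression(max_iter=1000).fit(
        np.column_stack([mean_temp, elev_range]), presence)

    # Apply a uniform +3 deg C scenario and count projected losses in high-relief cells.
    warm = mean_temp + 3.0
    mountain_occupied = (presence == 1) & (elev_range > 500.0)
    lost_climate = mountain_occupied & (m_climate.predict(warm.reshape(-1, 1)) == 0)
    lost_topo = mountain_occupied & (
        m_topo.predict(np.column_stack([warm, elev_range])) == 0)

    print(f"Occupied high-relief cells:         {mountain_occupied.sum()}")
    print(f"Losses, climate-only model:         {lost_climate.sum()}")
    print(f"Losses, climate plus topography:    {lost_topo.sum()}")

The climate-only variant, lacking any notion of terrain, tends to write off more of the high-relief cells than the variant that knows about the relief. That is the qualitative pattern the study reports, not a reproduction of its numbers.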


The European peacock butterfly remains unaffected by climate change.

In another study, “Climate change and plant distribution: local models predict high-elevation persistence,” C. F. Randin et al. assessed the influence of spatial scale on predictions of habitat loss by species distribution models (SDMs). Their bioclimatic model attempted to predict the survival of alpine plant species in the Swiss Alps. When the model was run using 16 km by 16 km (10 mile by 10 mile) grid cells, it predicted a loss of all suitable habitats during the 21st century. When they changed the model's grid to a much finer 25 m by 25 m (80 ft by 80 ft) cell size, the same model predicted persistence of suitable habitats for up to 100% of the plant species. The authors attributed these differences to the failure of the coarser spatial-scale model to capture local topographic diversity, as well as the complexity of spatial patterns in climate driven by topography.
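The scale effect itself can be demonstrated in a few lines. The sketch below builds a single 16 km by 16 km cell out of 25 m sub-cells using invented elevations, an assumed lapse rate and an assumed thermal limit; it is not the Randin et al. model, just an illustration of how a coarse cell's mean climate can erase refugia that a fine grid still resolves.

    # One coarse 16 km cell versus the 25 m sub-cells inside it. Elevations, the
    # warmed baseline temperature and the 11 C thermal limit are all invented.
    import numpy as np

    rng = np.random.default_rng(1)

    n = 640                                           # 16 km / 25 m = 640 sub-cells per side
    elevation = 1200.0 + 800.0 * rng.random((n, n))   # metres, synthetic alpine relief
    lapse_rate = 6.5 / 1000.0                         # deg C lost per metre of altitude
    temp = 14.0 - lapse_rate * (elevation - 1200.0)   # assumed end-of-century temperature field

    limit = 11.0                                      # assumed thermal limit of an alpine plant
    coarse_suitable = temp.mean() < limit             # all a 16 km grid cell "sees" is the mean
    fine_fraction = (temp < limit).mean()             # what the 25 m grid resolves

    print(f"Coarse cell mean temperature: {temp.mean():.1f} C -> suitable: {coarse_suitable}")
    print(f"Fine grid: {fine_fraction:.0%} of sub-cells still suitable")

With these invented numbers the coarse cell averages out to "too warm," while roughly two-fifths of the fine-grid sub-cells sit high enough to remain suitable.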

These two studies suggest that habitat heterogeneity resulting from topographic diversity may be essential for species survival in a changing climate, but that is not the observation I find most important here. What I find revealing is that structural changes to a computational model can have such a dramatic impact on the model's results. Note that general circulation models (GCMs) operate on grid scales of tens or even hundreds of kilometers, leading most modelers to admit that they are not very good at predicting things like clouds, precipitation or land-cover changes (for a more detailed discussion of GCM programs see “Why Climate Modeling Is Not Climate Science”). Why not run all models with much finer grid scales if that would improve their predictive performance? The main problem is a lack of computer power.


IBM p690 cluster at the John von Neumann Institute for Computing.

GCMs already run on some of the world's largest supercomputers. Reducing a model's grid size from 100 km to 10 km results in a 100-fold increase in the number of calculations needed for a single time step, a jump too large for even the most powerful supercomputers. Reducing the grid to 10 m increases the computational burden by a staggering 100,000,000 times. There will be no computer hardware capable of supporting models at this scale for decades. It seems we are stuck with wonky models for the foreseeable future.
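The arithmetic behind those numbers is simple enough to check. The sketch below counts only the growth in horizontal cells; in practice the time step usually has to shrink along with the grid spacing, which makes the burden even larger.

    # Back-of-the-envelope scaling of model workload with horizontal grid spacing.
    def cost_factor(coarse_m, fine_m, dims=2, shrink_timestep=False):
        """Rough factor by which per-run workload grows when the grid is refined."""
        ratio = coarse_m / fine_m
        factor = ratio ** dims            # more cells to compute at every time step
        if shrink_timestep:               # stability usually also demands a shorter step
            factor *= ratio
        return factor

    print(cost_factor(100_000, 10_000))   # 100 km -> 10 km grid: 100x more work per step
    print(cost_factor(100_000, 10))       # 100 km -> 10 m grid:  100,000,000x
    print(cost_factor(100_000, 10_000, shrink_timestep=True))   # with a shorter step: 1,000x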

Factors omitted from climate models, often to reduce code complexity and the number of necessary calculations, can also significantly alter a model's results. Highly variable, often contradictory predictions have been obtained when modeling tropical ecosystems. Many studies have indicated that increased atmospheric CO2 affects photosynthesis rates and enhances net primary productivity, yet previous climate-vegetation simulations have not taken this into account. A study examining the “plant food effect” of carbon dioxide was reported in an article, “Exploring the range of climate biome projections for tropical South America: The role of CO2 fertilization and seasonality,” in the July 3, 2009, issue of Global Biogeochemical Cycles. D. M. Lapola et al. developed a new model for tropical South America that included the effects of elevated CO2 levels on vegetation. Contrary to the consensus view—that global warming will damage the world's rainforests—they found that fertilization effects overwhelm the negative impacts arising from rising temperature.
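For readers wondering what a “fertilization effect” looks like inside a model, one common and very simplified textbook parameterization is a logarithmic “beta factor” response of net primary productivity (NPP) to CO2. The sketch below uses an illustrative beta of 0.5 and round numbers; it is a generic formulation, not anything taken from the Lapola et al. model.

    # Generic logarithmic "beta factor" CO2 fertilization response, for illustration.
    import math

    def npp_with_fertilization(npp_ref, co2_ppm, co2_ref_ppm=280.0, beta=0.5):
        """Scale reference NPP by a logarithmic CO2 response (illustrative beta)."""
        return npp_ref * (1.0 + beta * math.log(co2_ppm / co2_ref_ppm))

    npp_ref = 1000.0   # g C per square metre per year, an illustrative tropical value
    for co2 in (280, 380, 560, 700):
        print(f"CO2 = {co2:3d} ppm -> NPP ~ {npp_with_fertilization(npp_ref, co2):4.0f} g C/m2/yr")

Whether, and by how much, such a response offsets heat and drought stress is exactly the kind of assumption that swings a tropical vegetation model from die-back to verdant growth.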


Verdant rainforests flourish with rising carbon dioxide levels.

Lapola et al. drove their new model with many different climate scenarios predicting conditions at the end of the 21st century, scenarios generated by 14 coupled ocean-atmosphere GCMs from the IPCC Fourth Assessment Report. One of the widely reported side effects of global warming is supposed to be the decimation of the world's tropical rainforests. Rather than the large-scale die-off of tropical plant life predicted previously, tropical rainforests remained the same or became wetter and even more productive in the new, more detailed model. Again, from predicted destruction to even more verdant growth—yet we are asked to believe that the old models are accurate.

Not all climate science researchers are blind to the problems and limitations of their computer software playthings. Admitting this in public is another matter, however. As reported in the Washington Times, Kevin E. Trenberth, head of the Climate Analysis Section at the National Center for Atmospheric Research and a prominent man-made-global-warming advocate, in a private email wrote: “The fact is we can't account for the lack of warming at the moment and it is a travesty that we can't.”

While we are on the subject of the failings of climate modeling and the unsupportable claims made by the IPCC and its associates, I cannot help but mention the revelations brought forth by the leaked “climategate” emails. As reported by Marc Morano on the Climate Depot website, these highly disturbing emails show how Dr Philip Jones and his colleagues at the University of East Anglia's Climatic Research Unit (CRU) have for years been discussing devious tactics to bias climate data and avoid releasing the raw data to outsiders under freedom of information laws.

The senders and recipients of the leaked CRU emails constitute a list of the IPCC's scientific elite, such as Dr Michael “hockey stick” Mann, whose graph turned climate history on its head 10 years ago by eliminating the Medieval Warm Period and the Little Ice Age; Dr Jones and his CRU colleague Keith Briffa; Ben Santer, responsible for a highly controversial rewriting of key passages in the IPCC's 1995 report; Kevin Trenberth, who pushed the IPCC into scaremongering over hurricane activity; and Gavin Schmidt, right-hand man to Dr James Hansen, whose own GISS record of surface temperature data is second in importance only to that of the now suspect CRU.

Not satisfied with hoarding their data and manipulating the results, the conspirators acted to suppress open scientific debate by discrediting and freezing out any scientific journal that dared to publish their critics’ work. Pat Michaels, a climate scientist at the Cato Institute, told The Wall Street Journal: “This is what everyone feared. Over the years, it has become increasingly difficult for anyone who does not view global warming as an end-of-the-world issue to publish papers. This isn't questionable practice, this is unethical.”

As I have said previously on this website, when the lengths to which global warming extremists have gone to “support” their case become known, many scientific reputations will be ruined and climate science itself will be cast into disrepute. I had not expected the truth to be revealed so quickly or so dramatically. It is tragic, even criminal, that the mendacity of a number of bad scientists has so poisoned the debate about climate change and its possible implications. Reaction to the scandal ranges from disgust to calls for criminal prosecution and, sadly, even total denial by climate change true believers.

As reported on this blog, there are a large number of serious, dedicated scientists doing good work studying Earth's climate and ecology. These scientists understand the limited scope of mankind's understanding, the lack of accurate historical data, and, above all else, the limitations of computer models. After posting “Global Warming Predictions Invalidated,” I received correspondence from a number of TRE readers asking why I was so dismissive of modeling results. Their argument was that surely—even though myriad corrections and new factors need to be added to existing models—the old models could still be considered valid. Including new factors would simply make the models more accurate, their argument went. WRONG!

Computer models are highly non-linear: a seemingly insignificant change to a model's grid scale, time step or a coefficient in an equation can have a dramatic impact on the model's output. As was demonstrated by Randin et al., changing only the grid size of a model can change the result from a loss of all plant habitats to the survival of all affected plant species. From everything dies to everything lives, without changing any of the basic assumptions present in the model.
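For anyone who wants to see non-linear sensitivity in miniature, the classic logistic map makes the point in a dozen lines. It is a toy, not a climate model, but a roughly two-percent change in one coefficient flips the long-run behaviour from a tidy repeating cycle to chaos.

    # The logistic map x(n+1) = r * x(n) * (1 - x(n)): a toy non-linear system.
    def long_run(r, x0=0.2, burn_in=1000, keep=6):
        """Discard a burn-in period, then return the next few iterates."""
        x = x0
        for _ in range(burn_in):
            x = r * x * (1.0 - x)
        tail = []
        for _ in range(keep):
            x = r * x * (1.0 - x)
            tail.append(round(x, 4))
        return tail

    print("r = 3.50:", long_run(3.50))   # settles into a regular repeating cycle
    print("r = 3.57:", long_run(3.57))   # ~2% change in r: chaotic, no stable cycle

Climate models contain thousands of coupled, non-linear relationships of this general character, which is why changes to grid scale or coefficients can propagate through the results so unpredictably.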


No IPCC climate model predicted the temperature trend over the past decade.

Now consider the “travesty” of the IPCC's models not being able to predict the past decade's lack of climate warming, and the truth about computer models should begin to sink in: they are not reliable predictors of real-world climate change. The only reason the old models gave answers that appeared “correct” is because those were the answers that the modelers expected. It is easy to deceive yourself when you have a billion-dollar computer model to manufacture lies you already believe in. The data were “adjusted” and the models tweaked and tuned until they predicted the future their creators desired, not the future that nature would produce. I reiterate, whether trying to predict temperatures or extinctions, none of the IPCC's computer model projections are valid.

Be safe, enjoy the interglacial and stay skeptical.

Scaling Truths

As my 'name' would indicate, I am a 'former' weather forecaster, circa the early 1970s, during an 8-year stint in the USAF. It was interesting to read the bio information on both Doug & Allen -- especially since I was one of the very first field users of Allen's work on the Tiros & Nimbus satellites.

At last, we were able to view actual macro data that revealed more than hand-plotting of teletype reports on a synoptic map and hand analysis ever could. Being able to finally correlate actual satellite data with plotted synoptic map analyses was a big breakthrough. Can one imagine trying to forecast weather on the West Coast of the US relying on just a handful of ship reports over the Pacific Ocean without satellite images/data?

As an aside, our 'Final' test for graduation from Weather Forecasting school back then was to plot a ~2" x 4" Tornado Severe Weather Warning area over a 4-state map sized ~48" x 48", based only on teletype weather reports for the previous 7 days. Graduation required one to have at least one (1) confirmed tornado occurring within one's warning area (0.3% of total mapped area); the canned data and occurrences were taken from actual weather reports of a tornadic outbreak in previous years.

More to the posting's point, though, I can attest from firsthand knowledge and experience to how grid scale can hide or reveal scientific truths. After all, "...The difference between weather and climate is a measure of time. Weather is what conditions of the atmosphere are over a short period of time, and climate is how the atmosphere "behaves" over relatively long periods of time..." http://www.nasa.gov/mission_pages/noaa-n/climate/climate_weather.html

Two specific cases I have personally experienced and was able to document through required 'Bust Analyses' of my local weather forecasts --

1. Flight weather forecasts required wind warnings whenever winds were expected to exceed a specific threshold. Multiple wind-forecast 'busts' were later determined to be due to a low-lying range of hills ~30 miles upwind of the base and the placement of the wind anemometer sensors.

Within a certain speed range, the winds would cause a 'shadow effect' wherein the high winds 'overshot' the downwind sensor. Thankfully, we were able to use Pilot Reports on final approach as verification of why the wind recorders did not measure the true wind speeds that were forecast.

2. My bust of a local weather forecast calling for continued hot temps and clear skies with no thunderstorms when, in fact, we later had a severe thunderstorm that resulted in a tornado at a local gunnery range while pilots were on site in the afternoon.

In this case, there was only one isolated radiosonde (upper air) report ~800 miles to the northeast (and 'thought' to be downwind) reflecting a mass of mid-level moisture that, had it migrated to our locale, would have been sufficient to warrant a severe thunderstorm warning that afternoon.

While these isolated and historical incidents are far more on the micro/local weather scale of scope and time than global climate questions, they do serve, IMHO, to reinforce that small hidden/missed/unknown details can have profound significance.

Thus, the above posting and the modeling results showing that 'small' issues such as grid scale can have potentially 'large' effects on the modeled outcome should serve as a clear warning to those who are ready to declare that '...the debate on AGW, and the underlying science, is over'...