Supposedly, human activity is responsible for the detected rise in atmospheric CO2 levels over the past century. But do we really know where gas emissions come from and how great they are? As it turns out, greenhouse gas emissions are measured using statistical data without testing the results against the actual increases of these gases in the atmosphere. Regardless, climate change alarmists insist that human emissions must be reduced. A revealing perspective article in the June 4, 2010, issue of Science states that “this is like dieting without weighing oneself.” Currently, science is only guessing at where CO2 emissions come from.
Scientists are coming to the realization that claims about greenhouse gas emissions can have integrity only if verified by direct atmospheric measurements. Emissions data are produced by greenhouse gas emitters of all sizes—farms, factories and entire nations. These emissions are often quoted with high precision but, as Euan Nisbet and Ray Weiss state in their article, “misreporting still occurs, whether by simple error, ignorance, or intention.” They note that carbon-equivalent emissions are currently assessed by “bottom-up” methods, which are built up from a variety of local statistics such as fuel consumption or numbers of cows. When these “bottom-up” estimates are rolled up to a global scale, the totals can disagree with direct atmospheric measurements by factors of two or more.
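The mismatch Nisbet and Weiss describe can be pictured with a toy calculation. The sketch below is purely illustrative: the source categories and every number in it are made up, and the point is only the mechanics of rolling up local statistics and comparing the total against a single atmosphere-derived figure.

```python
# Toy "bottom-up" inventory rollup vs. a "top-down" atmospheric estimate.
# All categories and numbers are hypothetical, for illustration only.

# Bottom-up: local statistics, in made-up megatonnes of CO2-equivalent.
local_inventories = {
    "fuel_combustion": 120.0,  # estimated from fuel sales
    "livestock":        35.0,  # estimated from herd counts
    "industry":         60.0,  # self-reported by facilities
}
bottom_up_total = sum(local_inventories.values())

# Top-down: one estimate inferred from atmospheric measurements
# (again, a made-up number).
top_down_estimate = 390.0

discrepancy_factor = top_down_estimate / bottom_up_total
print(f"Bottom-up total:    {bottom_up_total:.1f} Mt CO2e")
print(f"Top-down estimate:  {top_down_estimate:.1f} Mt CO2e")
print(f"Disagreement factor: {discrepancy_factor:.2f}")
```

With these invented figures the two approaches differ by nearly a factor of two — exactly the kind of gap the article says real inventories can show.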
How can you control GHG emissions when you cannot accurately identify their sources? And how can you blame the rise in atmospheric CO2 solely on humanity if you cannot reconcile actual emissions with atmospheric measurements? The answer is that you cannot. To try to shore up the case for emissions control—including all those calls for “cap and trade” and a carbon tax—the authors want to establish a global network to provide a “top-down” assessment of anthropogenic emissions.
Carrying out a “top-down” assessment, that is, using atmospheric understanding to quantify emissions, requires an approach that integrates several methods. First, the atmosphere must be measured at high spatial and temporal resolution via networks of ground-based stations and aircraft. Second, remote sensing is needed, both from satellites to give global coverage and from the ground to calibrate the satellite data. Third, modeling synthesizes the results and assesses budgets. As the data collection network and data interpretation through modeling improve, we can begin to envision their use to test and validate bottom-up inventories.
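One way to picture the "synthesis" step is a simplified data-combination exercise. The sketch below is not the authors' method; it is a minimal stand-in, assuming three independent estimates (ground network, satellite, model) with stated uncertainties, merged by inverse-variance weighting. All values are hypothetical.

```python
# Minimal sketch: combine independent emission estimates by
# inverse-variance weighting. A toy stand-in for the far more
# elaborate data assimilation a real network would need.
# All numbers are hypothetical (value, uncertainty) in Mt CO2e.

estimates = {
    "ground_network": (400.0, 20.0),
    "satellite":      (430.0, 40.0),
    "model":          (410.0, 30.0),
}

# Weight each estimate by 1/sigma^2: tighter uncertainty, more influence.
weights = {k: 1.0 / (sigma ** 2) for k, (_v, sigma) in estimates.items()}
total_w = sum(weights.values())

combined = sum(weights[k] * v for k, (v, _s) in estimates.items()) / total_w
combined_sigma = (1.0 / total_w) ** 0.5

print(f"Combined estimate: {combined:.1f} +/- {combined_sigma:.1f} Mt CO2e")
```

The combined uncertainty comes out smaller than any single input's, which is the whole appeal of integrating multiple observation methods rather than relying on one.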
Such a network is much easier to envision than to actually build, and interpreting the data collected by such a network is even harder. Most emissions sources and many sinks are on or near the ground, in what scientists call the atmospheric boundary layer. While gases in the atmosphere soon become “well mixed” and fairly uniform world-wide, local conditions can vary greatly.
For example, the average global CO2 level is currently about 388 parts per million. But this can be significantly reduced in some locations by the springtime growth spurt of deciduous trees or dramatically increased during rush hour in major cities. To obtain long-term local data, measurements must come from instruments that continuously record regional greenhouse gas variations. Current records are spotty and incomplete at best.
The solution, claim Nisbet and Weiss, is better monitoring. Aircraft studies are important, as are satellites and terrestrial sensors. New models will have to be developed and new factors incorporated into the existing crop of wonky general circulation models (GCMs). There is also some basic science to be done: for CO2, biological fluxes remain poorly quantified. The list of tools, activities and improvements goes on and on. But all of these instruments, and the people to staff them, will cost money.
Climate science wants more airborne sensors.
“[C]ompared to the scale of the climate problem, in situ measurements, and even satellites, are relatively small investments,” insist the authors. “But verification of emissions demands a sustained multiyear effort that can be anathema to current approaches to research support, especially as the work has a strong discovery component.” In short, they need greater funding on a long-term basis.
This always seems to be the result when climate scientists are asked for more accurate predictions. The IPCC and climate change alarmists say they are confident that human emissions will have an effect on future climate. But when pressed for specifics, they admit that the data are not accurate enough and that the probabilities for particular outcomes are unreliable.
The solution, they say, is to send more money. More money for more instruments, more money for more studies, and more money for more computer models. In the meantime, governments and the public are expected to take concrete actions to curb GHG emissions based on climate science's self-professed inaccurate predictions. They guess and everyone else sacrifices.
This goes back to the three pillars of climate science: incomplete theory, erroneous computer models and inaccurate data. Here is an admission that the connection between human activity and CO2 emissions has not been accurately measured. Climate scientists have inferred mankind's contribution from measurements of overall atmospheric content, blaming humanity for the sharp rise in GHG levels even though there is considerable evidence that levels of such gases have been higher during previous interglacials than they are today. New sources and new sinks are being discovered every day, and nature's response to changing CO2 levels is to change itself. No wonder climate scientists haven't guessed right yet.
Be safe, enjoy the interglacial and stay skeptical.