Weathering extremes: what goes around comes around

Some brutal storms over the past year or so, such as the recent one that dropped one to two feet of snow from Tulsa to Chicago and beyond, at times with hurricane-force winds, have been given all sorts of monikers, “Frankenstorm,” “Snowmageddon,” etc., to emphasize how bad, and perhaps how unique, they were. Some incautious observers have taken such events as signs of global warming. Moreover, recent record cold spells have prompted seemingly oxymoronic, perhaps ad hoc, statements purporting that it will get colder as it gets warmer (that is, that we will have more severe cold winters as global warming progresses).

The impact of global warming to date is “relatively” slight, and no one can discern that a particular flood, typhoon, tornado, drought, that cloud over there, etc.,  was due to global warming.

We meteorologists know that “what goes around comes around”; that the “50-year” and the “100-year” floods will recur.  That is, we know that extreme events will occur without any need to implicate global warming.
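As a reminder of what those labels mean: a “T-year” event has, by definition, a probability of about 1/T of occurring in any given year, so over a long enough stretch its recurrence is close to certain.  Here is a minimal sketch of that arithmetic (plain Python; the horizons chosen are illustrative only):

```python
# Probability of at least one "T-year" event within a horizon of n years,
# under the usual return-period idealization of independent years,
# each with probability 1/T.
def prob_at_least_one(T: float, n: int) -> float:
    return 1.0 - (1.0 - 1.0 / T) ** n

for T in (50, 100):
    for n in (30, 100):
        print(f"{T}-year event, {n}-year horizon: "
              f"{prob_at_least_one(T, n):.0%} chance of at least one occurrence")
```

Over a century, a “100-year” flood is more likely than not to show up (about a 63% chance); its absence, not its occurrence, would be the surprise.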

Furthermore, there are proxy climate records, such as tree rings, that are rather good at delineating past droughty and wet periods (they are more problematic for reconstructing temperature); from them you can get quite a good handle on the precipitation regimes of the past few hundred years.  These, too, can tell us about the extremes of past climate over hundreds of years, and therefore what to expect in the future sans global warming effects.

Perhaps the most important paper published in this proxy climate domain, in this writer’s opinion, appeared in 1994: a study of rainfall epochs deduced from tree rings in central California by Haston and Michaelsen, published in J. Climate.  It is fortunate that this study appeared before the global warming “media blitz” in which otherwise reasonable people and media outlets assign all kinds of anomalous weather events to signs of global warming.

What was the main conclusion of that J. Climate paper regarding rainfall regimes over the past 600 years in California???

It was astonishing.

The authors concluded that California’s water retention and flood control infrastructure had been built on the basis of an unusually low degree of climate variability during the instrumental record, largely confined to the period after 1900.  The longer tree ring record, however, CLEARLY indicated that much LARGER fluctuations in California’s rainfall regimes had occurred before the instrumental record.  These findings led the authors to suggest that California was not likely to be well prepared for the floods and droughts of the future, since it can be assumed that the larger variability in rainfall found in the past will recur.
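The heart of that comparison is easy to state: is the variability of the instrumental era representative of the longer record?  A minimal sketch of the comparison (Python, using purely synthetic stand-in data constructed to mimic the paper’s finding; nothing below is the actual reconstruction):

```python
import random
import statistics

# Purely synthetic stand-in for a tree-ring-based annual rainfall
# reconstruction (inches per year).  Wider swings before 1900 are built in
# to mimic the paper's finding, so these numbers are NOT real data.
random.seed(0)
reconstruction = {
    year: random.gauss(15.0, 6.0 if year < 1900 else 3.0)
    for year in range(1400, 2000)
}

instrumental_era = [r for y, r in reconstruction.items() if y >= 1900]
pre_instrumental = [r for y, r in reconstruction.items() if y < 1900]

# The comparison at the heart of the argument: infrastructure sized to the
# post-1900 spread would be undersized for the spread of the full record.
print(f"post-1900 std dev: {statistics.stdev(instrumental_era):5.2f} in.")
print(f"pre-1900  std dev: {statistics.stdev(pre_instrumental):5.2f} in.")
```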

The record rains of the 1997-98 El Niño and 2004-2005 rainfall seasons, accompanied by almost unheard-of flows of water into the basins of Death Valley in 2005, and the “unprecedented” drought of the 2001-2002 rainfall season, in which some coastal southern California sites received less than 2 inches (!), were largely foretold by those 1994 findings.  Moreover, owing to “teleconnections,” the larger climate variance found in central California prior to record keeping can be expected to have repercussions in the adjacent states.

What “goes around” has already begun to come around.

But in today’s world, these anomalous weather events will not be seen as just “what goes around comes around,” but rather will be labeled in toto as evidence of the pernicious effects of global warming.

That’s just plain wrong, and most meteorologists understand this.

The extreme events of late, if you are on board the GW bandwagon, could reasonably be said to have been “tweaked” by GW at best.  Perhaps without GW, that snowstorm in Chicago would have dropped “only” 18.3 inches instead of 20, the difference associated with slightly higher overall temperatures and the attendant enhanced moisture content.

The late Prof. Joanne Simpson, former president of the American Meteorological Society, warned about the dangers of exaggeration in the early days of global warming claims, in 1989, when many scientists were dubious of those claims.  She recounted her experiences with the exaggerated claims promulgated in the cloud seeding domain in which she worked.  In her talk at the Conference on Statistics and Probability in the Atmosphere, Monterey, 1989, given as President-elect of the American Meteorological Society, Dr. Simpson warned:

“Lacking that lesson, our community has once again stumbled into the weather modification paradox concerning global warming, where we may be damaging our credibility again for the same basic reason.

“What is the weather modification paradox?  It is the tendency to exaggerate man-made alterations to the atmosphere owing to the great difficulty in distinguishing definitively between natural variability in the system and anthropogenic effects, whether the perceived man-made change is small-scale rain produced by intentional cloud seeding or whether it is long-range global warming as a by-product of industry and agriculture.”

and, near the end of her talk that day, she re-emphasized this point:

“While it is not entirely clear what the decision makers of the world can and should do, I hope at least that we meteorologists have learned some hard lessons.  I hope that we have learned enough from the harm that we and our colleagues have caused over the years by exaggerated claims and exaggerated scare stories.  I hope that we will be more cautious in how we express ourselves, especially to the media–that is a difficult challenge to say the least.”

Joanne Simpson was not too skeptical about a global warming future, but she was concerned about how we spoke to the public about it.

Amen.

PS:  Prof. Simpson’s full address, which she provided to me soon after it was delivered, can be found here:  all of Joanne Simpson’s banquet talk wx mod and GW_1989.

————————————————————————————————————————————————————————————————-

BTW, and unexpectedly, global temperatures have stabilized over the past 10 years or so in spite of continuing increases in CO2, as many of you know (see What happened to GW_Sci_Oct 2, 2009).  One explanation posited for this is a drying of the stratosphere, something that would allow more of the earth’s heat to escape into space (Solomon et al. 2010, Science).  Another explanation for at least part of this “stabilization” arises from a climate model using recent ocean current data.  The output from this model attributed a cooling of the northern hemisphere continents to a recent slowing of Atlantic Ocean currents, and furthermore predicted that this slowing and the continental cool spell were likely to last 10-20 more years (Keenlyside et al. 2008 in the journal Nature, summarized by Richard Kerr in Science).  Finally, we have an aerosol “wild card” out there.  Aerosols are thought to have the net effect of lowering global temperatures, but the models used by the IPCC4 were only able to parameterize those effects crudely.  One down-sized climate model (20-25 km grid spacing instead of 200-250 km) has, in preliminary runs, suggested a larger role for aerosols in cooling the planet, a kind of inadvertent “geoengineering.”  (These latter results have not been published that I know of, so this comment can be considered little more than gossip at this time, but it came from a good “insider” source.)

However, imagine how pathetic such a smoggier world would be: smog everywhere, views of the Grand Canyon mucked up, even thin, smog-laden stratocu looking dark and ugly on the bottom as more light was reflected back into space from their tops due to smaller drops, etc.  I get upset even thinking about how awful that smoggier, less warm world would be!  Don’t “geoengineer” in this way!!!

Montford’s The Hockey Stick Illusion, p269: climate change meets cloud seeding

“McIntyre’s first step in trying to replicate a paper was to collate the data.   While data might be cited correctly and accurately in the papers, it was always possible that what had been used was different in some way to the official versions, whether due to an error in the archive or one made by the authors.”

At almost every turn in this monumental exposé by A. W. Montford, I see parallels with the many cloud seeding reanalyses I did at the University of Washington with Peter Hobbs.  The two sentences quoted above from Montford’s book, describing so fundamental a step in checking results, literally leapt off the page, since that is exactly where the most basic replication starts and where we always began in our cloud seeding (CS) reanalyses.

In our re-evaluation of perhaps the most important randomized wintertime cloud seeding experiments ever conducted, those at Climax, Colorado, 1961-1970, we started with the raw data that the experimenters said they had used.  These were the precipitation measurements at the cloud seeding target gage, taken by an independent organization and archived by NOAA, thus making them publicly available.  The experimenters highlighted this independence in their publications.

But when those values from NOAA were used in the re-evaluation of those Climax experiments, discrepancies were found, just as Montford reports Steve McIntyre found so often in his examinations of raw proxy data.  In our case, the experimenters’ data generally showed more snow at the NOAA target gage on seeded days, and less on control days, than was actually the case according to the NOAA data.  Furthermore, these discrepancies were found only in the second, “confirmatory” experiment (1966-1970), on the days that were supposed to respond the most to seeding.  In our re-evaluation of the second experiment (aka Climax II), the use of the NOAA precipitation values, along with other data corrections, degraded the results so badly that they did not confirm the first five-season experiment after all.  The experimenters had previously reported that Climax II had confirmed the first (aka Climax I) experiment.
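The mechanical part of that first replication step is nothing exotic: line up the two versions of the record and flag where they differ, keeping the seeded/control stratification so any systematic direction of the differences stands out.  A minimal sketch (Python, with hypothetical file and column names standing in for the actual datasets):

```python
import csv

def load_precip(path):
    """Return {date: (precip_inches, day_type)} from a CSV with columns
    date, precip_in, and day_type ("seeded" or "control")."""
    with open(path, newline="") as f:
        return {row["date"]: (float(row["precip_in"]), row["day_type"])
                for row in csv.DictReader(f)}

# Hypothetical filenames: stand-ins for the experimenters' dataset and the
# independently archived NOAA record for the same target gage.
experimenters = load_precip("experimenters_target_gage.csv")
noaa = load_precip("noaa_archive_target_gage.csv")

# Flag every common day on which the two versions disagree, noting the
# day type so any systematic direction of the errors is visible.
for date in sorted(set(experimenters) & set(noaa)):
    exp_amt, day_type = experimenters[date]
    noaa_amt, _ = noaa[date]
    if abs(exp_amt - noaa_amt) > 0.005:  # tolerance for rounding differences
        print(f"{date} ({day_type}): experimenters {exp_amt:.2f} in. "
              f"vs NOAA {noaa_amt:.2f} in.")
```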

As one might imagine, the initial reports of the “confirmation” of the earlier “exploratory” CS experiment gave those two experiments together a great deal of cachet as strong evidence that snow could be increased on a determinate basis through wintertime cloud seeding.  And they were cited as having done so by the prestigious National Academy of Sciences Panel on Climate and Weather Modification in 1974.  (As an interesting aside, that NAS Panel was also concerned at the time about the “…recent equatorward shift in ice boundaries.”)

Further work “de-constructing” those experiments at Climax, that is, the discovery of more discrepancies, can be found here.

Eventually the experimenters acknowledged the source of their errors in precipitation at the target gage in a journal exchange in 1995.

Epilogue

However, unlike the situation that McIntyre repeatedly encounters in Montford’s HSI account, where climate researchers refuse to honor requests for raw data, in our re-analysis of the cloud seeding experiments in Colorado, the experimenters at Colorado State University were totally cooperative in supplying data occasionally requested by the present writer.  They did this even though they KNEW that the requestor was a critic/skeptic who might challenge their earlier results.  To their great credit, those at CSU gave the ideals of science a higher priority than their egos, and recognized that finding problems and discrepancies is a way of advancing science, not hindering it.

The FAA and the Ideals of Science

Today, it’s not unusual to see researchers publish seemingly important findings in journals accompanied by a global news release at the time the article appears.  At that point, the research has perhaps been reviewed prior to journal publication by only a few individuals.

However, it has become fairly common for researchers associated with globally impactful findings to withhold the methodologies that led to them.  The natural result, particularly in the climate change domain, is a firestorm, since critics in that research domain are SURE that there are misdeeds or errors due to “confirmation bias.”

A case in point is the important “Hockey Stick” paper by Mann et al. 1998 (Nature), one that was to have tremendous influence before outsiders could check exactly how it came about.  This paper showed a sharp rise in global temperatures during the past 30-40 years, one commensurate with the thought that rising CO2 concentrations were already having a noticeable effect on global temperatures.  Eventually, a number of errors were found in this paper, and the Hockey Stick, as presented, was thrown into doubt.  It should be kept in mind, however, that just because the original paper was flawed does not mean there will be no such rise; this author tends to agree with the proposition that global temperatures will gradually rise in the future.

This long, tortured chapter involving the “Hockey Stick” should not have happened.  It was clearly due to the original researchers believing that their results were too important for others to learn how they got them.  Sadly, in this writer’s opinion, it is a position of the National Science Foundation as well that researchers can hide their methodologies and the exact data they used under a “proprietary” umbrella.  The scientific horror story concerning the Hockey Stick has been laid out in detail by A. W. Montford in “The Hockey Stick Illusion,” a book I highly recommend.

Withholding methodologies and data suggests that something is wrong with the outcome of the research and, furthermore, is anti-science.  Imagine: a lab announces that it has cured cancer, but can’t tell us exactly how it did it, so no one can replicate the results!  In the domain of medicine, this would be a ludicrous, surreal example; it wouldn’t happen.  It should not happen in the important climate change domain, either.

On the other hand, the view that opening the door to skeptics of your work can lead to a lessening of conflict in research domains and, even more likely, an improvement in the robustness of the original work, is one shared by numerous scientists.

Who among us science workers is so arrogant as to think our work cannot be improved upon?

While we depend on peer review to catch errors, it has been this writer’s experience that hundreds of pages of peer-reviewed literature in the domain of cloud seeding research can reach the journals and stand untouched, uncritiqued, for years at a time.  This is because peer review in conflicted research environments can easily fail through soft reviews by advocates of the conclusions being reached in a manuscript.  No scientist reading this doubts this.

The Federal Aviation Administration is fully aware of the hazards of “soft reviews.”  The attached statement at left, concerning work on the writer’s former research aircraft at the University of Washington, might well be a metaphor for our science environment: “…there will be a paper trail.”  “…there will be an inspection by someone other than the person doing the work.”

We all know that these kinds of FAA rules exist to protect us from plane crashes.  But imagine that there were no “paper trail,” no documentation of what’s on and what’s off the aircraft!  That’s how we get journal “plane crashes.”

It is the same with journal articles on scientific results.  Documenting how our results were arrived at is mandatory for purposes of replication, of which the first, most basic step is to use exactly the methodologies and data that the original researcher(s) claimed they used and see whether you get the same result.