
Systemic neglect of warnings


Anticipating Future Strategic Triple Whammies (Part #5)




In endeavouring to transcend any "knee-jerk" framings of disaster and responsibilities for it, how is consideration given to the process whereby documented warnings of potential disaster were neglected, set aside or disparaged? Clear examples are offered by:

  • Indian Ocean earthquake/tsunami (2004): Smith Thammasaroj, a former chief of Thailand's Meteorological Department, predicted in 1998 that a tsunami would hit tourist areas with a massive death toll. Officials paid no attention to him until after the disaster of December 2004: judging that a false alarm would have a very negative effect on the tourist industry, the Thai government decided to forgo any early-warning system, and he was forced to retire in 1998 under a shadow. After tens of thousands of deaths in the tsunami of 2004, he was reinstated in 2005 -- as minister in charge of the Thai disaster warning office.

  • Italian earthquake (2009): An Italian seismologist had predicted the disastrous L'Aquila earthquake of April 2009 (Seismologist predicted L'Aquila quake, Euronews, 6 April 2009). He had been reported to the authorities for spreading panic, and his warnings were removed from the web.

  • Queensland flooding (2010/2011): One widely-cited excuse by authorities for the damage in Queensland was the "exceptional" nature of the event -- a "200-year" event, necessarily beyond any reasonable government mandate. Whether that figure is statistically accurate or the consequence of faulty modelling, the more correct understanding is that such an event has a 1-in-200 probability of occurring in any given year -- the "200 years" being only a statistical average over a much more extended period. As noted by a citizen in one community at risk, it is therefore just as likely (statistically) to be repeated within a few years' time, as illustrated by the sketch following this list (Disastrous Floods as Indicators of Systemic Risk Neglect, 2011).

  • Japan earthquake/tsunami (2011): As noted by Tom Zeller Jr. (Experts Had Long Criticized Potential Weakness in Design of Stricken Reactor, Global Edition of The New York Times, 15 March 2011), warnings about the General Electric reactor design at Fukushima had begun in 1972. Warnings were ignored, as noted by Robin McKie (Japan ministers ignored safety warnings over nuclear reactors, The Guardian, 12 March 2011), notably those made by a Japanese seismologist stating specifically that such an accident was highly likely to occur (Ishibashi Katsuhiko, Why Worry? Japan's Nuclear Plants at Grave Risk From Quake Damage, The Asia Pacific Journal: Japan Focus, 11 August 2007).

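The statistical point in the Queensland example lends itself to a brief illustration. The following is a minimal sketch, not drawn from the sources cited above, assuming the conventional interpretation of a return period: a "200-year" event has a 1-in-200 probability of occurring in any given year, independently of when it last occurred.

```python
def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    """Probability of at least one occurrence within the horizon, given an
    annual exceedance probability of 1 / return_period_years."""
    p_annual = 1.0 / return_period_years
    return 1.0 - (1.0 - p_annual) ** horizon_years

if __name__ == "__main__":
    # A "200-year" flood is not excluded for the next 200 years once it has
    # happened: the chance of a repeat within a few years is small but real,
    # and over longer horizons it becomes substantial.
    for horizon in (1, 5, 10, 50, 200):
        p = prob_at_least_one(200, horizon)
        print(f"Chance of a '200-year' event within {horizon:>3} years: {p:.1%}")
```

On that assumption the event is never "used up" by occurring: the probability of a repeat in any subsequent year is unchanged, which is precisely the point made by the citizen quoted above -- the "200 years" is an average, not a guaranteed interval of respite.
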
John Vidal argues that an untrustworthy nuclear industry, incompetently regulated, is leading the world into greater and greater danger (What will spark the next Fukushima?, The Guardian, 14 March 2011):

Even though Japan had been warned many times that possibly the most dangerous place in the world to site a nuclear power station was on its coast, no one had taken into account the double-whammy effect of a tsunami and an earthquake on conventional technology. It's easy to be wise after the event, but the inquest will surely show that the accident was not caused by an unpredictable natural disaster, but by a series of highly predictable bad calls by human regulators.

As with the framing of the Queensland floods as unforeseeable "200-year events", the Japanese disaster has been similarly framed, as noted by Mitsuyoshi Numano (Beyond expectations, International Herald Tribune, 21 March 2011):

What is hard to accept... is that the electrical power companies and government agencies tried to account for the disaster by explaining that the circumstances that led up to it were far outside the bounds of anything that could have been predicted -- in their words, "beyond all expectations". We have heard this phrase repeatedly on television reports.... But it has been obvious all along that science and technology can deal only with things that fall within the range of what can be expected.

What authoritative planning process is effectively designed to marginalize and disparage such warnings -- denying the relevance of data points or "massaging" them in support of other arguments? More intriguing is how subsequent authoritative inquiries are designed to ensure that no one is upheld as blameworthy in disasters such as that experienced in Japan. How does "arrogance" work in justifying otherwise questionable strategic conclusions? Is the phenomenon of "arrogance" to be considered scientifically meaningless? Commenting on the Fukushima disaster, astrophysicist Satoru Ikeuchi (Arrogance of science, International Herald Tribune, 21 March 2011) cites physicist Torahiko Terada (The more civilization progresses, the greater the violence of nature's wrath) as preamble to his statement:

Scientists and engineers think they are responding to the demands of society, but they have forgotten their larger responsibilities to society, emphasizing only the positive aspects of their endeavours... Japan reached global prominence through science and technology, but we cannot deny that this has also resulted in an arrogance that has diminished our ability to imagine disaster. We have fallen into the trap of being stupefied by civilization.

Should those complicit in the neglect of systemic warnings be recognized, through their risk-taking, as potentially complicit in crimes against humanity? This was a question raised with respect to the terror experienced by those exposed to the financial crisis (Extreme Financial Risk-taking as Extremism -- subject to anti-terrorism legislation? 2009).

Of potential relevance to the recognition of technological arrogance and overconfidence is research cited by Michael Shermer (Financial Flimflam: why economic experts' predictions fail, Scientific American, March 2011), namely the investigation of self-deception among professional prognosticators by Philip E. Tetlock (Expert Political Judgment, 2005):

There was one significant factor in greater prediction success, however, and that was cognitive style: 'foxes' who know a little about many things do better than 'hedgehogs' who know a lot about one area of expertise. Low scorers, Tetlock wrote, were 'thinkers who 'know one big thing,' aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who 'do not get it,' and express considerable confidence that they are already pretty proficient forecasters.' High scorers in the study were 'thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible 'ad hocery' that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.'

One clear factor is the pressure to define the focus of a technology sufficiently narrowly -- limiting consideration of its effects over time, on the environment, on employment, and on other sectors. From a broader systemic perspective, this could be recognized as completely unscientific, asystemic and irresponsible -- except in the sense of responding with the utmost methodological care (beyond any possible criticism) within a pre-defined boundary. This approach could be named pejoratively as "conceptual gerrymandering" -- namely choosing the boundaries to accord with the strategic commitment, and avoiding any challenge to it.

