Anticipating Future Strategic Triple Whammies (Part #5)
[Parts: First | Prev | Next | Last | All | PDF] [Links: To-K | From-K | From-Kx | Refs ]
John Vidal argues that an untrustworthy nuclear industry, incompetently regulated, is leading the world into greater and greater danger (What will spark the next Fukushima?, The Guardian, 14 March 2011):
Even though Japan had been warned many times that possibly the most dangerous place in the world to site a nuclear power station was on its coast, no one had taken into account the double-whammy effect of a tsunami and an earthquake on conventional technology. It's easy to be wise after the event, but the inquest will surely show that the accident was not caused by an unpredictable natural disaster, but by a series of highly predictable bad calls by human regulators.
As with the framing of the Queensland floods as unforeseeable "200-year events", the Japanese disaster has been similarly framed, as noted by Mitsuyoshi Numano (Beyond expectations, International Herald Tribune, 21 March 2011):
What is hard to accept... is that the electrical power companies and government agencies tried to account for the disaster by explaining that the circumstances that led up to it were far outside the bounds of anything that could have been predicted -- in their words, "beyond all expectations". We have heard this phrase repeatedly on television reports.... But it has been obvious all along that science and technology can deal only with things that fall within the range of what can be expected.
What authoritative planning process is effectively designed to marginalize and disparage such warnings -- denying the relevance of data points or "massaging" them in support of other arguments? More intriguing is how subsequent authoritative inquiries are designed to ensure that no one is upheld as blameworthy in disasters such as that experienced in Japan. How does "arrogance" work in justifying otherwise questionable strategic conclusions? Is the phenomenon of "arrogance" to be considered scientifically meaningless? Commenting on the Fukushima disaster, astrophysicist Satoru Ikeuchi (Arrogance of science, International Herald Tribune, 21 March 2011) cites physicist Torahiko Terada (The more civilization progresses, the greater the violence of nature's wrath) as preamble to his statement:
Scientists and engineers think they are responding to the demands of society, but they have forgotten their larger responsibilities to society, emphasizing only the positive aspects of their endeavours... Japan reached global prominence through science and technology, but we cannot deny that this has also resulted in an arrogance that has diminished our ability to imagine disaster. We have fallen into the trap of being stupefied by civilization.
Should those complicit in the neglect of systemic warnings be recognized, through their risk-taking, as potentially complicit in crimes against humanity? This was a question raised with respect to the terror experienced by those exposed to the financial crisis (Extreme Financial Risk-taking as Extremism -- subject to anti-terrorism legislation? 2009).
Of potential relevance to recognition of technological arrogance and overconfidence is research cited by Michael Shermer (Financial Flimflam: why economic experts' predictions fail, Scientific American, March 2011), namely the self-deception of professional prognosticators as investigated by Philip E. Tetlock (Expert Political Judgment, 2005):
There was one significant factor in greater prediction success, however, and that was cognitive style: 'foxes' who know a little about many things do better than 'hedgehogs' who know a lot about one area of expertise. Low scorers, Tetlock wrote, were 'thinkers who 'know one big thing,' aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who 'do not get it,' and express considerable confidence that they are already pretty proficient forecasters.' High scorers in the study were 'thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible 'ad hocery' that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.'
One clear factor is the pressure to define the focus of a technology sufficiently narrowly -- with respect to its effects over time, on the environment, on employment, and on other sectors. From a broader systemic perspective, this could be recognized as completely unscientific, asystemic and irresponsible -- except in the sense of responding with the utmost methodological care (beyond any possible criticism) within a pre-defined boundary. This approach could be named pejoratively as "conceptual gerrymandering": choosing the boundaries to accord with the strategic commitment, and thereby avoiding any challenge to it.