Stanford expert: 'Black swans' and 'perfect storms' become lame excuses for bad risk management
Instead of reflecting on the unlikelihood of rare catastrophes after the fact, Stanford risk analysis expert Elisabeth Paté-Cornell prescribes an engineering approach to anticipate them when possible, and to manage them when not.
The terms "black swan" and "perfect storm" have become part of the public vocabulary for describing disasters ranging from the 2008 meltdown in the financial sector to the terrorist attacks of Sept. 11, 2001. But according to Elisabeth Paté-Cornell, a Stanford professor of management science and engineering, people in government and industry are using these terms too liberally in the aftermath of a disaster as an excuse for poor planning.
Her research, published in the November issue of the journal Risk Analysis, suggests that other fields could borrow risk analysis strategies from engineering to make better management decisions, even in the case of once-in-a-blue-moon events where statistics are scant, unreliable or non-existent.
Paté-Cornell argues that a true "black swan" – an event that is impossible to imagine because we've known nothing like it in the past – is extremely rare. The AIDS virus is one of very few examples. Usually, there are important clues and warning signs of emerging hazards (e.g., a new flu virus) that can be monitored to guide quick risk management responses.
The attacks of 9/11 were not black swans, she said. The FBI knew that questionable people were taking flying lessons on large aircraft. And a group of terrorists seemed to have had a similar plan in 1994, when they hijacked an Air France aircraft in Algiers, Algeria, that was bound for Paris.
Similarly, she argues that the risk of a "perfect storm," in which multiple forces join to create a disaster greater than the sum of its parts, can be assessed systematically before the event. Even though such conjunctions are rare, the events that compose them, and the dependencies among them, have been observed in the past.
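As a rough illustration of why such conjunctions can be assessed, the sketch below combines two hypothetical hazard frequencies. The events, probabilities and the dependence between them are invented for the example, not taken from Paté-Cornell's work.

```python
# Toy illustration (not from the article): the probability of a "perfect
# storm" can be estimated from the observed frequencies of its component
# events, but only if their dependence is modeled.

p_storm = 0.05      # hypothetical annual probability of a severe storm surge
p_pump_fail = 0.02  # hypothetical annual probability of a drainage-pump outage

# If the events were independent, the conjunction would be very rare:
p_joint_indep = p_storm * p_pump_fail  # 0.001

# But a surge can itself knock out the pumps. Conditioning captures that:
p_pump_fail_given_storm = 0.30  # hypothetical conditional probability
p_joint_dep = p_storm * p_pump_fail_given_storm  # 0.015, fifteen times higher

print(f"independence assumed: {p_joint_indep:.4f}")
print(f"with dependence:      {p_joint_dep:.4f}")
```

The point of the toy numbers is that ignoring the dependence understates the "perfect storm" risk by an order of magnitude, which is exactly the kind of error a systematic analysis is meant to catch.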
"Risk analysis is not about predicting anything before it happens, it's just giving the probability of various scenarios," she said. She argues that systematically exploring those scenarios can help companies and regulators make smarter decisions before an event in the face of uncertainty.
Think like an engineer
An engineering risk analyst thinks in terms of systems, their functional components and their dependencies, Paté-Cornell said. For instance, in many plants that require cooling, generators, turbines, water pumps, safety valves and more all contribute to making the system work. Therefore, the analyst must first understand the ways in which the system works as a whole in order to identify how it could fail. The same method applies to medical, financial or ecological systems.
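A minimal sketch of that systems view, with invented components and failure probabilities, might look like the following; the redundancy structure and all numbers are assumptions for illustration, not data about any real plant.

```python
# Hypothetical cooling system: model components with failure probabilities,
# then combine them according to the system's logic (redundant vs. single
# points of failure) to get a system-level failure probability.

p_pump = 0.01          # primary cooling pump fails on demand
p_backup_pump = 0.05   # backup pump fails on demand
p_valve = 0.002        # safety valve fails
p_generator = 0.01     # emergency generator fails

# The pumps are redundant (both must fail); the valve and generator are
# each single points of failure (either one failing disables cooling).
p_pumps = p_pump * p_backup_pump  # assumes independent pump failures
p_any_single = 1 - (1 - p_valve) * (1 - p_generator)
p_system = 1 - (1 - p_pumps) * (1 - p_any_single)

print(f"P(loss of cooling) = {p_system:.5f}")
```

Writing the system down this way forces the analyst to state which components back each other up and which failures propagate, which is the understanding Paté-Cornell says must come before any probability is assigned.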
In the case of a nuclear plant, the seismic activity or the potential for tsunamis in the area must be part of the equation, particularly if local earthquakes have historically led to tidal waves and destructive flooding. Paté-Cornell noted that the designers of the Fukushima Daiichi nuclear power plant ignored important historical precedents, including earthquakes in 869 and 1611 that generated waves similar to those witnessed in March 2011.
Paté-Cornell said that a systematic approach is also relevant to the human aspects of risk analysis.
"Some argue that in engineering you have hard data about hard systems and hard architectures, but as soon as you involve human beings, you cannot apply the same methods due to the uncertainties of human error. I do not believe this is true," she said.
In fact, Paté-Cornell and her colleagues have long been incorporating "soft" elements into their systems analysis to calculate the probability of human error. They look at all the people with access to the system and factor in any available information about past behaviors, training and skills.
Paté-Cornell has found that human errors, far from being unpredictable, are often rooted in the way an organization is managed. "We look at how the management has trained, informed and given incentives to people to do what they do and assign risk based on those assessments," she said.
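One hedged way to picture this, which is not her actual model, is to scale a base human-error rate by organizational factors; the factor names, values and multiplicative form below are illustrative assumptions.

```python
# Sketch only: adjust a base human-error rate for "soft" organizational
# factors such as training, information and incentives. Multipliers
# greater than 1 worsen the rate; multipliers less than 1 improve it.

base_error_rate = 0.001  # hypothetical errors per critical operation

factors = {
    "recent_training": 0.5,   # a well-trained crew halves the base rate
    "time_pressure": 2.0,     # incentives that reward speed double it
    "clear_procedures": 0.8,  # good information reduces it modestly
}

adjusted_rate = base_error_rate
for name, multiplier in factors.items():
    adjusted_rate *= multiplier

print(f"adjusted human-error rate: {adjusted_rate:.5f}")  # 0.00080
```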
A proven approach
Paté-Cornell has successfully applied this approach to the field of finance, where she has estimated the probability that an insurance company would fail given its age and its size. She said companies funded her research because they needed forward-looking models that their financial analysts generally did not provide. Traditional financial analysis, she said, is based on evaluating existing statistical data about past events. In her view, analysts can better anticipate market failures – like the financial crisis that began in 2008 – by recognizing precursors and warning signs, and factoring them into a systemic probabilistic analysis.
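One simple mechanism by which precursors can be folded into a probabilistic analysis is Bayesian updating. The sketch below uses hypothetical numbers and a hypothetical precursor, not her firm-failure model.

```python
# Illustrative only: a forward-looking analysis revises a failure
# probability when a warning sign is observed, rather than relying
# solely on historical base rates.

p_fail = 0.02                  # prior annual failure probability for a firm
p_precursor_given_fail = 0.70  # precursor (say, a liquidity squeeze) seen
                               # before most failures
p_precursor_given_ok = 0.10    # but also sometimes seen in healthy firms

# Bayes' rule: P(fail | precursor observed)
p_precursor = (p_precursor_given_fail * p_fail
               + p_precursor_given_ok * (1 - p_fail))
p_fail_given_precursor = p_precursor_given_fail * p_fail / p_precursor

print(f"prior:     {p_fail:.3f}")                   # 0.020
print(f"posterior: {p_fail_given_precursor:.3f}")   # 0.125
```

In this toy example the warning sign multiplies the estimated failure probability more than sixfold, which is the sense in which such a model is "forward-looking" rather than a summary of past statistics.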
Medical specialists must also make decisions in the face of limited statistical data, and Paté-Cornell said the same approach is useful for calculating patient risk.
She used systems analysis to assess data about anesthesia accidents – where human mistakes can create an accident chain that, if not recognized quickly, puts the patient's life in danger. Based on her results, she suggested retraining and recertification procedures for anesthesiologists to make their system safer.
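A bare-bones sketch of such an accident chain, with hypothetical probabilities, shows why improving recognition at any stage reduces the end-to-end risk; none of these numbers come from her study.

```python
# Hypothetical accident chain: an initial mistake endangers the patient
# only if it goes unrecognized at each successive stage.

p_initial_error = 0.01     # anesthesia mistake occurs
p_missed_by_monitor = 0.2  # monitoring fails to flag it
p_missed_by_team = 0.1     # team fails to recognize the alarm in time

p_harm = p_initial_error * p_missed_by_monitor * p_missed_by_team
print(f"P(patient harmed) = {p_harm:.5f}")  # 0.00020

# Retraining that halves the team's miss rate halves the end-to-end risk:
p_harm_retrained = (p_initial_error * p_missed_by_monitor
                    * (p_missed_by_team / 2))
print(f"after retraining  = {p_harm_retrained:.5f}")  # 0.00010
```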
"Lots of people don't like probability because they don't understand it," she said, "and they think if they don't have hard statistics, they cannot do a risk analysis." In fact, we generally do a system-based risk analysis because we do not have reliable statistics about the performance of the whole system.
Kelly Servick is a science-writing intern at the Stanford University School of Engineering.
Elisabeth Paté-Cornell, Management Science & Engineering: (650) 723-3823, [email protected]
Andrew Myers, School of Engineering: (650) 736-2245, [email protected]
Dan Stober, Stanford News Service: (650) 721-6965, [email protected]