Douglas Hubbard, in The Failure of Risk Management: Why It's Broken and How to Fix It, presents a strongly worded critique of qualitative risk management methodologies. For a subject as potentially dry as risk management, Hubbard's disdain for most of those who practice non-mathematical risk analysis keeps this ultimately persuasive book entertaining. While his vitriol sometimes detracts from the rhetorical effectiveness of his argument, one comes to understand his frustration with those who peddle unproven, possibly worse-than-useless risk analysis techniques. This is a field, as we've seen recently with both the global financial meltdown and the BP Gulf of Mexico oil spill, where failures can have effects far beyond individual firms. With all his focus on debunking common qualitative methods, though, Hubbard doesn't stop there. He ultimately mounts a defence of quantitative modelling (so excoriated by many in the wake of the sub-prime debacle) and makes some very useful, practical suggestions on how to use quantitative methods (such as Monte Carlo simulation and Bayesian analysis) effectively.
In order to examine how successful risk management has been, Hubbard divides the approaches to risk management into four categories based on who originally devised them:
* Actuaries
* War Quants
* Economists
* Management Consultants
Hubbard sees the first three as comprising progressively more sophisticated methods of analysis. From the actuarial tables of insurance companies, to the Probabilistic Risk Analysis (PRA) devised to predict failures in complex wartime logistics (and which led ultimately to Monte Carlo analysis, of which Hubbard is a devotee), to Harry Markowitz's Modern Portfolio Theory (MPT) and the options theory of Black, Scholes and Merton, Hubbard sees the increasingly probabilistic yet thoroughly quantitative views of risk as being all to the good. The problem with these methods, however, is that they are not always intuitive and can be difficult to perform (anyone who has ever tried to do Monte Carlo analysis without the aid of a computer will attest to this).
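To see what that difficulty looks like in practice, here's a minimal Monte Carlo sketch of my own devising (the project, the budget, and all the cost ranges are invented for illustration, not taken from the book): estimate the chance that an uncertain project blows its budget by simulating it many thousands of times.

```python
import random

TRIALS = 100_000
BUDGET = 1_000_000

overruns = 0
for _ in range(TRIALS):
    # Draw each uncertain cost from a triangular(low, high, mode) distribution
    labour   = random.triangular(300_000, 700_000, 400_000)
    hardware = random.triangular(150_000, 400_000, 200_000)
    downtime = random.triangular(50_000, 500_000, 150_000)
    if labour + hardware + downtime > BUDGET:
        overruns += 1

print(f"P(cost > budget) ≈ {overruns / TRIALS:.1%}")
```

One hundred thousand trials take a blink on a laptop and a lifetime by hand, which is exactly why these methods only became practical with cheap computing.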
This difficulty leads to what Hubbard sees as the real problem with risk management: management consultants. As a consultant myself, I try not to take this personally, but Hubbard's analysis is spot-on. The ability to make people believe real analysis is happening, while distilling it into easy, PowerPoint-ready chunks, is the real talent of many consultants. Drawing from his own experience as an MBA consultant with Coopers & Lybrand in the 80s, he points out how "structured methodologies," which have all the appearance of being based on proven theories and which are easily grasped by non-technical senior managers, not only do no good for their clients but mask serious risk issues. An example of this is the typical risk matrix, which plots subjective likelihood and impact scores on a coloured grid.
He argues that there is no evidence that these sorts of analytical tools do anything constructive, beyond giving management the feeling that something is being done about risk. He ironically offers a set of suggestions on how consultants should sell their useless wares:
Sell what feels right. Clients will not be able to differentiate a placebo effect from real value in most risk management methods. The following tricks seem to work to produce the sense of value:
- Convert everything to a number, no matter how arbitrary. Numbers sound better to management. If you call it a score, it will sound more like golf, and it will be more fun for them.
- As long as you have at least one testimonial from one person, you are free to use the word proven as much as you like.
- Use lots of “facilitated workshops” to “build consensus.”
- Build a giant matrix to “map” your procedure to other processes and standards. It doesn’t really matter what the map is for. The effort will be noticed.
- Optional: Develop a software application for it. If you can carry on some calculation behind the scenes that they don’t quite understand, it will seem much more like magic and, therefore, more legitimate.
- Optional: If you go the software route, generate a “spider diagram” or “bubble chart.” It will seem more like serious analysis.
The net result of this has been that the most popular risk management methodologies today are developed in complete isolation from more sophisticated risk management methods known to actuaries, engineers, and financial analysts. Whatever the flaws of some of these quantitative methods, the methods developed by the management consultants are the least supported by any theoretical or empirical analysis.
(You see what I mean about how his presentation lacks even the appearance of impartiality.)
Hubbard traces the difficulty with risk management, and the ability of people to imagine that non-quantitative methods are in any way useful, to the lack of consistent definitions of the terms uncertainty, ignorance, unknowability and risk. While this might seem like an esoteric discussion for what aims to be a practical book on how to fix risk management, it really does get to the heart of the matter. I have to admit it's refreshing to see someone take on the specialist definitions of risk because, as someone who studies risk, I've been confounded by the varying definitions from finance, project management, general line management, and others. I remember first encountering the finance definition of risk as being equivalent to volatility when dealing with the Capital Asset Pricing Model (CAPM) in M&A consulting and thinking, "OK, that's a stipulative definition I'm not familiar with, but it doesn't really seem to mean what normal people mean when they use the word 'risk'". Hubbard eventually argues for a definition that:
- includes some probability of a loss
- involves only losses, not gains
- is not synonymous with volatility (outside finance); the sketch below makes this distinction concrete
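Here is a minimal sketch of my own (not an example from the book): two hypothetical assets with nearly identical volatility but very different chances of an actual loss, which is the quantity Hubbard's definition of risk cares about.

```python
import random
import statistics

random.seed(1)

# Two hypothetical assets with the same return volatility (~2%) but
# different mean returns, so only one has a real chance of a loss.
asset_a = [random.gauss(0.10, 0.02) for _ in range(10_000)]  # mean 10%
asset_b = [random.gauss(0.01, 0.02) for _ in range(10_000)]  # mean 1%

for name, returns in (("A", asset_a), ("B", asset_b)):
    volatility = statistics.stdev(returns)
    p_loss = sum(r < 0 for r in returns) / len(returns)
    print(f"Asset {name}: volatility = {volatility:.1%}, P(loss) = {p_loss:.1%}")
```

By the finance definition the two assets are equally "risky"; by Hubbard's, only asset B carries meaningful risk.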
The ultimate aim of this book is to argue in favour of quantitative and probabilistic methods of describing and analysing risk. The early parts of the book, which debunk qualitative methods and clarify the confused (and confusing) terminology in the field, are necessary to clear the way for him to advocate for methods like Monte Carlo modelling and Bayesian analysis. Along the way, however, I think he makes a very good point about the usefulness of quantitative models and their defensibility with regard to the financial crisis that sprang from the sub-prime debacle of 2007-2008. Since I was VP of strategy for one of the banks most affected by the meltdown and a direct customer of those models, I've spent a lot of time thinking about the models the investment bankers used to predict the viability of certain kinds of loans. I specifically remember sitting in a meeting with one such group of bankers who were explicitly asking us for zero-down, no-documentation loans, thinking "that's crazy…there's no reason for a no-documentation loan except to lie about something. If you've got income, you've got documentation". When I inquired how such a loan could possibly work, they assured me that they had mathematical models showing that, if packaged together in a way that spread the risk throughout a securitised pool of loans, it ended up being safe as houses. As it were. I was new in my job, so I kept quiet and figured there was just something I wasn't getting.
But when the sub-prime market imploded and took the world economy with it (after I'd moved on to consulting), a great deal of criticism was directed at the investment bankers and their mathematical models. People like Nassim Nicholas Taleb (author of The Black Swan and Fooled by Randomness) and Michael Lewis (author of The Big Short and Liar's Poker) argued that the models themselves were flawed because, among other things, they didn't and couldn't take into account uncertainty and randomness. In fact, Taleb argues, the models can't possibly adequately describe risk and therefore should be scrapped in favour of more intuitive approaches that take into consideration extremely unlikely events. Blind adherence to complex mathematical models with little correlation to the real world, many people argued, is what caused the sub-prime meltdown, as it had caused the crash-and-burn of Long-Term Capital Management a decade before.
It turns out that it wasn't the models themselves that were wrong, Hubbard and others have argued persuasively. Used properly and with the right data, quantitative risk models could have predicted that the sub-prime market would fail. The problem was that the bankers were feeding the models short-term inputs (data from the previous decade, when the real estate market did nothing but rise) and unwarranted assumptions (if there is a correction, it will be minor). As I've pointed out on this blog before, the chief mistake of those investment bankers was assuming that the market would always go up. Without that assumption (which was seemingly validated by the data of the late 90s and early 00s), the models would have shown that there was a good chance the sub-prime securities would go very bad indeed. The stylised sketch below shows how much that one assumption matters.
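This is a toy Monte Carlo model of my own (the drift and volatility figures and the 80%-of-par loss threshold are invented for illustration, not taken from the book or from any actual pricing model), showing how the assumed house-price drift alone can swing the modelled risk of a loan pool from negligible to alarming:

```python
import random

def prob_pool_loss(annual_drift, annual_vol=0.10, years=5, trials=50_000):
    """Estimate the chance a loan pool takes a loss, assuming (purely for
    illustration) a loss occurs if collateral falls below 80% of par."""
    losses = 0
    for _ in range(trials):
        price = 1.0
        for _ in range(years):
            price *= 1 + random.gauss(annual_drift, annual_vol)
        if price < 0.80:
            losses += 1
    return losses / trials

# Calibrated only to the boom years: prices "always go up"
print(f"Assumed +8% drift: P(loss) = {prob_pool_loss(0.08):.1%}")
# Admitting that a sustained correction is possible
print(f"Assumed -2% drift: P(loss) = {prob_pool_loss(-0.02):.1%}")
```

Same model, same mathematics; only the input assumption changed.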
Hubbard understands that even the most sophisticated and structurally correct risk models are susceptible to bad inputs, which is why he spends much of the later portions of the book describing how to avoid the errors in judgment and estimation that lead to such inputs. In doing so, he significantly strengthens his case for building these complex models. He even takes on some of the issues raised by behavioural economists, because if people do not act rationally, it is hard to build models suggesting that they do. His answer: while people might not act rationally, they act predictably, which is all one really needs to build models.
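Since the book pairs Monte Carlo simulation with Bayesian analysis as its preferred tools, here is a minimal sketch of the Bayesian flavour of thinking, using a beta-binomial model of my own choosing (not an example from the book): rather than trusting a single point estimate drawn from a short, benign sample, you state a prior belief and update it as evidence arrives.

```python
# Prior belief about a loan pool's default rate, expressed as a
# Beta(alpha, beta) distribution: defaults are rare, but we're unsure.
alpha, beta = 2, 50

# New evidence arrives: 12 defaults observed among 100 loans.
observed_defaults, observed_loans = 12, 100

# Conjugate beta-binomial update
alpha += observed_defaults
beta += observed_loans - observed_defaults

posterior_mean = alpha / (alpha + beta)
print(f"Posterior mean default rate: {posterior_mean:.1%}")  # ≈ 9.2%
```

The point isn't the arithmetic, which is trivial, but the discipline: estimates are explicitly provisional and must move when the data does.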
The conclusion of The Failure of Risk Management: Why It’s Broken and How to Fix It deals with several practical matters that help people build accurate, complete models for risk, including creating cultural and management incentives for accuracy. This is a must-read for risk professionals, especially those of us for whom Hubbard reserves his special disdain. I have already found myself returning to some of the tools I used in the past (Monte Carlo simulations) and exploring new ones (Bayesian analysis). Given the stakes we sometimes deal with, my hope is that other strategy and risk analysts will be similarly convinced to adhere to the proven methods outlined by Hubbard.