Citizens United Decreases Governance Effectiveness in Both Government and Business

The challenge with Citizens United v. Federal Election Commission (2010) is how to approach it. One can view it through the lens of ethics (Silver, 2014), through which it is a terrible decision. One can examine it from an empirical standpoint and, even given the limited time since it was decided, there's already sufficient evidence to suggest it is (further) corrupting the electoral process in the United States (Spencer & Wood, 2014). One can also view it from a governance standpoint, examining whether it will increase or decrease the efficacy of boards over their corporations (Coates IV, 2012). Since the narrow definition of the role of the board is to increase corporate value, it is fairly straightforward to measure the effects of Citizens United, and, according to Coates, those effects have been surprisingly negative.
The lens through which I want to address Citizens United is that of electoral and governance accountability: does Citizens United increase or decrease our ability to hold elected officials and corporations accountable for their actions? I think, in spite of some commentators' claims to the contrary (Bedford, 2010; Meyer, 2012), Citizens United exacerbated the already existing problem of what Monks calls "Drone Corporations" (Monks, 2013): those in which ownership is too diffuse to exert any power over the board, which consequently vests more power in the managerial class. By giving corporate executives even more unrestrained power to lobby, bribe, hire, and otherwise influence political decision-making, the decision lets them continue to tip the scales away from workers and toward themselves, taking influence away from the electorate and putting it squarely in the hands of deep-pocketed business interests.
First, electoral accountability. Citizens United takes the previous legal fiction of corporate personhood and makes it both a metaphysical and a legal fact. The decision gives corporations the power to anonymously spend unlimited amounts of money influencing the electoral process, based on the First Amendment's free speech guarantees. The managerial class has done a very effective job over the last 30 years of increasing the amount of money that goes into its own pockets while decreasing worker pay (McCall & Percheski, 2010). Through the destruction of unions, declining union participation, and race-to-the-bottom labor arbitrage, the rich have become richer and the poor poorer. While money does not determine elections, campaign contributions have a significant effect on a candidate's chances of winning in competitive, non-incumbent races (Erikson & Palfrey, 1998), and money buys influence and access. With corporations having significantly more money than individuals, they will be able to significantly affect elections, and elected representatives will feel most accountable to them for their continued support. Citizens United thus takes power away from actual people and decreases electoral accountability.
Nearly as concerning is the decision's effect on accountability in corporate governance. The problems of drone boards, interlocking directorates, powerful chairmen, and passive, diffused investors, all of which decrease the power of the shareholder and increase the power of the managerial class, existed before Citizens United, but the decision gives managers increased power in the halls of government (Monks, 2013). An excellent example of how this affects governance is the Business Roundtable, an organization of CEOs of large (mostly drone) corporations, fighting all "say-on-pay" provisions of the Dodd-Frank Act, as toothless and merely advisory as those provisions are. By giving managers even more power, effective governance becomes ever more difficult. The managerialism that has grown in the U.S. for the past three decades (Locke & Spender, 2011) seems likely to increase significantly as a result of Citizens United. Thus both corporations and the government move further out of our democratic control.

References
Bedford, K. (2010). Citizens United v. FEC: The Constitutional Right That Big Corporations Should Have But Do Not Want. Harvard Journal of Law & Public Policy, 34(2), 639–661.
Coates IV, J. C. (2012). Corporate Politics, Governance, and Value Before and After Citizens United. Journal of Empirical Legal Studies, 9(4), 657–696. doi:10.1111/j.1740-1461.2012.01265.x
Erikson, R. S., & Palfrey, T. R. (1998). Campaign Spending and Incumbency: An Alternative Simultaneous Equations Approach. The Journal of Politics, 60(2), 355–373. doi:10.2307/2647913
Citizens United v. Federal Election Commission, 558 U.S. 310 (2010).
Locke, R., & Spender, J.-C. (2011). Confronting Managerialism: How the Business Elite and Their Schools Threw Our Lives Out of Balance (Economic Controversies). London: Zed Books.
McCall, L., & Percheski, C. (2010). Income Inequality: New Trends and Research Directions. Annual Review of Sociology, 36(1), 329–347. doi:10.1146/annurev.soc.012809.102541
Meyer, J. M. (2012). The Real Error in Citizens United. Washington and Lee Law Review, 69(4), 2171–2232.
Monks, R. A. G. (2013). Citizens DisUnited: Passive Investors, Drone CEOs, and the Corporate Capture of the American Dream. Miniver Press.
Silver, D. (2014). Business Ethics After Citizens United: A Contractualist Analysis. Journal of Business Ethics, 127, 385–397. doi:10.1007/s10551-013-2046-y
Spencer, D., & Wood, A. (2014). Citizens United, States Divided: An Empirical Analysis of Independent Political Spending. Indiana Law Journal, 89(1), 315–372.

Book Review: The Failure of Risk Management by Douglas Hubbard

Douglas Hubbard, in The Failure of Risk Management: Why It’s Broken and How to Fix It, presents a strongly worded critique of qualitative risk management methodologies. For a subject that might be perceived as being as dry as risk management, Hubbard’s disdain for most of those who practise non-mathematical risk analysis keeps this ultimately persuasive book entertaining. While his vitriol sometimes detracts from the rhetorical effectiveness of his argument, one comes to understand his frustration with those who peddle unproven, possibly worse-than-useless risk analysis techniques. This is a field, as we’ve seen recently with both the global financial meltdown and the BP Gulf of Mexico oil spill, where failures have the potential to cause effects far beyond individual firms. For all his focus on debunking common qualitative methods, though, Hubbard doesn’t stop there. He ultimately mounts a defence of quantitative modelling (so excoriated by many in the wake of the sub-prime debacle) and makes some very useful, practical suggestions on how to use quantitative methods (such as Monte Carlo simulation and Bayesian analysis) effectively.

In order to examine how successful risk management has been, Hubbard divides its approaches into four categories based on who originally devised them:

* Actuaries
* War Quants
* Economists
* Management Consultants

Hubbard sees the first three as comprising progressively more sophisticated methods of analysis. From the actuarial tables of insurance companies, to the Probabilistic Risk Analysis (PRA) devised to predict failures in complex war logistics (and which led ultimately to Monte Carlo analysis, of which Hubbard is a devotee), to the Modern Portfolio Theory (MPT) of Harry Markowitz and the options theory of Black, Scholes, and Merton, Hubbard sees the increasingly probabilistic yet thoroughly quantitative views of risk as being all to the good. The problem with these methods, however, is that they are not always intuitive and can be difficult to perform (anyone who has ever tried to do Monte Carlo analysis without the aid of a computer will attest to this).
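To make concrete why Monte Carlo analysis is painful by hand but trivial by machine, here is a minimal sketch (mine, not from the book; all distributions and numbers are invented): estimating the chance a project blows its budget when its cost is the sum of three uncertain line items.

```python
import random

def simulate_project_cost(trials=100_000, budget=1_150_000):
    """Monte Carlo estimate of total project cost from three
    uncertain line items (all distributions are illustrative)."""
    totals = []
    for _ in range(trials):
        labour = random.normalvariate(500_000, 75_000)  # bell curve
        hardware = random.uniform(200_000, 400_000)     # flat range
        delay_penalty = random.expovariate(1 / 50_000)  # long right tail
        totals.append(labour + hardware + delay_penalty)
    totals.sort()
    overrun = sum(t > budget for t in totals) / trials
    print(f"median cost: {totals[trials // 2]:,.0f}")
    print(f"P(cost > budget): {overrun:.1%}")

simulate_project_cost()
```

The output is a distribution over outcomes rather than a single score, which is precisely what a 1-to-5 risk rating cannot give you.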

This difficulty leads to what Hubbard thinks is the real problem with risk management: management consultants. As a consultant myself, I try not to take this personally, but Hubbard’s analysis is spot-on. The ability to make people believe real analysis is happening, while distilling it into easy, Powerpoint-ready chunks, is the real talent of many consultants. Drawing from his own experience as an MBA consultant with Coopers & Lybrand in the 80s, he points out how “structured methodologies,” which have all the appearance of being based on proven theories and which are easily graspable by non-technical senior managers, not only do no good for their clients but mask serious risk issues. An example is the typical risk matrix chart, which scores likelihood against impact on a coloured grid.

He argues that there is no evidence that these sorts of analytical tools do anything constructive, beyond giving management the feeling that something is being done about risk. He ironically offers a set of suggestions on how consultants should sell their useless wares:

Sell what feels right. Clients will not be able to differentiate a placebo effect from real value in most risk management methods. The following tricks seem to work to produce the sense of value:

  • Convert everything to a number, no matter how arbitrary. Numbers sound better to management. If you call it a score, it will sound more like golf, and it will be more fun for them.
  • As long as you have at least one testimonial from one person, you are free to use the word proven as much as you like.
  • Use lots of “facilitated workshops” to “build consensus.”
  • Build a giant matrix to “map” your procedure to other processes and standards. It doesn’t really matter what the map is for. The effort will be noticed.
  • Optional: Develop a software application for it. If you can carry on some calculation behind the scenes that they don’t quite understand, it will seem much more like magic and, therefore, more legitimate.
  • Optional: If you go the software route, generate a “spider diagram” or “bubble chart.” It will seem more like serious analysis.

The net result of this has been that the most popular risk management methodologies today are developed in complete isolation from more sophisticated risk management methods known to actuaries, engineers, and financial analysts. Whatever the flaws of some of these quantitative methods, the methods developed by the management consultants are the least supported by any theoretical or empirical analysis.

(You see what I mean about how his presentation lacks even the appearance of impartiality.)

Hubbard traces the difficulty with risk management, and the ability of people to imagine that non-quantitative methods are in any way useful, to the lack of consistent definitions of the terms uncertainty, ignorance, unknowability, and risk. While this might seem like an esoteric discussion for what aims to be a practical book on how to fix risk management, it really does get to the heart of the matter. I have to admit that it’s refreshing to see someone take on the specialist definitions of risk because, as someone who studies risk, I’ve been confounded by the various definitions from finance, project management, general line management, and others. I remember first encountering the finance definition of risk as equivalent to volatility when dealing with the Capital Asset Pricing Model (CAPM) in M&A consulting and thinking, “ok, that’s a stipulative definition I’m not familiar with, but it doesn’t really seem to mean what normal people mean when they use the word ‘risk’”. Hubbard eventually argues for a definition that (a rough formalisation follows the list):

  1. includes some probability of a loss
  2. involves only losses, not gains
  3. is not synonymous with volatility (outside finance)
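Taken together, those three criteria reduce risk to something like probability-weighted loss. A rough formalisation, over discrete loss scenarios (this is my gloss, not Hubbard’s own notation):

```latex
% Risk over discrete loss scenarios i, each with probability p_i
% and loss magnitude L_i >= 0 (gains are excluded by criterion 2):
\mathrm{Risk} = \sum_{i} p_i \, L_i
```

Nothing in that sum rewards or penalises upside variance, which is exactly the point of excluding volatility from the definition.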

The ultimate aim of this book is to argue in favour of quantitative and probabilistic methods of describing and analysing risk. The early parts of the book, which debunk qualitative methods and clarify the confused (and confusing) terminology in the field, are necessary to clear the way for him to advocate methods like Monte Carlo modelling and Bayesian analysis. Along the way, however, I think he makes a very good point about the usefulness of quantitative models and their defensibility with regard to the financial crisis that sprang from the sub-prime debacle of 2007-2008. Since I was VP of strategy for one of the banks most affected by the meltdown and a direct customer of those models, I’ve spent a lot of time thinking about the models the investment bankers used to predict the viability of certain kinds of loans. I specifically remember sitting in a meeting with one such group of bankers who were explicitly asking us for zero-down, no-documentation loans, thinking, “that’s crazy…there’s no reason for a no-documentation loan except to lie about something. If you’ve got income, you’ve got documentation”. When I inquired how such a loan could possibly work, they assured me that they had mathematical models showing that, if packaged together in a way that spread the risk throughout a securitised pool of loans, it ended up being safe as houses. As it were. I was new in my job, so I kept quiet and figured there was just something I wasn’t getting.
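As an aside on what “Bayesian analysis” buys you here, a toy sketch (mine, with invented numbers): updating an estimated default rate on a pool of loans as actual defaults come in, instead of treating the historical base rate as fixed. The beta-binomial update below is the standard conjugate shortcut.

```python
def update_default_rate(alpha, beta, defaults, loans):
    """Posterior (alpha, beta) for a beta-binomial model after
    observing `defaults` defaults out of `loans` loans."""
    return alpha + defaults, beta + (loans - defaults)

# Prior belief: roughly a 2% default rate, weakly held.
alpha, beta = 2, 98

# Invented quarterly evidence: (defaults, loans) per quarter.
for defaults, loans in [(5, 100), (9, 100), (22, 100)]:
    alpha, beta = update_default_rate(alpha, beta, defaults, loans)
    print(f"posterior mean default rate: {alpha / (alpha + beta):.1%}")
```

Each quarter of worsening evidence drags the estimate upward, which is the kind of disciplined updating the book argues for.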

But when the sub-prime market imploded and took the world economy with it (after I’d moved on to consulting), a great deal of criticism was pointed at the investment bankers and their mathematical models. People like Nassim Nicholas Taleb (author of The Black Swan and Fooled by Randomness) and Michael Lewis (author of The Big Short and Liar’s Poker) argued that the models themselves were flawed because, among other things, they didn’t and couldn’t take into account uncertainty and randomness. In fact, Taleb argues, the models can’t possibly adequately describe risk and therefore should be scrapped in favour of more intuitive approaches that take into consideration extremely unlikely events. Blind adherence to complex mathematical models with little correlation to the real world, many people argued, is what caused the sub-prime meltdown, as it had caused the crash-and-burn of Long-Term Capital Management a decade before.

It turns out, Hubbard and others have argued persuasively, that it wasn’t the models themselves that were wrong. Used properly and with the right data, quantitative risk models would have been able to predict that the sub-prime market would fail. The problem was that the banks were using models with short-term inputs (data from the previous decade, when the real estate market did nothing but rise) and unwarranted assumptions (if there is a correction, it will be minor). As I’ve pointed out on my blog before, the chief mistake of those investment bankers was assuming that the market would always go up. Without that assumption (which seemed validated by the data of the late 90s and early 00s), the models would have shown that there was a good chance the sub-prime securities would go very bad, indeed.
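To illustrate how decisively that one assumption drives the output (my sketch, not a reconstruction of any bank’s actual model), run the same toy simulation of a house-price path under two different drift assumptions and compare the chance a zero-down loan ends up underwater:

```python
import random

def prob_underwater(drift, volatility=0.10, years=5,
                    loan_to_value=1.0, trials=50_000):
    """Share of simulated price paths ending below the loan balance.
    A zero-down loan means the loan starts at 100% of the price."""
    underwater = 0
    for _ in range(trials):
        price = 1.0
        for _ in range(years):
            price *= 1 + random.normalvariate(drift, volatility)
        if price < loan_to_value:
            underwater += 1
    return underwater / trials

# Same model, two input assumptions (numbers are illustrative):
print(f"market always rises (+8%/yr): {prob_underwater(0.08):.1%}")
print(f"flat market, same volatility (0%/yr): {prob_underwater(0.00):.1%}")
```

The model structure is identical in both runs; only the input assumption changes, and the predicted risk moves dramatically.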

Hubbard understands that even the most sophisticated and structurally correct risk models are susceptible to bad inputs, which is why he spends much of the later portion of the book describing how to avoid the errors in judgment and estimation that lead to such inputs. In doing so, he significantly strengthens his case for building these complex models. He even takes on some of the issues raised by behavioural economists, because if people do not act rationally, it is hard to build models suggesting that they do. His answer: while people might not act rationally, they act predictably, which is all one really needs to build models.
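One of the correctives he favours is calibration training: checking whether an estimator’s stated 90% confidence intervals actually contain the true value about 90% of the time. A minimal scoring sketch (the intervals and actuals below are invented):

```python
# Each pair is an estimator's stated 90% confidence interval;
# `actuals` holds the values that later turned out to be true.
intervals = [(10, 50), (100, 300), (2, 8), (40, 90), (5, 25)]
actuals = [35, 450, 6, 120, 12]

hits = sum(lo <= actual <= hi
           for (lo, hi), actual in zip(intervals, actuals))
print(f"hit rate: {hits / len(intervals):.0%} (target: 90%)")
# A hit rate well below 90% signals overconfidence: the intervals
# are too narrow, and they will feed bad inputs into otherwise
# sound quantitative models.
```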

The conclusion of The Failure of Risk Management: Why It’s Broken and How to Fix It deals with several practical matters that help people build accurate, complete models for risk, including creating cultural and management incentives for accuracy. This is a must-read for risk professionals, especially those of us for whom Hubbard reserves his special disdain. I have already found myself returning to some of the tools I used in the past (Monte Carlo simulations) and exploring new ones (Bayesian analysis). Given the stakes we sometimes deal with, my hope is that other strategy and risk analysts will be similarly convinced to adhere to the proven methods outlined by Hubbard.