
Business analytics model risk (part 2 of 5): saving the kingdom, one nail at a time…
Following from article 1 of 5 on Business Analytics Model Risk
Link to introductory header article (0 of 5)
For want of a nail the shoe was lost, for want of a shoe the horse was lost; and for want of a horse the rider was lost; being overtaken and slain by the enemy, all for want of care about a horse-shoe nail. – Benjamin Franklin ‘The Way to Wealth’ (1758)

The parable ‘for want of a nail’ dates back to the Middle Ages, having been linked to Richard III’s unhorsing and defeat. Its subsequent use has become synonymous with a general observation concerning both the interconnectedness of all things and the rule of unintended consequences, especially in their joint manifestation otherwise known as chaos theory’s ‘butterfly effect’: http://en.wikipedia.org/wiki/Butterfly_effect.
We can consider this adage an apt and illustrative admonishment concerning the general risk of ‘poor models’ leading to poor decision making consequences. Models, especially as encoded in enterprise IT decision support systems (DSSs), are, via the proliferation of analytics-driven management, fast becoming the crux of organizational decision making.
Here we address this phenomenon by attempting to catalog the distinct types of business analytics model risk factors faced by firms. Based on the principle that we cannot manage what we cannot measure (or specify discretely), this article attempts to sketch a ‘catalog’ or categorization of issues and items related to business analytics decision model risk. Whereas model risk is often discussed in the singular, in practice it is encountered in multiplicity: there are a number of both functional and methodological sources of model risk, some of which overlap, but which benefit from being broken out and distinguished separately to improve understanding.
Model risk is “the possibility that a financial institution suffers losses due to mistakes in the development and application of valuation models” (Morini, 2011). The finance industry version of the nail parable becomes something along the lines of: ‘for want of a morning coffee the calculation was off, for want of a calculation the spreadsheet was wrong, for want of a spreadsheet the traders were lost, for want of the traders the market was lost, all for want of care for morning coffee’.
While discussed most hotly by banking and investment institutions, model risk applies equally, and is of equal concern, to broader enterprise decision making. Firms from Wal-Mart to Disney to JetBlue are ardent and growing users of analytics-based decision models to drive and improve their core businesses. Although almost all business models ultimately come down to, or imply, financial outcomes, models are deployed to address an array of problems, from optimizing machine utilization in an assembly line, to reducing customer complaints / increasing satisfaction, to targeting advertising for particular communities, to making crucial personnel decisions.
Thus, while banking and high finance have assumed the banner charge on model risk, the broad topic applies to business analytics decision models more generally. High-finance discussions of model risk quickly, and rightly, descend into industry-specific minutiae: volatility smiles, payoff assumptions, capital structure arbitrage, and Libor swaptions. Such topics are important and significant to high finance, but are comprehensible mainly to specialists and vested parties.
We propose here that model risk has a broader and growing extra-banking industry context: that of the increasingly analytics-driven, model-based management of business enterprise more generally. A broad variety of firms are adopting business analytics decision models to drive and manage operations, strategic planning, marketing, customer service, and personnel decisions, often deploying advanced technologies and complex decision frameworks. Of concern is that increasingly complex models are being adopted with poor context for, and understanding of, the potential risks trapped within them.
A mere five years ago, ‘model risk’ was considered an obscure and pedantic topic. One might imagine it formerly debated chiefly by odd, pipe-smoking cabals of Wall Street quants and bitter, ignored professors of epistemology from small, unpronounceable private universities. How karma has swiftly changed us all into ruddy-cheeked enthusiasts of modeling minutiae! The spectacular failure of a rogue’s gallery of key financial and macroeconomic models has thrust this topic to center stage. One might easily envision Sergey Brin and Obama pontificating on the topic with Jamie Dimon in Davos. The U.S. mortgage bubble, the collapse of Lehman Brothers and Bear Stearns (among others), the subsequent Global Financial Crisis, and the continuing debate concerning how to structure a staged recovery have been, at their core, a history of ‘models gone wild’: the good, the bad, and the ugly.
The current boisterous debate concerning government debt levels and economic stimulus illustrates quite graphically the pitfalls of model risk: http://www.economist.com/news/finance-and-economics/21578704-mudslinging-between-economists-distraction-real-issues-dismal. Economists Carmen Reinhart and Kenneth Rogoff had previously been lauded by austerity-pushing politicians worldwide. Their research, based on statistical analysis claiming to equate high government debt levels with slowing economic growth, became a rallying cry for pro-Hayek / anti-Keynesian calls for smaller government, spending cuts, and lower public debt levels to climb out of the economic crisis. In democratic governments populated by tax- and debt-shy elected officials, this research provided a convenient excuse to slash government programs and investment instead of enacting stimulus measures.
The apparent ineffectiveness of, and pain inflicted by, harsh austerity measures, particularly in the EU, led to a review of the underlying research model. The review revealed several errors: 1) a coding error excluded relevant data from the sample, 2) additional relevant data was culled from the sample, and 3) a questionable method for weighting historical figures was used. See: http://blogs.lse.ac.uk/impactofsocialsciences/2013/04/24/reinhart-rogoff-revisited-why-we-need-open-data-in-economics/
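The first class of error is mundane but instructive: a range or indexing mistake silently drops observations, and every downstream statistic inherits the omission. The sketch below (hypothetical numbers, not the actual Reinhart-Rogoff dataset) shows how an off-by-a-few-rows selection can move a computed average by a full percentage point without raising any error:

```python
# Illustrative sketch: a coding error that silently drops rows from a
# sample can materially shift a computed average. The country labels
# and growth figures below are invented for illustration only.

# Average GDP growth (%) for high-debt country-years, by country.
growth_by_country = {
    "A": 2.6, "B": 1.0, "C": -1.8, "D": 0.8,
    "E": 2.4, "F": 4.6, "G": 4.4,
}

countries = sorted(growth_by_country)

# Intended calculation: average over all seven countries.
full_sample = [growth_by_country[c] for c in countries]
full_mean = sum(full_sample) / len(full_sample)

# Buggy calculation: a mis-specified range (think of a spreadsheet
# formula covering rows 1..5 instead of 1..7) excludes the last
# two countries -- no warning, no error, just a different answer.
buggy_sample = [growth_by_country[c] for c in countries[:5]]
buggy_mean = sum(buggy_sample) / len(buggy_sample)

print(f"full-sample mean growth:  {full_mean:.2f}%")   # 2.00%
print(f"buggy-sample mean growth: {buggy_mean:.2f}%")  # 1.00%
```

The point is not the specific figures but the failure mode: the buggy calculation runs cleanly and produces a plausible-looking number, which is precisely why such errors survive into published results.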
Beyond this, revisiting the Rogoff-Reinhart research has highlighted a common ‘model risk’ factor: the potential for incorrectly conflating demonstrated correlation with causation (i.e. assuming that because two phenomena coincide, one ‘causes’ the other). Their research implied that the seeming correlation between high public debt and slower growth indicated that raising public debt caused slower economic growth. However, it may be that slower economic growth often simply co-occurs with higher public debt as governments borrow in an attempt to stimulate the economy. It does not necessarily follow that rising public debt is the cause of, as much as an accompaniment to, slower growth.
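This reversed-causation scenario is easy to demonstrate with synthetic data. In the sketch below (all parameters are invented for illustration), debt levels are generated as a *response* to growth, mimicking governments that borrow more when growth is weak; yet a naive correlation between debt and growth still comes out strongly negative, exactly the pattern that could be misread as "debt causes slow growth":

```python
# A minimal sketch with synthetic data: a strong negative debt-growth
# correlation arises even though, by construction, debt does NOT cause
# slow growth -- causation runs the other way here.
import random

random.seed(42)

growth, debt = [], []
for _ in range(500):
    g = random.gauss(2.0, 1.5)                  # growth, driven by other factors
    d = 60.0 - 8.0 * g + random.gauss(0, 5.0)   # debt/GDP *responds* to growth
    growth.append(g)
    debt.append(d)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(debt, growth)
print(f"correlation(debt, growth) = {r:.2f}")  # strongly negative
```

The observational correlation alone cannot distinguish this data-generating process from one in which debt genuinely depresses growth; that distinction requires causal assumptions brought to the analysis from outside the data.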
While Rogoff-Reinhart evidenced both technical and conceptual errors, it is the latter type which is typically the more subtle, and thus more dangerous, risk. This is, essentially, the risk of injecting faulty theory, via assumptions, into the model and/or onto interpretations of model results. As Morini espouses in Understanding and Managing Model Risk: “model assumptions, not computational errors, were the focus of the most common criticisms against quantitative models in the crisis, such as ‘default correlations were too low’ ” (2011).
It is not without due cause that the topic of model risk has been thrust from the bowels of intelligentsia boiler rooms to the relative patter of market-watch chat shows. Without being alarmist, the stakes are high. Firstly, while some rather poor decisions have already been made on the basis of broken models, more could yet follow. At a fundamental level, quite dramatic errors have been made over the past two-and-a-half decades due to faulty decision making processes which at some level rested upon model-based procedures: the Challenger and Columbia space shuttle disasters, the collapse of Long-Term Capital Management, derivatives-based investment melt-downs, the Dotcom investment bubble, intelligence failures surrounding 9/11, the lead-up to the Iraq War, friendly fire incidents in military theaters, the Hurricane Sandy disaster, and numerous recent trading scandals. Understanding and counteracting analytics model risk seeks to avoid errors of decision making which have outsized consequences.
Secondly, there is an obverse scenario possible: given the increase in analytics model complexity and the possibility of swelling errors, there is the danger of a rejection of the advanced analytics paradigm, of ‘throwing the baby out with the bathwater’. There are powerful new tools and techniques for conducting advanced data analysis. However, it may occur that business leaders ultimately abandon the effort to manage complex decision making models and processes for their ungainliness and propensity for error when not properly validated. A recent post addressed this topic: https://sctr7.com/2012/12/27/decision-management-hitting-natural-human-limits-and-what-to-do-about-it/
Implicit in the concern of rejecting advanced analytics is the notion that a decision model yields declining marginal utility: as the effort dedicated to model management (principally design, validation, and implementation) increases, the corresponding value returned decreases (see Figure 1 below). The danger is that organizations deem complex decision making too unmanageable, too costly, and too risky, and therefore retreat to a new ‘Dark Age’ driven by traditional intuition-based management. Such a retrenchment to traditional intuition-driven, top-down management paradigms contains its own danger, as addressed in another recent post: https://sctr7.com/2013/05/19/the-once-and-future-king-is-anglo-saxon-business-culture-its-own-worst-enemy/
Figure 1: Value trade-off in model overhead (Ansoff & Hayes, 1973; Balci, 1998; Sargent, 1996; Shannon, 1975)
Given these risks, the following article will look more closely at issues related to model scoping, after which a working categorization of business analytics model risks will be offered.
End of article 2 of 5
Link to next article in series: business analytics model scoping and complexity (article 3 of 5)
Link to introductory / header article (0 of 5)
REFERENCES
Ansoff, H. I., & Hayes, R. L. (1973). Roles of models in corporate decision making. Paper presented at the Sixth IFORS International Conference on Operational Research, Amsterdam, Netherlands.
Balci, O. (1998). Verification, Validation and Testing: Principles, Methodology, Advances, Applications, and Practice. In J. Banks (Ed.), Handbook of Simulation. New York: John Wiley & Sons.
Derman, E. (1996). Model Risk. Quantitative Strategies Research Notes. Goldman Sachs. http://www.ederman.com/new/docs/gs-model_risk.pdf
Hubbard, D. W. (2009). The Failure of Risk Management: Why It’s Broken and How to Fix It. New York: John Wiley & Sons, Kindle Edition.
Morini, M. (2011). Understanding and Managing Model Risk: A Practical Guide for Quants, Traders and Validators (The Wiley Finance Series). Wiley, Kindle Edition.
Sargent, R. G. (1996). Verifying and Validating Simulation Models. Paper presented at the 1996 Winter Simulation Conference, Piscataway, NJ.
Shannon, R. E. (1975). Systems Simulation: The Art and Science. Englewood Cliffs, NJ: Prentice-Hall.