Business analytics model risk (part 0 of 5): framing model risk – the complexity genie and the challenge of deciding on decision models


Introduction to a series of five articles on model risk

Here we introduce a series of five articles seeking to frame, define, and categorize business analytics model risk.  The intention is to propose processes and practices for strengthening organizational decision model risk mitigation.  The series treats the following topics in sequence:

  1. Validation of business analytics models
  2. Framing the business analytics model risk problem
  3. Business analytics model scoping and complexity
  4. Categorizing business analytics model risk
  5. Practical business analytics model risk mitigation (pending)

The topic of model risk has rapidly come to the fore as a central concern for large organizations.  Growing complexity in business decision making and an increasing reliance on IT-based decision systems, many of which become ‘black boxes’, have raised the stakes concerning model risk.  The topic has been of particular concern in the finance and banking industries, as poor models were identified as a central factor in the U.S. Mortgage Crisis and the subsequent Global Financial Crisis.  The still-unwinding Global Financial Crisis has graphically demonstrated the serious repercussions of ‘bad’ (i.e. incomplete, faulty, or misleading) business decision-making models.

As broader industries and organizations, beyond banking and finance, rapidly adopt complex model-based decision making methods, we are concerned with model risk more generally.  In particular, the growth of ‘business analytics’ and ‘Big Data’ as structured approaches to complex business decision making has raised the stakes for improving decision model quality.  Complex decision models often become ‘baked into’ systems, where subsequent overreliance can compound errors.  ‘Bad’ models are quickly ‘hidden’, or subsumed, inside the complex systems and procedures of the modern large enterprise.

Model risk is here specified as ‘business analytics (BA) model risk’ to distinguish it from financial model risk (market and economic decision and risk models specific to finance and banking industry applications), which otherwise dominates the current discourse.  This recognizes that much of the literature focuses on model risk in the finance industry, but that the scope of the model risk problem is larger and broader (spanning all industries) and thus deserves a more general discussion and treatment.

Thus, when speaking of model risk, we refer to organizational decision making in large, complex organizations generally.  Regardless of industry, organizational decision models often come down to financial risk (the near-universal measure of organizational health and performance).  Also, although decision model implementation may be purely organizational, that is, not associated with IT systems specifically, we are particularly concerned with decision models as encoded into IT systems:  business intelligence (BI), decision support systems (DSS), manufacturing control systems, predictive machine learning, etc.

In particular we are concerned here with highly complex ‘analytics’ decision models which become encoded in IT systems (algorithmically or otherwise, in terms of automated computational data processing and procedures).  This recognizes that large, complex organizational decision making is increasingly automated by IT systems which encode and embed decision models.  The term ‘black box’ refers to the tendency of such systems to trap and hide potentially risky assumptions within models.
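As a minimal, hypothetical sketch of this ‘black box’ tendency: the function, parameter name, and thresholds below are all invented for illustration, but they show how a model assumption (here, an assumed default rate) gets baked into a decision system, after which callers see only the approve/decline output and never the assumption itself.

```python
# Hypothetical illustration: a model assumption 'baked into' a decision system.
# The 2% default-rate figure is an abstraction chosen at design time; once
# embedded, it silently drives every decision the system makes.

ASSUMED_ANNUAL_DEFAULT_RATE = 0.02  # hidden assumption, invisible to callers

def approve_loan(loan_amount: float, annual_income: float) -> bool:
    """Approve if expected loss stays under 1% of income (illustrative rule)."""
    expected_loss = loan_amount * ASSUMED_ANNUAL_DEFAULT_RATE
    return expected_loss < 0.01 * annual_income

# Callers see only a yes/no answer -- the 'black box' in miniature:
print(approve_loan(10_000, 50_000))   # expected loss 200 vs. threshold 500
print(approve_loan(100_000, 50_000))  # expected loss 2,000 vs. threshold 500
```

If economic conditions shift and true default rates double, every decision made by this function is quietly miscalibrated, yet nothing in its observable behavior announces the stale assumption.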

Organizational decision making is a topic which is difficult to delineate.  There are many modes and methods for decision making in large organizations.  In particular, some champion the role of intuition versus process-focused decision making.  Kahneman and Klein have addressed this topic by specifying conditions where intuition-based decision making can be useful in their article ‘Conditions for intuitive expertise: a failure to disagree’.  They stipulate that “evaluating the likely quality of an intuitive judgment requires an assessment of the predictability of the environment in which the judgment is made and of the individual’s opportunity to learn the regularities of that environment. Subjective experience is not a reliable indicator of judgment accuracy.”

Kahneman and Klein assert that intuition is valuable in very specific venues: environments where experience trumps available data.  A linked implication is that such venues are rapidly disappearing: the growing availability of data combined with swelling business complexity creates environments where intuition is a poor alternative to structured data-focused insight.  Growing business complexity in particular increasingly reduces the type of venues where intuition is a preferable decision modality.

Multi-venue complexity is increasingly the status quo for large institutions.  Business complexity entails, among other factors, combinations and permutations of:

  • interlinked and extended supply chains (i.e. component ingredients and commodities sourced from multiple 3rd parties and providers);
  • interconnected global financial/funding infrastructure (i.e. economic interdependency of debt and capital providers);
  • trans-national regulatory venues (i.e. tax and incentive regimes);
  • consumer/market complexity (i.e. broadened consumer choice and global market competition);
  • labor outsourcing (i.e. offshore task componentization);
  • and information overload (i.e. availability of immense datasets).

We arrive thus at a situation, promulgated by globalization and technological development, where it is difficult to ‘put the complexity genie back in the bottle’.  There is a temptation to retreat to intuition, yet intuition itself is increasingly ineffective given a complex of factors that transcends the ability of individuals to make sound decisions.  We must progress in the effort to make better decisions in inherently complex environments, yet the decision methods of the past are no longer adequate to the challenge ahead.

The theme of this series thus becomes: we are faced with increasingly difficult business decisions which can only be attacked with structured decision approaches, particularly those which combine large dataset analysis with computational approaches.  This sentiment has been roughly popularized as ‘Big Data’:  the structured practice of ‘business analytics’ in attacking large, complex datasets.  However, applying structured decision making itself requires decision making via models.  The problem is thus ‘deciding upon decision models’.  The growing challenge for large enterprise is to specify robust methods for designing, validating, and implementing ‘analytical’ decision models in order to contend with the ‘complexity genie’.

Undergirding this assessment of BA model risk are two key, and troubling, assertions: 1) models are by nature ‘wrong’, and 2) no model can be comprehensively ‘proven to be right’ (validated).  Quoting George Box, “essentially, all models are wrong, but some are useful”.  In addition to being ‘wrong’ at some level, models can never be comprehensively proven ‘right’: formal validation (i.e. resolute scientific proof) is methodologically and epistemologically impossible (Balci, 1998; Pidd, 2004).  The implied objective is to determine where models are ‘useful enough’ while understanding and managing their inherent ‘wrongness’ (limitations implied by and inherent to their boundary conditions as willful abstractions).
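Box’s dictum can be made concrete with a small numerical sketch (an invented illustration, not drawn from the source): approximate compound growth exp(r) with the deliberately simplified linear model 1 + r.  The model is strictly ‘wrong’ everywhere, yet ‘useful enough’ inside its boundary conditions (small r) and badly misleading outside them.

```python
import math

# A deliberately 'wrong' but sometimes 'useful' model: approximate
# compound growth exp(r) with the linear abstraction 1 + r.

def true_growth(r: float) -> float:
    return math.exp(r)    # the 'real' underlying process

def model_growth(r: float) -> float:
    return 1.0 + r        # the simplified decision model

# Relative error is tiny for small r (inside the model's boundary
# conditions) and explodes for large r (outside them):
for r in (0.01, 0.05, 0.5, 2.0):
    err = abs(true_growth(r) - model_growth(r)) / true_growth(r)
    print(f"r={r:>4}: relative error {err:.1%}")
```

At r = 0.01 the relative error is a fraction of a percent; at r = 2.0 the model understates growth by more than half.  The model’s ‘wrongness’ is constant in nature but not in consequence, which is precisely why boundary conditions must be understood and managed.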

These are important assertions, but perhaps not immediately intuitive, and thus shall be ‘unwrapped’ carefully in this series.  The resulting main assertions, and central problems, concerning business decision models that will be explored and treated in this series are:

  1. All models, being abstractions of reality, are essentially ‘wrong’ at some level;
  2. As abstractions, models can never be demonstrated to be ‘scientifically true’ (i.e. they cannot be exhaustively verified):  it is not possible to fully validate a complex business model;
  3. Managing highly complex models yields diminishing business utility, implying commercial pressure to settle for simple ‘good enough’ models;
  4. Designing and validating commercial models is, in the end, an exercise in organizational confidence building (establishing comprehensive model ‘rightness’ or ‘wrongness’ being both practically and methodologically impossible);
  5. Commercial enterprise is particularly susceptible to the twin influences of agency factors (i.e. power politics) and behavioral factors (i.e. decision biases which emerge in complex environments, particularly under time pressure and a lack of robust information); and
  6. Robustness in designing business decision models (deciding about decision models) ultimately comes down to engineering better organizational practices and processes to ‘weed out’ the inherent tendencies of groups and individuals to sabotage ‘best practices’ in organizational decision making (both overt/explicit and covert/tacit).

The following article begins with a detailed exploration of the impossibility of comprehensive model validation.  This presents a challenge for business analytics practitioners:  how do we establish ‘usefulness’, or general, practical ‘good enough-ness’?  That is otherwise the objective of this series.  How do we best decide on our decision models, given that models are ‘wrong’ and cannot be comprehensively proven?  This core challenge specifies the base conditions for accommodation: understanding and admitting the problem fully is the first step towards addressing it in a practical sense.

End of introduction to a series of five articles on model risk

LINK TO NEXT ARTICLE IN SERIES ON MODEL RISK (1 of 5): When is a business analytics model ‘validated’?


Ansoff, H. I., & Hayes, R. L. (1973). Roles of models in corporate decision making. Paper presented at the Sixth IFORS International Conference on Operational Research, Amsterdam, Netherlands.

Balci, O. (1998). Verification, Validation and Testing: Principles, Methodology, Advances, Applications, and Practice. In J. Banks (Ed.), Handbook of Simulation. New York: John Wiley & Sons.

Derman, E. (1996). Model Risk. Quantitative Strategies Research Notes. Goldman Sachs.

Hubbard, Douglas W. (2009). The Failure of Risk Management: Why It’s Broken and How to Fix It. John Wiley and Sons: Kindle Edition.

Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Kahneman, D., & Klein, G. (2009). Conditions for Intuitive Expertise: A Failure to Disagree. American Psychologist, 64(6), 515–526.

Morini, Massimo (2011). Understanding and Managing Model Risk: A Practical Guide for Quants, Traders and Validators (The Wiley Finance Series). Wiley: Kindle Edition.

Pidd, M. (2004). Computer Simulation in Management Science. New Jersey: John Wiley & Sons, Ltd.

Sargent, R. G. (1996). Verifying and Validating Simulation Models. Paper presented at the 1996 Winter Simulation Conference, Piscataway, NJ.

Shannon, R. E. (1975). Systems Simulation: The Art and Science. Englewood Cliffs, NJ: Prentice-Hall.


About SARK7

Scott Allen Mongeau (@SARK7), an INFORMS Certified Analytics Professional (CAP), is a researcher, lecturer, and consulting Data Scientist. Scott has over 30 years of project-focused experience in data analytics across a range of industries, including IT, biotech, pharma, materials, insurance, law enforcement, financial services, and start-ups. Scott is a part-time lecturer and PhD (abd) researcher at Nyenrode Business University on the topic of data science. He holds a Global Executive MBA (OneMBA) and Masters in Financial Management from Erasmus Rotterdam School of Management (RSM). He has a Certificate in Finance from University of California at Berkeley Extension, a MA in Communication from the University of Texas at Austin, and a Graduate Degree (GD) in Applied Information Systems Management from the Royal Melbourne Institute of Technology (RMIT). He holds a BPhil from Miami University of Ohio. Having lived and worked in a number of countries, Scott is a dual American and Dutch citizen. He may be contacted via Twitter: @sark7. All posts are copyright © 2020 SARK7. All external materials utilized imply no ownership rights and are presented purely for educational purposes.
