When Models Are Wrong
Decision makers in economics, politics, and finance always act under uncertainty. But there are different kinds of uncertainty. It is one thing not to know how inflation will evolve in the coming months, and quite another not to know whether the models we use to forecast it make any sense.
Consider a central banker who must set interest rates: macroeconomic models will be used to estimate the effects of the decision. But those models, with all their equations, assumptions, and scenarios, are simplified and partial versions of reality. They may be "useful," but they are inevitably wrong. Or consider a government that must decide whether and how to tax CO₂ emissions. It relies on climate scenarios constructed by different scientific communities using different models. But who can say whether these models truly capture the most relevant dynamics? And yet decisions must be made.
But what happens, and how should we proceed, when the available models not only yield probabilistic rather than deterministic predictions, but may also be misspecified, that is, they omit variables needed to represent reality with sufficient fidelity?
This question is addressed in the study "Making Decisions Under Model Misspecification," published in the Review of Economic Studies by Simone Cerreia-Vioglio (Bocconi University and IGIER), Lars Peter Hansen (University of Chicago, 2013 Nobel Prize in Economics), Fabio Maccheroni and Massimo Marinacci (Bocconi University and IGIER).
"Many approaches in decision theory address model uncertainty by assuming that among alternative models considered, there is a true one that explains reality," explains Cerreia-Vioglio, Full Professor in the Bocconi Department of Decision Sciences. "But in practice, we know that, very likely, none of the available models is the true model. They are all approximations. Our work attempts to build a more rigorous and prudent way to make decisions in such context."
A safety belt against flawed models
The study proposes a new decision criterion that takes into account not only the variety of available models — as "ambiguity-averse" approaches already do — but also the possibility that all of them are systematically flawed. To do so, it distinguishes between:
- structured models, i.e. those on which the decision maker has built hypotheses based on data, theories and assumptions (for example, a climate model or a financial model with uncertain parameters);
- unstructured models, constructed solely as "stress tests" to assess the consequences should the structured models fail.
The idea is to introduce a penalty that grows with the "statistical distance" between a candidate model and the set of structured ones: far-fetched alternatives are still allowed to matter, but less and less the further they stray. The penalty thus serves as a form of protection against specification errors.
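In stylized form, the shape of the criterion can be sketched as follows. This is a sketch under the assumption that the statistical distance is relative entropy; the paper works with a more general axiomatic construction.

```latex
V(f) \;=\; \min_{p \in \Delta} \Big\{ \int u(f)\,\mathrm{d}p \;+\; \lambda \, \min_{q \in Q} R(p \,\|\, q) \Big\}
```

Here Δ is the set of all probability models (structured or not), Q ⊂ Δ is the set of structured models, R(p‖q) is the relative entropy between p and q, and λ > 0 calibrates how much the decision maker trusts the structured models. A model far from every structured one incurs a large penalty, so it can drag down the evaluation of an act f only if f performs very badly under it.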
"These unstructured models aren't credible in the usual sense: we wouldn't use them to forecast GDP or interest rates," explains Cerreia-Vioglio, "but they help us understand how sensitive a decision is to errors hidden in structured models. In a sense, they act as a safety belt."
A concrete example: investing in an uncertain economy
In the paper, the authors present an example from macroeconomics: an investor must choose an intertemporal consumption and investment plan, but the economy follows an uncertain technology path, represented by an unknown parameter. The different values of this parameter correspond to the structured models. The decision maker, however, isn't confident that the true parameter is actually among the values considered.
Applying the proposed criterion, the investor evaluates the possible options by taking into account not only the average return predicted by the "known" models, but also the possibility that they are poorly calibrated, and penalizes plans that would prove very costly if the models were actually incorrect.
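To see the logic at work, here is a minimal numerical sketch in Python. Everything in it is hypothetical: the payoffs, the two structured models, the penalty weight, and the crude Monte Carlo minimization, which merely stands in for the paper's analytical treatment.

```python
import numpy as np

# Toy sketch of misspecification-robust evaluation. All numbers
# (payoffs, models, penalty weight) are hypothetical illustrations.

# Utility payoffs of two plans in three states of the economy.
plans = {
    "aggressive": np.array([5.0, 1.0, -3.0]),
    "cautious":   np.array([2.0, 1.5,  0.5]),
}

# Structured models: candidate distributions over the three states,
# e.g. indexed by an uncertain technology parameter.
structured = [
    np.array([0.5, 0.3, 0.2]),
    np.array([0.4, 0.4, 0.2]),
]

lam = 1.0  # penalty weight: larger = more trust in the structured models


def rel_entropy(p, q):
    """Relative entropy R(p || q), used here as the statistical distance."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))


def robust_value(payoff, n_draws=100_000, seed=0):
    """Monte Carlo search for the minimum, over unstructured models p,
    of E_p[payoff] + lam * distance(p, structured set)."""
    rng = np.random.default_rng(seed)
    worst = np.inf
    for _ in range(n_draws):
        p = rng.dirichlet(np.ones(3))  # a random unstructured model
        penalty = min(rel_entropy(p, q) for q in structured)
        worst = min(worst, float(payoff @ p) + lam * penalty)
    return worst


for name, payoff in plans.items():
    print(f"{name}: robust value = {robust_value(payoff):.3f}")
```

Under both structured models the aggressive plan has the higher expected payoff, yet its robust value is far lower: distributions that shift probability toward the bad state are penalized for their distance from the structured set, but not enough to rescue a plan that performs disastrously there. That is the safety belt at work.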
A solid theoretical framework with practical implications
The work is based on a rigorous axiomatic framework which distinguishes between two levels of preferences:
- a mental preference, incomplete, which reflects the decision maker's genuine judgments and doubts;
- a behavioral preference, complete, which describes what the decision maker would actually do if forced to choose.
This two-level structure makes it possible to model the fear that the models are wrong, separating it from the more familiar uncertainty about which of the alternative models to rely on. The result is a new class of decision criteria that generalizes classical ones (such as expected utility and Waldean max-min) and can also incorporate elements of Bayesian learning.
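As a rough intuition, the stylized sketch given earlier (again, an illustration under the relative-entropy assumption, not the paper's general statement) nests the classical criteria as limiting cases:

```latex
\lambda \to \infty:\quad V(f) \;\to\; \min_{q \in Q} \int u(f)\,\mathrm{d}q
\qquad\qquad
\lambda \to 0:\quad V(f) \;\to\; \min_{p \in \Delta} \int u(f)\,\mathrm{d}p
```

With full trust in the structured models (λ → ∞) the criterion collapses to Waldean max-min over Q, and to plain expected utility when Q contains a single model; with no trust at all (λ → 0) it collapses to a worst case over all conceivable models.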
What's next?
Possible applications range from climate policy to economics and finance. Whenever models are used, and there is a fear that they may be wrong, this approach can be applied to offer theoretical support for better decision-making.
"We can't avoid using the models we know," concludes Cerreia-Vioglio. "But we can make better use of the uncertainty surrounding them. Our goal is to provide tools for more informed decision-making, knowing that reality always outplays modeling to some extent."