Financial Crises: Who Can Predict Them and How

by Carlo Favero, Deutsche Bank Chair in Asset Pricing and Quantitative Finance
The answer lies in online machine learning, which, by combining huge amounts of data and many models, can produce optimal predictions where collective imagination has failed. At least in the past; for the future, the challenge remains open.

During a visit to the London School of Economics at the peak of the 2008 financial crisis, when researchers from one of the most prestigious departments in the world explained to her how terrible the situation was, the Queen of England asked directly, "Why did nobody notice it?"

Several months after this question was asked, the Economics section of the British Academy wrote a three-page letter addressed to Her Majesty, admitting that the failure to predict such a serious crisis was attributable to "the failure of the collective imagination of many bright people."

Behind this answer lies the fact that predictive models, which are based on theory and applied to data by estimating a small number of parameters, are simplified versions of economic reality. Models are validated by checking that they can explain past fluctuations in the economy. By their nature, they get into trouble when they omit aspects of reality that were insignificant in the historical period used to estimate the model but that acquire relevance in a different period, such as a crisis.

The lack of collective imagination refers to models omitting aspects that proved to be very important: the mortgage market and the securitization of mortgages in the subprime crisis, or the weight of public debt securities on banks' balance sheets in the European debt crisis.

Furthermore, each forecaster typically uses a unique model that reflects his or her theoretical positions and strategies for identifying relevant parameters to describe the dynamics of the economy.

A computer has no imagination but can use data to optimally combine a large number of predictions based on different models, with weights that vary over time. Combining a large number of forecasts minimizes the probability of omitting important aspects of reality from the information set used for forecasting.

An online machine learning system makes it possible to combine forecasts optimally. Given a sequence of periods in which forecasts from different models are available at each moment in time, the forecaster receives this information, combines it into a single forecast, and incurs a loss that is a function of the forecasting error. The algorithm chooses the weight assigned to each model's forecast at time t on the basis of the difference between the loss actually incurred and the minimum loss achievable with the information available in all periods before time t (the regret-minimization principle).
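One standard way to implement this regret-minimization idea is the exponentially weighted average forecaster, in which each model's weight decays exponentially in its cumulative past loss. The sketch below is an illustration, not the article's actual system: the two "models," the squared-error loss, and the learning rate `eta` are all hypothetical stand-ins.

```python
import math

def combine_online(forecasts, outcomes, eta=0.5):
    """Combine model forecasts online with exponential weights.

    forecasts: list of T rounds, each a list of K model forecasts
    outcomes:  list of T realized values
    Returns the sequence of combined forecasts and the weights
    used in the final round.
    """
    k = len(forecasts[0])
    cum_loss = [0.0] * k          # cumulative squared-error loss per model
    combined = []
    for round_forecasts, y in zip(forecasts, outcomes):
        # each model's weight decays exponentially in its past loss
        w = [math.exp(-eta * loss) for loss in cum_loss]
        total = sum(w)
        w = [wi / total for wi in w]
        combined.append(sum(wi * p for wi, p in zip(w, round_forecasts)))
        # update each model's cumulative loss with this round's error
        for i, p in enumerate(round_forecasts):
            cum_loss[i] += (p - y) ** 2
    return combined, w

# Two hypothetical models: one suited to calm periods, one to a crisis
calm_model = [0.1] * 15            # always predicts low crisis risk
alarm_model = [0.9] * 15           # always predicts high crisis risk
outcomes = [0.1] * 5 + [0.9] * 10  # five calm periods, then a crisis
forecasts = [[c, a] for c, a in zip(calm_model, alarm_model)]
preds, weights = combine_online(forecasts, outcomes)
```

As the crisis unfolds, the weight shifts from the calm-period model to the alarm model, which mirrors the article's point that the optimal combination in calm periods differs from the optimal combination in a crisis.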

In this way, optimal forecasts can be built starting with a large number of crisis indicators (which can be macroeconomic, credit, interest rate, real estate market or financial market indicators) and various models that use these indicators differently. The result is the selection of optimal weights attributed to the different models in different periods over time. Combined models that give optimal forecasts in periods of calm are very different from models that give optimal forecasts in periods of crisis. It appears that the use of online machine learning would have predicted the crisis, with a low probability of producing a false alarm.

It would therefore seem that computational capability can make up for a lack of collective imagination. Only seemingly, however, because the test concerns crises that have already occurred. The real verification of the new approach will be its ability to predict future crises, for which data do not yet exist.
