2,000 Computers Are Not Enough for an Accurate Forecast
From climatology to finance, modern decision-making relies on quantitative models as levers of information, built to support the solution of the problem at hand. Through a model, scientists, engineers and managers can focus on the main aspects of a problem and are helped in choosing the best among the available alternatives.
Undoubtedly, a good model is an indispensable forecasting tool. Hydrological and meteorological models are good examples. Those available to the Lombardy Region would have predicted in time the flooding of the Seveso river, which occurred a few months ago; had their results been heeded, several million euros in damages would have been avoided. However, no existing model would have been able to forecast the earthquake and the ensuing tsunami that hit Fukushima. Neither the Deepwater Horizon oil spill in the Gulf of Mexico nor the bankruptcy of Lehman Brothers was forecast, though in these two cases perhaps because no models had been designed to address such occurrences.
The lack of a model, or its inadequacy, leads to the same consequence: bad or partial information. This is model risk, i.e. the risk of a suboptimal decision due to an incomplete assessment of the uncertainty attached to a numerical forecast. A recent debate within the climate change community underscores the need to develop, alongside forecasting models, an analysis of uncertainty, so as not to fall into the trap of overconfidence in forecasting results. The EPA recommends techniques that address the uncertainty and sensitivity of results, so that confidence levels are transparent to policymakers. The issue is even more complex when rare and catastrophic events, such as the consequences of Hurricane Katrina, are to be forecast. There, numerical models face daunting challenges.
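To make the idea concrete, here is a minimal Python sketch of the kind of uncertainty and sensitivity analysis being recommended. It propagates assumed input distributions through a purely hypothetical toy flood model (the function peak_level and every number in it are invented for illustration, not taken from any real hydrological model) and reports a forecast range, a flood probability and a rough sensitivity measure instead of a single value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy model of peak river level as a function of two uncertain
# inputs (rainfall and drainage capacity). The functional form and all the
# numbers are illustrative only.
def peak_level(rainfall_mm, drainage_m3s):
    return 0.02 * rainfall_mm - 0.05 * drainage_m3s + 1.0

n = 10_000
rainfall = rng.normal(loc=120.0, scale=30.0, size=n)  # assumed input uncertainty
drainage = rng.normal(loc=40.0, scale=5.0, size=n)    # assumed input uncertainty

levels = peak_level(rainfall, drainage)

# Report the forecast as a range and a probability, not a single number.
print(f"median peak level: {np.median(levels):.2f} m")
print(f"90% interval: [{np.percentile(levels, 5):.2f}, {np.percentile(levels, 95):.2f}] m")
print(f"P(level > 3.0 m flood threshold): {np.mean(levels > 3.0):.3f}")

# Crude sensitivity check: squared correlation of each input with the output,
# a rough proxy for the share of output variance that the input drives.
for name, values in [("rainfall", rainfall), ("drainage", drainage)]:
    r = np.corrcoef(values, levels)[0, 1]
    print(f"sensitivity to {name}: r^2 = {r * r:.2f}")
```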
When the event to be predicted has a low probability of occurring, Monte Carlo simulations (a method based on repeated random sampling) must be forced in the right direction, for instance through variance-reduction techniques such as importance sampling. These techniques are computationally costly. Etienne de Rocquigny, Deputy Vice Rector of Research at the Ecole Centrale of Paris and member of the steering committee of Bocconi's Eleusi research center, explained in a recent seminar that Electricité de France, one of Europe's biggest energy utilities, runs 1,800 computers in parallel to provide projections and forecasts of environmental risk. Yet even such a huge computational effort may not be enough. By learning from a model's overestimation and underestimation errors, one learns to build a better one, and progress in this direction has been made: a classic example is the greater accuracy of today's meteorological models compared with those of only fifteen years ago.
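As an illustration of what "forcing the simulations in the right direction" can look like, the sketch below compares plain Monte Carlo with importance sampling on a deliberately artificial rare event (the upper tail of a standard normal distribution); the example is chosen only for clarity and does not reflect the models actually used at Electricité de France.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rare event: the probability that a standard normal variable exceeds 5
# (true value is about 2.9e-7).
threshold = 5.0
n = 100_000

# Plain Monte Carlo: with this sample size virtually no draws reach the rare
# region, so the estimate is almost always exactly zero.
x = rng.standard_normal(n)
plain_estimate = np.mean(x > threshold)

# Importance sampling: draw from a proposal shifted toward the rare region
# (a normal centered at the threshold) and reweight each draw by the
# likelihood ratio p(y)/q(y) between the original and the shifted density.
y = rng.normal(loc=threshold, size=n)
log_weight = -0.5 * y**2 + 0.5 * (y - threshold) ** 2  # log p(y) - log q(y)
is_estimate = np.mean((y > threshold) * np.exp(log_weight))

print(f"plain Monte Carlo estimate:   {plain_estimate:.2e}")
print(f"importance sampling estimate: {is_estimate:.2e}")
```

The point is not the particular numbers but the mechanism: by sampling where the rare event actually happens and correcting with weights, the probability becomes estimable with a fraction of the computing effort that brute-force simulation would require.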
However, to build even more effective numerical models, a detailed and realistic analysis must be made of the aspects that led to the errors of the previous version. For instance, after Fukushima people intuitively concluded that critical components should be built higher above sea level, out of reach of the waves. But a recent MIT report cites a Korean analysis showing that placing critical components higher above the surface reduces their resistance to seismic shocks. Thus in South Korea such components are built below ground, a choice that might seem counterintuitive.
The use of a model must therefore be accompanied by critical judgment, both when building it and when using its informational output, which has become part of everyday decision-making: who doesn't check the weather forecast before planning the weekend?