The Algorithm That Optimizes Real-Time Decisions
In the world of industrial automation, network management and real-time decision-making systems, artificial intelligence and machine learning technologies are increasingly used to anticipate events and suggest optimal actions. The problem? Sometimes we have several predictive models available, each tailored to different scenarios. But querying all of them, all the time, is too costly — in terms of computation time, energy or operational complexity.
So how do we decide which model to consult and whose advice to follow, if we can’t know in advance which will perform best in each situation? That’s the question tackled by a recent study by Marek Eliáš, Assistant Professor in the Bocconi Department of Computing Sciences, and Matei Gabriel Coșa, a student in the Master of Science in Artificial Intelligence at Bocconi.
Their work addresses a classic problem in theoretical computer science — Metrical Task Systems (MTS) — but from a new perspective, one that is much closer to the practical challenges posed by AI in complex systems.
A Common Challenge: Deciding Without Knowing Everything
Imagine an automated factory that must decide, every second, how to configure its machines to minimize costs and production times. It has several predictive models to choose from: one excels under stable conditions, another handles sudden demand shifts well and a third is specialized in energy efficiency. However, each of these models is complex to compute, and using all of them simultaneously at every moment is just not feasible. This is a concrete example of what the authors call “bandit access to multiple predictors”: we can query only one model at each step. Yet we still want to make decisions that are almost as good as if we had consulted the best model every time.
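The setting above can be sketched in a few lines of code. This is not the paper's algorithm, just an illustrative loop showing what "bandit access" means: at each step a policy picks one predictor, only that model's advice is followed, and only the resulting cost is observed. All names and the cost format are assumptions made for the example.

```python
# Illustrative sketch of the "bandit access" setting (not the authors'
# algorithm). At each step we may query ONE predictor, follow its
# suggested state, and observe only the cost of that choice.

def run_bandit_access(predictors, costs, choose):
    """predictors: list of functions t -> suggested state.
    costs: list where costs[t] maps each state to its cost at step t.
    choose: policy (t, history) -> index of the predictor to query.
    Returns the total cost incurred."""
    total = 0.0
    history = []                      # feedback seen so far: (index, cost)
    for t in range(len(costs)):
        i = choose(t, history)        # pick exactly one model to consult
        state = predictors[i](t)      # only this predictor is computed
        c = costs[t][state]           # pay the cost of following its advice
        history.append((i, c))        # bandit feedback: only this arm's cost
        total += c
    return total
```

A policy that happens to always query the best predictor pays the best predictor's total cost; the challenge the paper addresses is getting close to that total without knowing in advance which predictor is best.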
The Challenge: Learning to Choose the Right Advisor
Eliáš and Coșa’s study proposes a new algorithm capable of learning, over time, which model to follow — even without full access to all information. “The mechanism is similar to that of a slot machine,” explains Eliáš. “Each model is like a lever. Pulling it — i.e. querying that model — gives us a certain result. But unlike a casino, the ‘reward’ here isn’t immediately clear, and it may depend on what we did in the previous steps.” To make the problem more realistic, the authors also assume that understanding a model’s behavior may require querying it multiple times in a row — like an expert who needs a few moments to study the context before offering a sound opinion.
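One standard way to make the slot-machine analogy concrete is an EXP3-style weighting scheme, combined with the "query the same model for several consecutive steps" idea as fixed-length blocks. The sketch below is a generic bandit technique used as a stand-in, not the algorithm from the study; the block structure, loss format and parameter names are assumptions for illustration.

```python
import math
import random

def exp3_blocks(n_models, n_blocks, block_len, loss_of_block, eta):
    """EXP3-style selection over blocks of consecutive queries.
    loss_of_block(b, i): average per-step loss in [0, 1] of following
    model i throughout block b. Returns total loss over all steps."""
    weights = [1.0] * n_models
    total = 0.0
    for b in range(n_blocks):
        s = sum(weights)
        probs = [w / s for w in weights]
        # Commit to one model for block_len consecutive steps,
        # mirroring the idea that a model may need several queries
        # in a row before its behavior can be judged.
        i = random.choices(range(n_models), probs)[0]
        loss = loss_of_block(b, i)
        total += loss * block_len
        est = loss / probs[i]              # importance-weighted estimate
        weights[i] *= math.exp(-eta * est) # downweight costly models
    return total
```

With one consistently good model and one consistently bad one, the weights concentrate on the good model and the total loss ends up well below the fifty-fifty baseline.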
The Result: Less Waste, More Efficiency
The algorithm they developed can asymptotically match the performance of the best available model — without knowing in advance which one that is. “In practical terms,” Eliáš notes, “this means an automated system could adapt on its own to choose the most suitable model for each situation, minimizing both computational costs and decision errors.” The key strength of the result lies in its robustness: even if only one of the models is actually good, the algorithm learns to follow it. And in cases where the models are all imperfect but complementary — each one performing well in different subsets of situations — the algorithm adapts accordingly, switching “advisors” when needed.
Why It Matters
This line of research is especially relevant today, when intelligent systems are increasingly “augmented” by predictive models and the efficient management of information is crucial. Application areas include:
- Network traffic management: deciding how to route data packets by choosing among traffic prediction models.
- Caching and memory systems: dynamically selecting which data to keep in memory, using models that estimate reuse likelihood.
- Energy optimization in data centers or smart buildings, where various models try to predict consumption, loads or user behavior.
In all these cases, being able to consult just one model at a time while still achieving near-optimal results is a strategic advantage: it cuts costs, increases efficiency and maintains high performance.
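The caching example can be made concrete with a minimal sketch: on each miss, the system consults a single reuse-likelihood predictor and evicts the cached item that predictor rates lowest. The predictors, scores and request format are hypothetical, chosen only to illustrate the one-model-per-step constraint.

```python
# Hypothetical caching sketch: on each eviction we consult ONE
# reuse-likelihood predictor and drop the item it rates lowest.

def serve_requests(requests, cache_size, predictors, choose):
    """requests: sequence of item keys.
    predictors: list of score functions (t, item) -> reuse likelihood.
    choose: (t) -> index of the single predictor to consult.
    Returns the number of cache misses."""
    cache = []                    # list keeps eviction order deterministic
    misses = 0
    for t, item in enumerate(requests):
        if item in cache:
            continue              # cache hit, nothing to do
        misses += 1
        if len(cache) >= cache_size:
            score = predictors[choose(t)]            # query one model only
            victim = min(cache, key=lambda x: score(t, x))
            cache.remove(victim)                     # evict least likely
        cache.append(item)
    return misses
```

On a workload that keeps reusing two items, a predictor that rates those items highly yields far fewer misses than one that protects the wrong item, which is exactly the gap a good selection algorithm learns to exploit.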
Theory That Serves the Real World
Although the study is theoretical — with mathematical proofs and formal performance bounds — its implications are far from abstract. “As is often the case,” concludes Eliáš, “formalizing a problem is what opens the door to solutions that are generalizable and applicable across very different contexts.” This research shows how cutting-edge work in algorithms and machine learning can help build smarter, more adaptive and more efficient decision-making systems — right where they’re most needed: in factories, servers and control systems that keep the digital and physical worlds running.