Pierpaolo Battigalli and Emiliano Catonini have solved a longstanding problem

Imagine trying to design a set of rules that always leads to good outcomes — whether it’s how we run auctions for public resources, award research funding, or decide on big community projects. The challenge sounds simple, but economists have long known it is anything but. People hold private information, make guesses about what others know, and react to signals in complex ways. Designing rules that work reliably in this environment — in every scenario, no matter what people believe about each other — has been considered extremely difficult.

According to a new study by Pierpaolo Battigalli (Bocconi Department of Decision Sciences and IGIER) and Emiliano Catonini (NYU Shanghai), forthcoming in Econometrica, this pessimistic view is wrong. Under the right kind of interaction and reasoning, far more is achievable than previously thought.

Uncertainty makes good rules fragile

Economists use the term mechanism to describe any rule or procedure that shapes how people can make strategic decisions together — think of auctions, voting systems, bargaining protocols. For decades, researchers have asked whether it is possible to design mechanisms that work robustly, even when people’s beliefs about each other vary.

Traditional theory said that only a very narrow set of rules meets this standard. Most mechanisms fail as soon as people start imagining different scenarios, forming alternative beliefs, or making unpredictable assumptions about what others know. This fragility has been a major obstacle for real-world policy design.

Understanding interactions

Battigalli and Catonini’s work shows that the problem is not insoluble — it’s simply been approached with the wrong tools.

Instead of static, one-shot rules, the new research studies dynamic mechanisms: step-by-step interactions where people observe each other’s moves and adjust their beliefs accordingly. Here, a powerful form of strategic reasoning kicks in: forward induction.

Forward induction is a way of reasoning in strategic situations where people interpret others’ actions by assuming they are purposeful and rational, even when those actions look surprising at first. In other words, players put themselves in their co-players’ shoes, look ahead, imagine how the interaction could unfold, and use this to reinterpret earlier choices in a clearer, more meaningful way. It is a sophisticated way of thinking, closer to how real people reason in ongoing interactions.
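To see the logic in a concrete case, consider the textbook example of forward induction: a battle-of-the-sexes game with an outside option. The short sketch below is our own illustration of that classroom example, not the mechanisms or the solution concept studied in the paper; it simply traces the steps of inference.

  # Illustrative only: the classic "outside option" example of forward
  # induction, not the mechanisms or solution concept from the paper.
  # Player 1 first chooses Out (worth 2 to each player) or In; if In, the
  # two players then play a simultaneous battle-of-the-sexes stage game.

  BOS = {  # (Player 1 action, Player 2 action) -> (payoff 1, payoff 2)
      ("T", "L"): (3, 1), ("T", "R"): (0, 0),
      ("B", "L"): (0, 0), ("B", "R"): (1, 3),
  }
  OUT = 2  # payoff Player 1 guarantees by staying out

  p1_plans = ["Out", ("In", "T"), ("In", "B")]
  p2_actions = ["L", "R"]

  def p1_payoff(plan, reply):
      return OUT if plan == "Out" else BOS[(plan[1], reply)][0]

  # Step 1 (Player 1 is rational): a plan whose best conceivable payoff is
  # below the outside option can never be optimal, so it is dropped.
  surviving = [p for p in p1_plans
               if max(p1_payoff(p, a) for a in p2_actions) >= OUT]
  # surviving == ["Out", ("In", "T")]; the plan ("In", "B") is eliminated.

  # Step 2 (forward induction by Player 2): on observing In, Player 2 assumes
  # it came from a surviving plan, so Player 1 must intend T; the reply is L.
  in_plans = [p for p in surviving if p != "Out"]
  reply = max(p2_actions,
              key=lambda a: min(BOS[(p[1], a)][1] for p in in_plans))

  # Step 3: anticipating the reply L, Player 1's best plan is (In, T),
  # which yields 3 rather than the outside option's 2.
  best_plan = max(surviving, key=lambda p: p1_payoff(p, reply))
  print(surviving, reply, best_plan)

The point of the toy example is the order of inference: Player 2 reads a purpose into the surprising move In, and that reading pins down the outcome.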

Smarter reasoning creates stability

Battigalli and Catonini prove a result that had long seemed out of reach: When people use forward-induction reasoning in dynamic situations, the set of plausible outcomes behaves predictably and remains stable even as beliefs vary.

In technical terms, these outcomes are monotonic with respect to uncertainty: when you narrow down the range of beliefs you consider, the set of reasonable outcomes can only stay the same or become smaller, never bigger. Conclusions become more precise (or remain unchanged) as assumptions are tightened, but they do not unexpectedly expand or flip. In everyday language, if a mechanism produces only desirable outcomes when nothing is assumed about what people believe, it keeps doing so under every more specific belief scenario.
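Stated schematically, with notation that is ours rather than the paper’s: write Outcomes(Δ) for the set of forward-induction outcomes of a mechanism when players’ beliefs are restricted to a set Δ. Monotonicity is then a simple inclusion:

  % Illustrative notation, not the paper's.
  % Tighter belief restrictions can only shrink the set of outcomes:
  \[
    \Delta' \subseteq \Delta
    \;\Longrightarrow\;
    \mathrm{Outcomes}(\Delta') \subseteq \mathrm{Outcomes}(\Delta).
  \]
  % In particular, if every element of Outcomes(Δ) is desirable when Δ places
  % no restriction on beliefs, the same holds under any tighter Δ'.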

This is exactly what economists mean by robustness: a mechanism is “robust” if it works reliably across all the ways people might imagine, guess, or speculate about one another’s private information.

More ambitious and reliable social choices

The study shows that many desirable social-outcome rules — rules previously thought unimplementable — actually become achievable when we allow interaction over time and assume intelligent strategic reasoning.

This has major implications:

  • Markets and auctions can be designed to allocate goods more fairly and efficiently.
  • Public decision-making can rely on mechanisms that remain sound even when people’s beliefs differ.
  • Policy tools can be built with greater confidence that they will hold up in the unpredictable real world.

In short, the research broadens what society can reliably accomplish, at least in theory, provided that institutions use information efficiently.

Battigalli and Catonini’s contribution continues a long tradition in game theory: grounding better institutions in a deeper understanding of human reasoning. But it also pushes that tradition forward, showing that robustness and sophistication can go hand in hand.

Contacts

PIERPAOLO BATTIGALLI

Bocconi University
Department of Decision Sciences