Why Measuring Corporate Power Is Harder Than We Thought
In recent years, one economic question has moved from academic journals to political speeches and boardrooms alike: are companies becoming too powerful? From rising prices to falling labor shares, economists increasingly point to markups—the gap between the prices firms charge and their marginal costs—as a key force shaping modern economies.
But here’s the problem: markups are notoriously hard to measure. And most of the data economists rely on—firms’ financial statements—don’t even include the key ingredients needed to calculate them properly.
A new paper, “The Hitchhiker’s Guide to Markup Estimation: Assessing Estimates from Financial Data,” published in Econometrica, takes on this challenge head-on. The study, by Basile Grassi (Department of Economics, Bocconi University; IGIER; OFCE; CEPR), Maarten De Ridder (LSE), and Giovanni Morzenti (Analysis Group), asks what we can—and cannot—learn about markups using only financial data.
The data dilemma behind market power
At the heart of the issue lies a simple constraint: economists rarely observe prices and quantities directly. Instead, they rely on financial statements—revenues, costs, and balance sheets—to infer how firms behave. As the authors explain, this creates a fundamental blind spot:
“Firm-level data with wide coverage… primarily come from financial statements, which lack information on prices.”
Yet these same data have powered a vast literature linking markups to inequality, inflation, trade, and productivity. The question is unavoidable: are those estimates reliable?
The key insight: levels vs. trends
The paper overturns a common belief that financial data are largely useless for measuring market power. Instead, the authors draw a crucial distinction:
- Bad news: Financial data struggle to measure the level of markups
- Good news: They can still reliably capture changes over time and differences across firms
As they put it:
“Measuring the average level of the markup does require pricing data.”
But variation in markups—across firms and over time—is well estimated even without pricing data.
This distinction matters enormously. It suggests that while headline claims like “markups have doubled” may be fragile, studies of inequality, competition, or firm heterogeneity remain on solid ground.
Why the bias happens
The problem stems from a technical but intuitive issue: revenue is not the same as output. When firms set prices strategically, higher output often comes with lower prices. That breaks the link between revenue and production, introducing bias. In fact, the authors show that under certain assumptions, using revenue can produce completely misleading averages:
“The average revenue-based markup is not informative about the true average.”
However, the same bias affects all firms in similar ways, which is why relative comparisons still work.
Simulations and real-world data: a rare double test
To move beyond theory, the researchers combine:
- Monte Carlo simulations of a macroeconomic model
- Administrative data from French manufacturing firms, including both prices and quantities
The results show that even without price data, estimated markups remain highly informative:
- Correlation between “true” and estimated markups in simulations: 0.94
- In real data, correlations remain strong—especially for changes over time
Multiple implications
The implications ripple across economics:
- Macroeconomics: Trends in market power can still be tracked using existing data
- Competition policy: Cross-firm comparisons remain meaningful
- Empirical research: Past studies may be more robust than critics feared
But there’s also a warning. When it comes to aggregate levels—often used to calibrate economic models—the authors are unequivocal:
“Revenue-based markups should not be used… to discipline parameters that govern aggregate markups.”
In other words, the big picture is trustworthy, but the exact numbers may not be.
The study ultimately reframes the debate. Instead of asking whether financial data are “good” or “bad,” it shows that their usefulness depends on the question.
- Want to know if markups are rising? → You’re probably fine
- Want to know their exact level? → You need better data