Does X’s Algorithm Push Politics to the Right?
In the age of social media, the most powerful political actor in the room may not be a politician at all. It may be an algorithm. For years, critics have warned that social media feed algorithms amplify outrage, fuel polarization and distort democratic debate. Yet hard evidence has been elusive. A landmark study conducted with Meta during the 2020 US election found that turning off Facebook’s algorithm had no measurable effect on users’ political attitudes. But what happens when the algorithm is on?
In “The political effects of X’s feed algorithm”, recently published in Nature, Germain Gauthier (Department of Social and Political Sciences, Bocconi University), Roland Hodler (University of St. Gallen, Switzerland), Philine Widmer and Ekaterina Zhuravskaya (both of the Paris School of Economics) report the results of a large-scale field experiment conducted in 2023 on Elon Musk’s platform X, formerly Twitter. Their conclusion: activating X’s algorithm does shift political attitudes systematically, and toward more conservative positions.
A rare real-world experiment
The research team recruited nearly 5,000 active US-based X users and randomly assigned them to use either the algorithmic “For You” feed or the chronological “Following” feed for seven weeks. Unlike previous studies conducted in cooperation with platforms, this experiment was carried out independently. Participants were paid to remain on their assigned feed, and researchers measured not only survey responses but also the actual content users saw and the accounts they followed.
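Because assignment to the two feeds was random, the core estimate behind results like these is conceptually simple: compare average outcomes across the groups and express the gap in standard-deviation units, the convention behind the effect sizes discussed later in this article. The sketch below runs that estimator on simulated survey data with an invented effect built in; it illustrates the logic, not the authors’ actual analysis code.

```python
# Illustrative only: simulated data and an invented 0.1 SD effect,
# not the study's actual pipeline or variable names.
import numpy as np

rng = np.random.default_rng(0)

n = 5000                                               # roughly the study's sample size
algorithmic = rng.integers(0, 2, size=n).astype(bool)  # random feed assignment
# Hypothetical post-treatment survey index, standardized, with a 0.1 SD
# shift built in for the algorithmic-feed group.
outcome = rng.normal(0.0, 1.0, size=n) + 0.1 * algorithmic

treated, control = outcome[algorithmic], outcome[~algorithmic]

# Difference in means, scaled by the control group's standard deviation.
effect_sd = (treated.mean() - control.mean()) / control.std(ddof=1)
# Standard error of the raw difference in means, for a rough precision check.
se = np.sqrt(treated.var(ddof=1) / treated.size + control.var(ddof=1) / control.size)
print(f"standardized effect: {effect_sd:.3f} (raw-difference SE: {se:.3f})")
```

With about 2,500 participants per arm and standardized outcomes, the standard error on such a difference is roughly 0.03, which is why effects on the order of 0.1 standard deviations are detectable in a sample of this size.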
The context is crucial. Social media platforms are now a primary source of news for millions. As the authors note, “a quarter of US adults report social media as their primary news source.” If feed algorithms subtly privilege certain types of content, the political consequences could be far-reaching.
What happened when the algorithm was switched on?
Switching from a chronological feed to the algorithmic feed significantly increased user engagement. But it also shifted political attitudes in a conservative direction.
After just seven weeks, users exposed to the algorithm were:
- More likely to prioritize Republican-aligned policy issues such as inflation, immigration and crime.
- More likely to view investigations into Donald Trump as unacceptable.
- More likely to express pro-Kremlin attitudes regarding the war in Ukraine.
Importantly, these effects did not extend to partisan identity or affective polarization. Users did not become more likely to call themselves Republicans or to dislike Democrats more intensely. Instead, the shifts occurred in views on specific policies and current events.
And when the researchers flipped the switch the other way, turning the algorithm off for users who had previously used it, there were no comparable political effects.
Why the asymmetry?
By analyzing over 260,000 posts shown to participants under both feed settings, the researchers identified systematic differences in content exposure. The algorithm promoted highly engaging posts, but also disproportionately amplified conservative content and demoted posts from traditional news media.
More striking still, exposure to algorithmically curated content led users to follow conservative political activist accounts. Once those accounts were followed, their posts continued to appear, even after the algorithm was turned off.
This mechanism helps explain the asymmetry: the initial exposure changes who users follow, and those new followings then reshape the content environment persistently. As the researchers conclude, “exposure to algorithmically curated content led users to follow conservative activist accounts… indicating that exposure to feed algorithms has a lasting impact.” In other words, the algorithm doesn’t just sort information; it reshapes the network structure itself.
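To see why the effect can outlast the treatment, consider a toy model of this follow-graph ratchet. Every number below (the injection rate, the activist skew, the follow-conversion probability) is an invented assumption, not a figure from the paper; the point is only the qualitative pattern: the algorithmic phase adds activist accounts to the follow list, so the chronological feed stays shifted after the switch back.

```python
# Toy model of the persistence mechanism; all parameters are invented.
import random

random.seed(1)

NEUTRAL, ACTIVIST = 0.1, 0.9    # conservative-content share per account type
follows = [NEUTRAL] * 50        # baseline: the user follows mostly neutral accounts

def one_day(algorithmic):
    """Simulate one day's 100-post feed; return its average conservative share."""
    shares = []
    for _ in range(100):
        if algorithmic and random.random() < 0.3:            # injected out-of-network post
            account = ACTIVIST if random.random() < 0.6 else NEUTRAL
            if account == ACTIVIST and random.random() < 0.02:
                follows.append(ACTIVIST)                     # user follows the author
        else:
            account = random.choice(follows)                 # in-network (chronological) post
        shares.append(account)
    return sum(shares) / len(shares)

for phase, algo in [("chronological baseline", False),
                    ("algorithmic feed ON   ", True),
                    ("algorithm OFF again   ", False)]:
    avg = sum(one_day(algo) for _ in range(30)) / 30         # 30-day phase
    print(f"{phase}  avg conservative share: {avg:.2f}  follows: {len(follows)}")
```

Running this, the feed’s conservative share rises during the algorithmic phase and, unlike a pure exposure effect, does not return to baseline afterwards, because the newly followed accounts keep supplying the content.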
Not all algorithms are equal
The study also challenges the assumption that all social media platforms behave similarly. The authors caution that the effects are context-specific: X in 2023 was already a highly politicized environment, and ownership changes under Elon Musk may have influenced content dynamics. Still, the findings are difficult to dismiss. Over seven weeks, exposure to the algorithm shifted attitudes by roughly 0.1 standard deviations on several political measures: small but meaningful shifts. And crucially, these changes occurred without increasing overt polarization. The platform subtly nudged users’ issue positions without altering their partisan labels.
A subtle shift with big consequences
The most unsettling insight of the study is not that X’s algorithm favors certain types of content. It is that the shift happens quietly: there were no dramatic swings in party identification. No explosion of partisan hatred. No obvious radicalization. Instead, over the course of seven ordinary weeks, users’ policy priorities shifted, their views of ongoing political investigations changed, and their positions on international conflict moved, without their necessarily realizing why.
By amplifying conservative political content and encouraging users to follow activist accounts, the algorithm reshaped the information ecosystem around them. And once those new networks were formed, the influence persisted even when the algorithm was switched off.
In the battle over democracy and digital platforms, this may be the most important lesson. The power of social media algorithms lies less in dramatic polarization and more in gradual normalization. As debates over platform regulation, algorithmic transparency and political influence intensify in the United States and beyond, this research raises a pressing question: if seven weeks can shift opinions, what can a few years do?