How Do Social Media and AI Shape What We Believe?
When was the last time you changed your opinion about something significant, such as a political stance, a social attitude, or a belief about how the world works? You probably cannot pinpoint the exact moment; the shift happened gradually and without ceremony, over days or weeks of consuming content that felt like your own free choice. This sensation of changing one’s beliefs without being aware of the mechanism is now the focus of significant empirical research, and the results are disturbing. According to research published in Science in late 2025 by an intercollegiate team co-led by Northeastern University, a single week of exposure to a social media algorithm can shift partisan political sentiment by an amount that would typically take three years of natural societal change. What you see next is determined moment by moment by the recommendation engine, an unseen curator.
The construction of belief is changing, and the change is taking place at the architectural level of the platforms that now mediate most human communication. Rather than merely reflecting what people think, social media algorithms and their artificial intelligence offspring actively mould it, on an industrial scale and with a financial incentive to keep the process running indefinitely. Understanding this machinery, how it operates, what it does to people and societies, and whether it can be meaningfully constrained, is one of the key intellectual and political questions of the current decade.
A recommendation engine’s basic architecture is surprisingly straightforward: it observes what a user interacts with, predicts what will keep them engaged the longest, and surfaces that content. The sophistication lies in the scale and the feedback loop. Modern platforms process billions of interactions every day, feeding the data into models that learn individual preferences at a level of detail no human editor could match. YouTube’s recommendation system, examined in a March 2026 analysis of content policing, recommendation, and monetization, operates on a logic that is not designed to inform or enlighten; it is built to maximise watch time.
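To make that loop concrete, here is a minimal sketch in Python of an engagement-maximising recommender, assuming a toy model in which users and items are weight vectors over a handful of topics. Every name, number, and scoring rule here is illustrative; no platform publishes its actual ranking function.

```python
import random

# Toy model: each item and each user profile is a weight vector over topics.
TOPICS = ["politics", "sports", "science", "entertainment"]

def engagement_score(profile, item):
    """Predicted engagement: similarity between the user profile and the item."""
    return sum(profile[t] * item[t] for t in TOPICS)

def recommend(profile, candidates):
    """Rank candidates by predicted engagement and surface the top one."""
    return max(candidates, key=lambda item: engagement_score(profile, item))

def update_profile(profile, item, rate=0.2):
    """Shift the profile toward the consumed item -- this closes the feedback loop."""
    for t in TOPICS:
        profile[t] += rate * (item[t] - profile[t])

profile = {t: 0.25 for t in TOPICS}  # a user who starts out nearly uniform
catalogue = [{t: random.random() for t in TOPICS} for _ in range(50)]

for _ in range(100):
    batch = random.sample(catalogue, 10)  # candidates surfaced this session
    item = recommend(profile, batch)
    update_profile(profile, item)         # each view sharpens the next ranking

print({t: round(w, 2) for t, w in profile.items()})
```

Run for a few hundred steps, the profile drifts toward whatever already scores highest, and the rankings sharpen accordingly; nothing in the loop asks whether the content is true, balanced, or good for the user, only whether it predicts engagement.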
The political dimension of this apparatus has been documented with increasing precision. A large-scale experiment published in Nature in 2026 examined what happens when users switch from a chronological feed to X’s algorithmic feed. The findings showed that the algorithmic feed consistently favours conservative content and shifts political beliefs. CEPR’s VoxEU analysis refined the picture further: by drawing users toward particular ideological positions through repetition and selective amplification, the algorithm influences political opinions without necessarily increasing overall polarisation. The platform does not have to manufacture radicalism from scratch. It only needs to keep surfacing the content that gets the greatest engagement, which in political contexts is typically the most certain, the most emotionally charged, and the most us-versus-them. Uncertainty is not engaging. Complexity does not trend. The algorithm has no ideology, but its incentive structure reliably produces ideological effects.
Underlying this process is what researchers have come to call the filter bubble and echo chamber dynamic. A systematic review of thirty studies spanning a decade, published by MDPI in 2025, confirmed that algorithmic curation consistently produces information environments in which users encounter fewer viewpoints over time, in which the apparent consensus within a user’s feed increasingly diverges from the actual distribution of opinion in the wider world, and in which prior beliefs are reinforced rather than challenged. This is not a psychological flaw confined to naive or unsophisticated users; it is a structural property of systems built to match supply to demand. Giving people more of what they have already shown they enjoy monetises their preexisting prejudices rather than advancing their epistemic goals. A 2026 Sage Journals study, AI’s predictive…
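That narrowing can be made visible in the same toy model by measuring the Shannon entropy of the topics a simulated user is shown, window by window. This is a self-contained, hypothetical sketch, not a measurement of any real platform; in typical runs the entropy falls as the feedback loop concentrates the profile, which is the diversity loss the review describes.

```python
import math
import random

TOPICS = ["politics", "sports", "science", "entertainment"]

def entropy(counts):
    """Shannon entropy (in bits) of a topic-exposure distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)

profile = {t: 0.25 for t in TOPICS}  # near-uniform starting profile
catalogue = [{t: random.random() for t in TOPICS} for _ in range(200)]

window = []
for step in range(300):
    batch = random.sample(catalogue, 10)  # items surfaced this session
    item = max(batch, key=lambda i: sum(profile[t] * i[t] for t in TOPICS))
    window.append(max(TOPICS, key=lambda t: item[t]))  # item's dominant topic
    for t in TOPICS:  # feedback: the profile follows consumption
        profile[t] += 0.2 * (item[t] - profile[t])
    if (step + 1) % 100 == 0:
        counts = {t: window.count(t) for t in TOPICS}
        print(f"steps {step - 99}-{step}: entropy = {entropy(counts):.2f} bits")
        window = []
```

The point of the exercise is that the narrowing requires no malicious design: it falls out of the engagement objective itself.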
