Guarding the guardians

Good institutions are social technologies that scale trust from personal relations to entire nations. How do they work?

by Julien Lie-Panis

Traffic police in Rome, 1981. Photo by Richard Kalvar/Magnum Photos

Julien Lie-Panis is a game theorist with a penchant for the social and psychological sciences. He is a postdoctoral researcher in the Dynamics of Social Behaviour Group at the Max Planck Institute for Evolutionary Biology in Plön, Germany.

Every human society, from the smallest village to the largest nation, faces the same fundamental challenge: how to get people to act in the interests of the collective rather than their own. Fishermen must limit their catch so fish stocks don’t collapse. People must respect others’ property and safety. Citizens must pay taxes to fund roads, schools and hospitals. Left to pure self-interest, no community could endure; the bonds of collective life would quickly unravel.

The solutions we’ve devised are remarkably similar across cultures and centuries. We create rules. Then we appoint guardians to enforce them. Those who break the rules are punished. But there’s a problem with this approach, one that the Roman poet Juvenal identified nearly 2,000 years ago: Quis custodiet ipsos custodes? Who will guard the guards themselves?

Fisheries appoint monitors to prevent overfishing – but what if the monitors accept bribes to look the other way? Police officers exist to protect everyone’s property and safety – but who ensures that they don’t abuse their power? Governments collect taxes for public services – but how do we stop officials from diverting the funds to their own accounts?

Every institution faces the same fundamental paradox. Institutions foster cooperation by rewarding good behaviour and punishing rule-breakers. Yet they themselves depend on cooperative members to function. We haven’t solved the cooperation problem – we’ve simply moved it back one step. So why do institutions work at all? To understand this puzzle, we need to first ask what makes human cooperation so extraordinary in the natural world.

Cooperation is everywhere in nature. Walk through any forest, peer into any tide pool, observe any meadow, and you’ll witness countless partnerships that seem to defy the brutal logic of natural selection. Far from being mysterious, these alliances follow predictable patterns that evolutionary biologists have come to understand well. A handful of basic mechanisms explain cooperation from ant colonies to coral reefs: kinship, reciprocity and reputation.

Consider the bustling world of an ant colony. Thousands of workers toil tirelessly, some tending larvae, others foraging for food, still others defending the nest from invaders. Some take cooperation to shocking extremes: certain ants have heads so enlarged and flattened they can barely move, spending their lives as living doors by using their disc-shaped skulls to seal nest entrances. Even more dramatically, when minor workers of an exploding ant species are threatened, they literally rupture their own bodies, spraying attackers with toxic, sticky compounds. Why would workers sacrifice everything for a colony they’ll never lead? This extraordinary altruism is possible because the workers are typically sisters, often sharing substantial portions of their genes with each other. When they sacrifice for the colony, they’re helping copies of their own genes survive. Evolutionary biologists call this kin selection – altruistic behaviour among relatives makes genetic sense because it promotes the survival of shared genes.

Or take vampire bats (Desmodus rotundus). When a bat fails to find blood during its nightly hunt, it faces starvation. But roost-mates who fed successfully will regurgitate precious blood to save their starving neighbour – not just family members, but unrelated partners. Why help a stranger who shares no more of your genes than any random bat? Because they can be expected to return the favour. Bats engage in reciprocal sharing, favouring those who have shared with them before. Reciprocity functions like social insurance: the bat who shared tonight can expect to be helped in the future, when they need it.

Then there are the cleaning stations scattered across coral reefs, where small cleaner fish (Labroides dimidiatus) remove parasites from larger client fish. The cleaners could easily bite off delicious mucus instead of eating boring parasites, but often they resist the temptation. Why cooperate with a fish from another species that is physically incapable of returning the favour? Because other clients are watching. When observers see a cleaner fish bite, they avoid the cheaters and seek out cooperative cleaners instead. Reputation determines future business.

Kin selection, reciprocity and reputation – these three forces drive cooperation across the natural world. But each operates within significant constraints. Kin selection works only among relatives. Reciprocity requires repeated interactions between the same individuals. Reputation can function only in small groups, where your actions are easily observed and information about past behaviour spreads reliably.

Humans follow the same evolutionary rules, yet we’ve somehow pushed cooperation far beyond the natural reach of kinship, reciprocity and reputation. Every day, we trust countless strangers – people who share no blood with us, whom we will never meet again, and who could easily betray that trust without anyone watching. We step into traffic assuming that drivers will stop. We eat food prepared by hands we’ll never see. We entrust our savings to faceless banks, and our children to schools staffed by people we barely know. We board metal tubes that lift us into the sky, trusting that mechanics maintained the engines, that pilots are qualified, and that air-traffic controllers have cleared our path.

Institutions help bridge this gap. Traffic laws and police keep streets orderly; food inspectors ensure meals are safe; courts enforce contracts; regulators watch over banks and airlines. By rewarding cooperation and sanctioning abuse, institutions allow trust to flow among strangers.

But this only shifts the question: if institutions guarantee cooperation, what guarantees their own proper functioning? Who guards the guardians?

The answer is reputation. The community itself ensures institutional integrity through the same social forces that sustain cooperation among strangers. The guarded guard the guardians.

Consider how villages across Japan’s mountainous terrain managed their communal forests during the Tokugawa era. These shared woodlands provided timber for construction, thatch for roofing, fodder for horses, fertiliser, and the firewood and charcoal essential for winter survival. They developed elaborate allocation rules: each household was assigned specific zones in annual rotation, with carefully timed mountain opening days when residents could harvest particular resources.

Mountain Landscape (c1761–63) by Soga Shōhaku. Edo period. Courtesy the Met Museum, New York

Crucially, they hired specialised monitors called detectives who patrolled the commons on horseback, watching for unauthorised users. Detectives could demand cash and sake from violators, while the village imposed escalating penalties – including banishment in extreme cases. Without these monitors, all the elaborate rules could be easily circumvented. In vast forests with scattered households busy with their own work, who could possibly observe whether someone was breaking the mountain closure or harvesting out of season?

This solution might seem to simply move the problem back one step – who ensures the detectives themselves don’t become corrupt? But this apparent displacement is actually a major advance. There were few detectives, their actions were visible, and their mission clear. Where it was nearly impossible to monitor dozens of villagers across vast terrain, it became much simpler to ensure the honesty of a few highly visible monitors. In other words, the community had not simply added a rule – it had designed a genuine social technology. By introducing the role of detective, it transformed an impossible problem (preventing forest overuse) into a simpler one (ensuring the honesty of a few visible detectives). And this simpler problem could be solved through a well-known mechanism: reputation.

My colleagues and I developed a mathematical model to formalise this idea. We began with what game theorists call a repeated trust game. In a large community, people regularly encounter others they don’t know well, and must decide whether to trust them. Those who are trusted then face their own choice: reciprocate that trust – at a cost – or cheat. Only some of these decisions are observed, creating reputations: imperfect track records of past behaviour.

This captures the basic logic of reputation-based cooperation. Those being trusted face a trade-off. Cheat now for a quick benefit, or pay the cost of reciprocation and build a reputation that secures future trust. Others, in turn, rely on those reputations when deciding whether to trust.

But this simple setup also shows why cooperation often breaks down. Reputation works only when the temptation to defect isn’t too strong and behaviour is sufficiently observable. Once the likelihood of observation falls below a certain threshold, reputation can no longer sustain reciprocation, and cooperation collapses. This was the impossible problem facing the Japanese villagers. They could declare the mountains closed but, in vast forests with scattered households, no one could see who was sneaking timber out of season.
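That threshold logic can be made concrete with a small numeric sketch. The parameters below (`q` for observability, `delta` for patience, `b` for the benefit of future trust, `g` for the one-shot gain from cheating) are illustrative assumptions, not the notation of the published model:

```python
def cooperation_sustained(q, delta, b, g):
    """Stylised condition for reputation-based reciprocation.

    q     -- probability that a cheating act is observed (0..1)
    delta -- continuation probability, i.e. patience (0..1)
    b     -- per-round benefit of being trusted in the future
    g     -- one-shot gain from cheating now

    Cheating once forfeits, with probability q, the discounted stream
    of future trust, worth delta / (1 - delta) * b. Reciprocation is
    stable when that expected loss outweighs the temptation g.
    """
    future_value = delta / (1 - delta) * b
    return q * future_value >= g


def observability_threshold(delta, b, g):
    """Smallest observation probability that still sustains cooperation."""
    return g * (1 - delta) / (delta * b)
```

With patient agents (delta = 0.9, b = 2, g = 1) the threshold is only about 0.056, so even sparse observation suffices. But for scattered households in a vast forest, the chance of being seen can drop effectively to zero, and reciprocation unravels.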

This is where my colleagues and I introduce an institution. In the model, individuals can now contribute resources to an institution that pools and amplifies these contributions, using them to monitor the trust game and punish cheaters. This changes the rules of the original trust game, making behaviour easier to observe and cheating more costly. Two conditions must hold for this to work. First, contributions must be cheap and visible enough that reputation alone can sustain them. Second, the institution must efficiently amplify these contributions, generating incentives strong enough to make cooperation possible in the original trust game.

Institutions, in this sense, are social technologies. A lever doesn’t create force from nothing – it amplifies the force you apply. A well-designed institution doesn’t abolish the cooperation problem; it redesigns it. In our model, the institution must generate strong-enough incentives to unlock cooperation in the trust game, while resting on another form of cooperation: contribution. When the institution is efficient, it can produce these incentives from contributions cheap enough for reputation to sustain. This is what dissolves Juvenal’s paradox. There’s no infinite regress, because the easier problem can be solved by reputation alone.
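A minimal sketch of this lever logic, again under illustrative assumptions (the parameter names are mine, not the model's published notation): the institution converts small, highly visible contributions into sanctions large enough to deter cheating in the underlying trust game.

```python
def institution_unlocks_cooperation(delta, b, g, q_trust, q_contrib, k, r):
    """Stylised two-level check of the 'social technology' argument.

    delta, b, g -- patience, benefit of future trust, temptation to cheat
    q_trust     -- observability of behaviour in the raw trust game (low)
    q_contrib   -- observability of contributions (high: few, visible acts)
    k           -- cost of an individual contribution (small)
    r           -- amplification factor: each unit contributed becomes
                   r units of expected sanction on detected cheaters

    Condition 1: reputation alone must sustain the cheap, visible
    contributions. Condition 2: the amplified sanction must make
    cheating in the trust game unprofitable.
    """
    future_value = delta / (1 - delta) * b
    raw_cooperation = q_trust * future_value >= g          # without institution
    contributions_sustained = q_contrib * future_value >= k
    cheating_deterred = r * k >= g
    return raw_cooperation or (contributions_sustained and cheating_deterred)
```

With delta = 0.9, b = 2 and g = 5, near-invisible behaviour (q_trust = 0.05) cannot sustain the raw game. But a cheap, visible contribution (k = 0.5, q_contrib = 0.9) amplified twelvefold yields a sanction of 6 > 5, and cooperation is restored: the hard problem has been replaced by an easier one that reputation can handle.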

The Japanese villagers engineered one version of this principle. They are far from alone. In Governing the Commons (1990), the Nobel Prize-winner Elinor Ostrom traced how communities around the world managed to keep shared forests, fisheries and irrigation systems thriving for centuries, drawing out the design principles behind their success – including the use of accountable monitors. From Japanese mountain settlements to Swiss alpine cooperatives and Philippine irrigation societies, long-lived commons required appointing people whose vigilance the community could judge and reward. As Ostrom notes: ‘The individual who finds a rule-infractor gains status and prestige for being a good protector of the commons,’ while those who slack off ‘can be fired easily if discovered’. By appointing accountable monitors, these communities transformed the cooperation of the invisible many into that of a visible few.

Modern societies scale this principle through nested layers of accountability. Consider aviation: mechanics maintain planes, supervisors check their work, airline safety departments oversee supervisors, national authorities audit airlines, and elected officials ultimately answer for those authorities. At each level, the number of people requiring oversight shrinks while their actions become more visible. We trust our lives to this cascade of accountability – from thousands of unseen mechanics to a handful of public officials whose failures make headlines. The same architecture repeats across modern life, transforming vast networks of potential distrust into manageable chains of oversight.

Institutions can dramatically extend the scale of cooperation when they’re well-designed. But as the economist Deirdre McCloskey observes, institutions are not a silver bullet. As she writes in Bourgeois Equality (2016):

You can set up British-style courts of law, and even provide the barristers with wigs, but if the judges are venal and the barristers have no professional pride and if the public disdains them both, then the introduction of such a nice-sounding institution will fail to improve the rule of law.

Institutions cannot escape the problem of cooperation – they can only redesign it. The cascading levels of oversight that make our planes safe cannot function without mechanics who care about their supervisors’ scrutiny, officials attentive to public judgment, and citizens willing to hold them accountable. Likewise, the justice system requires integrity at every level. No institution can conjure cooperation out of thin air. It has to already exist.

Consider one of history’s most celebrated institutional designs: the Constitution of the United States. James Madison and the other Framers crafted a system of checks and balances to prevent tyranny, dividing power among three independent branches of government. In Federalist 51, Madison explained the logic: ‘Ambition must be made to counteract ambition.’ Each branch would have the means and the motive to defend its own authority, so that power would restrain power.

Yet no constitution, however ingenious, can specify every contingency, and any rule can be interpreted to one’s advantage. As the political scientists Steven Levitsky and Daniel Ziblatt observe in How Democracies Die (2018), US democracy rests on unwritten norms that prevent ordinary political competition from descending into all-out warfare. Political rivals must cooperate through mutual toleration – accepting opponents as legitimate competitors rather than existential threats – and institutional forbearance, choosing not to exploit every legal advantage.

History shows that these norms are far from automatic. In the early Republic, John Adams’s Federalist Party and Thomas Jefferson’s Democratic-Republicans viewed each other as existential threats. Federalists attacked Jefferson as a godless Jacobin who would unleash French Revolution-style terror, while Republicans denounced Federalists as monarchists plotting to restore British tyranny. In 1798, the Federalist Congress passed the Alien and Sedition Acts, criminalising criticism of the government. Under these laws, Federalist prosecutors jailed newspaper editors and even a Republican congressman for opposing President Adams.

The 1800 election between Adams and Jefferson pushed the country to the brink. Exploiting a technicality in the original electoral system (fixed by the 12th Amendment), the House of Representatives, run by the Federalists, sought to block Jefferson’s victory. Armed conflict seemed possible: Republican governors prepared militias, while a Federalist newspaper boasted that any uprising would be crushed with Massachusetts troops. Yet Adams chose to leave office peacefully and Jefferson took his place as president. Jefferson later described this decision as a revolution, comparable to that of 1776. Democracy survived – barely – not because the Constitution mandated it, but because leaders chose restraint.

And cooperation between political parties is just the beginning. Democratic institutions require cooperation at every level: impartial judges, honest bureaucrats, restrained legislators, and citizens who value integrity enough to hold leaders accountable. Countries where citizens show more intrinsic honesty – the willingness to cooperate even without institutional incentives – consistently have less corrupt institutions. Institutions work best precisely where they’re needed least, in societies with strong cooperative norms.

Institutions, then, are not magic. They depend on the people they govern, because they can amplify only existing cooperative tendencies. Yet some societies are more cooperative than others. What accounts for these differences?

The necessary ingredient is patience. Institutions work by transforming a hard cooperation problem into an easier one that reputation can solve. But people still have to be willing to engage in reputation-based cooperation, which rests on a present-future trade-off.

In our model, individuals literally have to be patient. They have to value the trust of future partners more than the immediate costs of contribution. Likewise, institutional agents everywhere must resist immediate temptations – accepting bribes, diverting funds, or exploiting their position for personal gain – to preserve their long-term standing.

Consider again the coral reef cleaner wrasse. These fish face constant temptation to bite off mucus instead of eating parasites, forgoing immediate nutritional benefits to maintain their reputation and secure future clients. As a result, cleaner fish display strategic impulse control, resisting temptation more when they are being watched by bystander clients who might take their business elsewhere.

Human cooperative cognition is much more sophisticated, but rests on the same adaptive principles. More patient people give more to partners in laboratory experiments, and give more to charity. Our trust intuitions track this connection between patience and cooperation: we judge people as more moral and trustworthy when they demonstrate self-control. This may explain why societies across history have moralised everyday temptations such as food, sex, alcohol, and idleness. Recent experiments show that people who give in more easily to such temptations are seen as less cooperative, because those pleasures are thought to sap self-control. Puritanical moral codes, it seems, emerge from the same cognitive systems that evaluate cooperative partners.

Cooperation, then, rests on a present-future trade-off. But what shifts the calculus towards long-term reputation? Two factors prove especially crucial: material security and social capital.

In 1970, Italy created new regional governments with identical powers, procedures and budgets. The political scientist Robert Putnam and his colleagues saw a unique opportunity: the same institutional blueprint implemented across vastly different social contexts. Drawing on two decades of data, they tracked the performance of Italy’s regions using 12 indicators from budget promptness to healthcare delivery to bureaucratic responsiveness.

The results, described in Making Democracy Work (1993), were stark. Northern regions like Emilia-Romagna operated with remarkable efficiency: quick to respond to citizen requests, innovative in policy-making, largely free of corruption. Southern regions like Calabria struggled with the same institutional framework: slow, unresponsive, plagued by inefficiency.

The strongest predictor of government performance? Individual participation in choral societies, sports clubs, and civic associations. Regions where citizens volunteered together, sang together, and organised together had dramatically better institutions. Most remarkably, participation in these grassroots activities in the 1890s predicted government performance in 1978-1985. Putnam explained this association in terms of social capital – the trust, social networks and norms of reciprocity that emerge from a long history of civic engagement.

This pattern extends well beyond 1980s Italy, and electoral accountability helps explain why: where social capital is high, voters more consistently punish corrupt or absentee politicians. Nor is it limited to democratic governance. In fishing communities worldwide, stronger social ties predict whether communities preserve their fish stocks or allow them to collapse.

Why does social capital matter so much? It helps to think in terms of present costs and future benefits. In tight-knit communities where people expect to share a future together, a damaged reputation follows you around for decades. Social capital makes the future loom larger. From fishing communities to entire regions and countries, social capital increases the value of long-term reputation, facilitating institutional cooperation.

Conversely, material security makes present costs matter less. When basic needs are secure, people can afford to think beyond immediate survival. When you’re struggling to make rent or feed your family, immediate needs loom so large that longer-term considerations fade into the background. A struggling fisherman may risk overfishing to feed his family tonight; a prosperous one can afford to preserve stocks for next season and invest in relationships with other fishermen.

This explains why institutional performance correlates with economic development. While corrupt institutions certainly harm economic growth, economic hardship also heightens immediate pressures, making corruption a more sensible strategy. As a result, areas with higher poverty tend to have higher corruption rates.

Institutions can thus be understood as social technologies. We engineer them constantly, often without realising it. When neighbours organise to maintain a shared garden or playground, they appoint a small committee to manage funds and decisions. The arrangement works because it transforms the hard problem of coordinating dozens of contributors into the easier problem of trusting a few visible people who can be praised for diligence or blamed for misuse.

Like any tool, institutions cannot create what isn’t already there; they can only amplify existing cooperative capacity. Institutions rest on the conditions that make cooperation rational: material security and social capital. Where those conditions hold, reputation can work at scale. One layer of accountability supports the next, until cooperation extends far beyond the limits of familiarity. From the same force that binds vampire bats and coral reef fish, we have built cities, markets, and nations. Institutions are how trust is scaled to millions of strangers.

© Aeon Media Group Ltd. 2012-2026. Privacy Policy. Terms of Use.

Aeon is published by registered charity Aeon Media Group Ltd in association with Aeon America, a 501(c)(3) charity.

