Is Algorithmic Asymmetry Reshaping How We Think?
What algorithms measure is a ghost of a pattern that once looked like someone statistically adjacent to you and me.
Understanding the social systems in which the artificial systems operate matters.
Algorithmic asymmetry persists because most people assume someone else will push back.
Algorithms are growing more powerful by the year. What they measure is a ghost of a pattern that once looked like someone statistically adjacent to you and me. The gap between past and present is costing us more than we notice.
Machines that never met you
Imagine you are standing on one side of a traditional scale. On the other side: a machine that has never met you, never been hungry, never buried anyone, never changed its mind at three in the morning. The machine nonetheless decides, in milliseconds, whether you are creditworthy, whether your job application clears a filter, whether your medical scan warrants a specialist's attention, whether you are shown the news story that confirms your fears or the one that complicates them.
That machine is running on an algorithm. And that algorithm, almost certainly, knows far more about the average person in its training data than it knows about you, specifically, today.
This is algorithmic asymmetry—a structural condition of contemporary life that reaches everyone: the 22-year-old applying for a first apartment, the 45-year-old whose job application disappears into a screening system, the 68-year-old whose insurance premium is quietly recalibrated. The scale tips. It rarely tips back.
What algorithmic asymmetry actually is
In mathematics, asymmetry is the absence of equivalence between the two sides of a relationship. In algorithmic systems, the asymmetry is threefold.
First, those who design the system and those who are shaped by its outputs have radically unequal access to information about how it works—a phenomenon researchers call the opacity problem. Second, the data an algorithm learns from encodes the inequalities of a past moment, then projects them forward with the full weight of computational authority—what scholars term historical bias amplification. Third, the feedback loop does run in two directions—but not symmetrically. Our behaviour patterns continuously feed and recalibrate the system. The system, in turn, increasingly shapes those very behaviour patterns and the attitudes beneath them. We train it; it trains us back. The asymmetry lies in who is aware that this is happening, and who retains any meaningful ability to intervene.
When an algorithm screens out a job applicant with a non-linear career path, or flags a first-generation immigrant as a credit risk, or serves an anxious teenager a spiral of content that deepens the anxiety—the system is matching a statistical shadow against a population model built on data that was never designed to represent everyone fairly.
Optimising perfectly for the wrong thing
Algorithms minimise loss functions—mathematical expressions of what the system is trying to get right. This sounds like a distinction only an engineer would care about, until you realise that loss functions are written by people, at a particular moment, reflecting a particular theory of what matters. That has real consequences for real people.
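To see how much hinges on that choice, consider a minimal sketch in Python (the numbers are invented): fitting a single constant to the same data under two different loss functions produces two different answers, because each loss encodes a different theory of what "typical" means.

```python
# A minimal sketch of how the choice of loss function changes what a model
# "learns". All numbers are hypothetical. Fitting a single constant c to the
# data: minimising squared error recovers the mean; minimising absolute
# error recovers the median. Same data, different theory, different answer.

incomes = [30, 32, 35, 38, 40, 400]  # one extreme value, e.g. in thousands

def squared_loss(c, data):
    return sum((x - c) ** 2 for x in data)

def absolute_loss(c, data):
    return sum(abs(x - c) for x in data)

candidates = range(0, 401)
best_sq = min(candidates, key=lambda c: squared_loss(c, incomes))
best_abs = min(candidates, key=lambda c: absolute_loss(c, incomes))

print("Best constant under squared error:", best_sq)    # 96: mean-like, dragged up by the outlier
print("Best constant under absolute error:", best_abs)  # 35: median-like, robust to it
```

Neither answer is wrong; each is the perfect solution to a different question. The person who picks the loss function is, in effect, deciding which question the system will spend its life answering.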
For example, a widely used healthcare algorithm systematically underestimated the medical needs of Black patients because it used historical healthcare costs as a proxy for need—not accounting for the structural inequalities that had suppressed those costs in the first place. The algorithm was working perfectly. The design was the problem.
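The mechanism is easy to reproduce in miniature. Below is a hedged simulation (in Python, with invented numbers and group labels, not data from the actual study): two groups have identical medical need, but structural barriers suppress one group's historical spending, and an algorithm that treats cost as a proxy for need then systematically under-flags that group.

```python
# A simplified, hypothetical simulation of the proxy problem described above.
# Two groups have identical underlying medical need, but structural barriers
# have suppressed one group's historical spending. An algorithm that uses
# cost as a proxy for need then ranks that group as "less sick".

import random

random.seed(0)

def patient(group):
    need = random.gauss(50, 10)                # true medical need: identical distribution
    access = 1.0 if group == "A" else 0.6      # structural barrier suppresses spending
    cost = need * access + random.gauss(0, 5)  # observed historical cost
    return {"group": group, "need": need, "cost": cost}

patients = [patient("A") for _ in range(1000)] + [patient("B") for _ in range(1000)]

# The "algorithm": flag the top 20% by historical cost for extra care.
threshold = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["cost"] >= threshold]

for g in ("A", "B"):
    share = sum(p["group"] == g for p in flagged) / len(flagged)
    print(f"Group {g}: {share:.0%} of flagged patients")
# Despite identical need, Group B is drastically under-flagged:
# the model is "working perfectly" on the wrong target.
```

Nothing in that code is malicious, and no variable names the groups' identities. The harm lives entirely in the choice of target variable.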
Scale this logic, and the pattern appears everywhere. A recommendation engine optimising for engagement will serve outrage before nuance, every time. A hiring algorithm optimising for retention will screen out anyone who looks like they might leave—which often means anyone with a caregiving gap, a relocation, or an unconventional route into the field. A content platform optimising for time-on-screen will keep a depressed adolescent scrolling longer. The documented harms span age, race, gender, and geography.
What this demands from us
Understanding the problem is not enough. Empowered action is the counterpart to awareness. A hybrid world requires double literacy, which includes both human literacy (understanding and cultivating our own cognitive capacities, limitations, and values) and algorithmic literacy (understanding how artificial intelligence [AI] systems work, and how they influence our ability to think, feel, and interact autonomously). Without both strands, participation is empty.
Although structural measures are required at scale, we do not have to wait for policymakers and technology corporations to act. Investing in double literacy does not require dismantling algorithmic systems. It requires holding them to the same standard of accountability we have always demanded of institutions that make consequential decisions about human lives. And that requires human agency.
What clarity requires from each of us
Systemic change and individual agency work as complementary gears. Policy pressure succeeds when individuals understand what they are pressing for. And individual choices accumulate into the cultural conditions that make policy change possible.
Understanding the social systems in which the artificial systems operate matters: someone chose what to measure; someone chose which data to trust; someone decided the outputs were good enough to act on at scale. Each of those choices was contestable. Most were never contested. Algorithmic asymmetry is artificial and assembled. It can be reassembled differently.
Take-aways: the ABCD of action
A — Aspire to legibility. Make it a personal standard that any system making consequential decisions about your life owes you an explanation in plain language. Ask: What data was used? Who built this? What was it optimised for? Demand the same transparency from algorithmic institutions that previous generations demanded from human ones.
B — Believe that your experience counts as data. Lived knowledge is not anecdote. Whether you are 22 or 72, your experience of navigating systems that were not built with you in mind is precisely the kind of evidence that audits, lawsuits, and policy reforms are built from. When an algorithmic output contradicts what you know from experience, that friction is worth naming—to regulators, to ombudspeople, to journalists, to elected representatives.
C — Choose one domain to understand deeply. You do not need to master every algorithm that touches your life. Pick the one with the highest personal stakes—healthcare, credit, employment, housing—and learn its logic well enough to ask one sharp question. Organisations such as the AlgorithmWatch network and the AI Now Institute publish accessible guides on exactly this. One good question, consistently asked, changes the conversation.
D — Do speak up in rooms where decisions are made. Algorithmic systems are corrected through pressure—regulatory, civic, and individual. Challenge an automated decision. Request a human review. Respond to public consultations on AI governance. Support the organisations mapping algorithmic harm. Write the letter. Show up to the hearing.
Algorithmic asymmetry persists because most people assume someone else will push back. Could that someone be you?
