Are We Programming Our Own Obsolescence?
Progress stories always rest on deeper values, and today’s may be seriously misaligned with human thriving.
Our tech-scape has changed from one that normalises attention hacking to one that scales attachment hacking.
Efficiency without wisdom breeds systems that optimise growth and engagement at the expense of humanity.
In chasing convenience and efficiency, we risk designing a future in which human value becomes optional.
The stories we tell ourselves about ourselves do not develop in a vacuum. They are in constant dialogue with much bigger stories – the shared narratives of our culture and our time. These stories shape what we desire, what we fear, what we strive for, what we count as purpose, and what we deem morally good or bad. Most of these stories feel so natural to us that we no longer recognise them as stories, for they tend to masquerade as truths.
One of the most powerful of these narratives is the progress story. Most of us believe – often unquestioningly – that our civilisation is, or should be, moving steadily onwards and upwards. We trust, or at least hope, that science and technology will make things better. We assume that more intelligence, more efficiency, and more optimisation must lead to a more desirable world.
But what exactly do we mean by progress? We tend to imagine it as a steady, linear rise towards something better: more knowledge, more freedom, more wellbeing, more connection, more comfort. Yet how we measure progress depends entirely on what we value. For some, progress means greater equality; for others, it means greater freedom. For some, technological acceleration signals hope and opportunity; for others, it evokes ecological collapse and social fragmentation.
Progress narratives, in other words, are never neutral. They rest on deeply held assumptions about what counts as “better.” And right now, in our hyper-polarised age, these assumptions are becoming ever more irreconcilable.
From Attention to Outrage Economy
Consider our relationship with technology. In our pursuit of technological progress, we have built digital environments that are, in many ways, profoundly harmful to us. Social media came to pervade our lives at scale because its creators told a story of unlimited global connection and grassroots knowledge-sharing, casting their inventions as democracy-enhancing tools.
But because these platforms depend on maximising engagement, they are designed systematically to exploit our cognitive and emotional vulnerabilities, homing in on our cravings for novelty, validation and connection. Even more concerningly, what started out as an attention economy that stole our time has morphed into an outrage economy, optimised not for truth or wellbeing, but for constant emotional activation. We now know that the results of exposing ourselves to constant, carefully engineered dopamine hacking are brain rot, addiction, declining attention spans, and, at a social level, political polarisation and an unprecedented mental health and loneliness crisis.
AI takes these problems to a completely new level. The new chatbots do not merely capture our attention, they simulate understanding, companionship, even care and love. They trap us in our own hermetic story-bubbles, responding to our emotions, mirroring our language, and adapting to our preferences. They are, in a very real sense, designed to become relational. Our tech-scape has shifted from one that normalises attention hacking to one that is scaling attachment hacking.
Progress Without Wisdom
How did this happen? Just like social media, the story of AI is embedded in broader progress narratives – stories of infinite promise, activating our deepest desires. Stories about the possibility of unbound self-improving intelligence saving humanity from annoying tasks and freeing us from the burden of labour and turning us into creatures of leisure. Stories about AI curing cancer and defeating death. Stories about super-charging the economy and finding solutions to our thorniest dilemmas, including climate change. Stories promising safety for all good citizens.
There are, of course, many smart people out there who tell very different stories about AI – stories about uncontrollable existential risk and a massive decline in human critical thinking skills as a natural consequence of outsourcing our daily thinking and decision-making.
At the heart of all of this lies a deeper problem: As Tristan Harris and others have pointed out, we are rapidly advancing technologically without a corresponding increase in wisdom. We are extraordinarily good at building systems that optimise for efficiency and profit, but far less skilled at designing for human flourishing, psychological integrity, ethical reasoning, and long-term societal health.
This is, fundamentally, a problem of failed incentive engineering. When the success of AI products is measured in growth, engagement, and market dominance, systems will naturally be designed to maximise precisely those metrics, regardless of their human cost. Exploitative and addictive design becomes a competitive advantage.
In this sense, the harms we are witnessing are not accidents but the predictable outcomes of the efficiency-enhancement and economic-growth stories we are living by.
Humans Designing Anti-Human Futures
If we fast-forward and follow the current trajectory to its logical conclusion, we arrive at a troubling possibility. Many AI systems are being developed with the explicit aim of replacing human labour – across domains and at scale. This is in keeping with a much older tech story, the story that technology should save us time, effort, and labour, and make our lives increasingly more convenient. Funnily enough, most technological inventions have not delivered on that promise. On the contrary: We work as long as we always have, or even longer; our leisure time has not markedly increased, whilst its regenerative quality has substantially declined; and new technologies such as email and social media have actually generated more work, eaten more of our time, and introduced a whole new set of psycho-social pressures into our lives.
Progress is often framed as increased productivity, increased speed, reduced costs, and greater efficiency. But this definition raises a fundamental question: What happens when a system optimised for efficiency no longer needs us? Can we programme ourselves as a species out of the equation?
This scenario is known as the “full replacement economy” – a world in which human contribution becomes increasingly marginal. Others speak, more starkly, of an “anti-human future” – one that is not necessarily hostile, but indifferent to human needs, limits, and values.
Collectively, we may all already be what Tristan Harris calls “coffin builders” – designing, using, and constantly strengthening systems that could render us obsolete, driven by our own unquestioned progress stories even as we write ourselves out of the future. Or is this all more than a bit death-wishy?
Efficiency Über Alles
Technological development is of course not inherently bad, but technological development at the price of long-term human thriving is – and so are unethical tech design choices that restructure our attention and emotions and diminish our human dignity and inherent value. I am above all struck by the great sadness of the current scenario: humans building anti-human systems because they value efficiency more than themselves.
Let me be clear: I am not anti-progress. The progress story remains deeply important. Without hope, and without the belief that improvement is possible, we cannot function or develop, individually or collectively. But we need clarity on what we consider progress, and why. We need clarity on what is being gained, and what is being lost, and whether we are happy with these trade-offs. Are ever-increasing efficiency and comfort really our highest goods as humans? And, if our answer is yes, what if we can no longer compete, and begin to view ourselves or some parts of the population as resource-draining, dated, slow and ineffective entities that should go extinct?
Of course, the real question, as always, is, who benefits from the current trajectory? In a recent interview, one of these beneficiaries, Peter Thiel, the co-founder of Palantir, was asked whether he wants the human race to endure, and he hesitated. He hesitated for a really, really long time.