Geist in the machine
As the 18th-century war between mechanism and romanticism returns, we face a new question: can we build artificial souls?
by Peter Wolfendale

Photo by Christopher Furlong/Getty Images

Peter Wolfendale is an independent philosopher. He is the author of Object-Oriented Philosophy: The Noumenon’s New Clothes (2019) and The Revenge of Reason (2024). Normally based in the Northeast of England, he is currently living in Yokohama, Japan.

Edited by Cameron Allan McKean
What is unique about human beings? What makes us especially successful, interesting or valuable in comparison with other creatures? This thing – whatever it is – has gone by many names over the centuries: the human condition, the human spirit or, more classically, the soul. Regardless of the name, there have always been those keen to explain away this uniqueness, perhaps arguing that we’re merely one species of animal among many. And on the other side, there have always been those eager to render it ineffable, often claiming it to be a spark of the divine.
The conflict between these extremes is taking a new form as glimmers of a soul appear in the latest wave of artificial intelligence – large language models (LLMs) like ChatGPT, Claude and Gemini. For every human ability these systems learn to mimic, we can find someone claiming they’re basically indistinguishable from us, and someone else arguing they’ll never really be like us. An exchange between the OpenAI CEO Sam Altman and the linguist Emily Bender is emblematic of this dynamic, with Bender publishing a paper arguing that LLMs are nothing but ‘stochastic parrots’, spewing predictable but meaningless words, and Altman Tweeting: ‘i am a stochastic parrot, and so r u’.
This isn’t the first machine culture war. In the 18th and 19th centuries, the Industrial Revolution reconfigured society, turning more and more people into cogs in the factory system. On one side were those who saw society itself as a machine to be rationalised. Thinkers such as Jeremy Bentham and Auguste Comte came to believe that how we live could be optimised and governed according to calculable laws. Human behaviour was something to be technically measured and managed. On the other side were those who championed the Romantic virtues of subjective feeling, individual genius and organic nature against the ascendancy of mechanism. Figures such as Samuel Taylor Coleridge and Karl Wilhelm Friedrich Schlegel argued that this clockwork model of society endangered the very qualities that make us human.
A stipple engraving representing mechanical philosophy, or 18th-century physics (1816) by John Chapman. Courtesy the Wellcome Collection
Romanticism eventually gave way to modernism, and then postmodernism, but its influence lingered on throughout the 20th century. Even after the Information Revolution transformed society through automation and rationalisation, the bastions of Romanticism – artistic inspiration and scientific insight – remained largely the province of humanity. Now in the early 21st century, AI is reigniting the conflict, and new strains of rationalism and romanticism are fighting it out on disparate fronts, debating the destiny of science, art and politics. This is a war over whether technology will merely optimise calculations or eliminate a quintessentially human element such calculations can’t capture. But beneath these debates, the question still lurks: what makes us so special? And can it be computed?
It’s easy to lose track of this question in all the noise and fury, not least because it’s hard to formulate precisely. The aim of this essay is to clarify the situation in three stages. We’ll begin by separating the dimensions of human uniqueness that philosophers and scientists have traditionally focused upon (intelligence, consciousness and personhood). But to make real headway, we’ll need to survey contemporary debates about the domains in which these key terms are operative (epistemology, aesthetics and ethics). Grappling with these controversies will then reveal the corresponding capacities combined in anything worth calling a soul (wisdom, creativity and autonomy).
Ultimately, I think we should take inspiration from Immanuel Kant and G W F Hegel. They claim it is our freedom that makes us unique. But it is only by analysing freedom’s component parts that we might understand and thereby recreate it – constructing spiritual machines that, far from replacing us, might join us in the pursuit of truth, beauty and right. Can we build artificial souls?
We need to begin at the beginning. When Abrahamic theologians wanted to understand the soul, they turned to Greek philosophy. Some 2,400 years ago, Plato taught that the soul is immortal and separable from the body in virtue of its capacity to reason, while Aristotle taught that the soul is what animates the body, and so plants and non-rational animals must also have souls. When early modern philosophers began comparing nature to the machinery reshaping their societies, these Greek ideas inflected their debates. Though there were some thinkers, such as Thomas Hobbes, who claimed that human beings are nothing but machines, others, such as René Descartes, claimed that, while animal bodies are just elaborate clockwork, we humans must possess a separable mind to represent the world. The idea that thought – reasoning and representing – distinguishes humans from other creatures persisted well beyond the philosophical and religious concerns that motivated it.
Gottfried Wilhelm Leibniz’s calculating machine featured in Miscellanea Berolinensia ad incrementum scientiarum (1710). Courtesy Wikimedia
During the 17th and 18th centuries, Gottfried Wilhelm Leibniz developed these ideas in two very different directions. On the one hand, he continued to argue that the mind couldn’t possibly be a machine, elucidating this with an analogy now known as Leibniz’s Mill. If a mind were really a machine, then it could be scaled up like a mill, so that we might walk inside and see its whirring components. However, nowhere within would we find the gestalt experience essential to human thought – an echo of Aristotle’s vision of the soul as that which makes an organism more than the sum of its parts. On the other hand, Leibniz also dreamed that reason could be mechanised, using something he called the ‘calculus ratiocinator’: a universal framework in which every dispute between competing intellectual positions could be resolved by means of simple calculation. He even designed one of the first mechanical calculators. This split prefigured the later opposition between Romanticism and rationalism.
If a machine can pretend to be a mind, then it simply is a mind
Leibniz’s dream of reducing reason to calculation peaked in the 1920s when the German mathematician David Hilbert made an ambitious attempt to formalise mathematics in a way that might yield an algorithm for deciding the truth of arbitrary mathematical statements. This is known as ‘Hilbert’s programme’. Within a few years, the programme – and Leibniz’s dream – were crushed, first by Kurt Gödel’s incompleteness theorems, published in 1931, and then by Alan Turing’s proof of undecidability in 1936. However, crushing Leibniz’s dream did not stop or even slow the mechanisation of thought. Instead, Gödel and Turing uncovered the foundations of computation. By understanding what was impossible, they had begun to articulate not just what was possible, but also how to build it.
In the process, a new problem emerged: could we build a mind?
Turing argued that the question of whether a machine can ‘think’ is too ill-defined to answer directly. Instead, he asked whether a machine can behave in a manner that’s indistinguishable from a human under certain conditions (usually, a game in which it converses via text). ‘Turing tests’ dissolve the distinction between appearance and reality: if a machine can pretend to be a mind, then it simply is a mind.
So, if the question of whether a machine can think is too vague, and whether a machine can behave like a human is too shallow, how should we parse the question of whether a machine can be like us? In the decades since Turing proposed his test, philosophers and scientists have focused on three dimensions of human-likeness: intelligence, consciousness and personhood. These appear in academic papers and popular documentaries, shaping public anxieties and guiding policy. But in many cases, the terms are conflated, reducing debates about whether human-like machines are possible to talking at cross purposes. Even influential critics of AI, such as John Searle and Hubert Dreyfus, are not immune to this error. So, what are we really talking about when we ask what makes humans unique?
Intelligence dominates the discussion of human-likeness. The AI paradigm developed in the 1950s and ’60s – which came to be known as ‘good old-fashioned AI’ or GOFAI – followed Plato and Descartes in viewing intelligence as the capacity to acquire symbolic knowledge about the world (eg, ‘water boils at 100°C’) and deduce solutions to practical problems (eg, how to boil an egg). By programming systems to follow explicit rules, such as calculating dosages or planning routes, researchers were able to make machines perform some human tasks. However, tasks that require implicit competence, such as making coffee or driving a car, are much more difficult. An algorithm that easily directs a robot arm to make a cup of coffee in a controlled lab will break when moved into an average kitchen, where different equipment and a host of other disruptions intervene. There are too many potentially relevant factors to consider, and their relationships become exponentially harder to encode explicitly.
To make sense of all this, we can turn to one of the key thinkers who defined much of the debate about machine intelligence during the 20th century: the American philosopher Hubert Dreyfus. He argues that the comparative robustness of human intelligence lies in our ability to navigate the relationships between factors and determine what matters in any practical situation. He claims that this wouldn’t be possible were it not for our bodies, which shape the range of actions we can perform, and our needs, which unify our various goals and projects into a structured framework. Dreyfus argues that, without bodies and needs, machines will never match us. The current AI paradigm (often referred to as ‘machine learning’ or ML) aims to prove him wrong.
From left: the philosopher Hubert Dreyfus with Robert Purdy, a student, photographed by John Haugeland outside Dreyfus’s home in Berkeley, California, 24 August 1976. Courtesy Wikipedia
Under this new paradigm, intelligence is defined simply as the capacity to solve problems. Current AI systems are built to find implicit rules using whatever non-symbolic representations work. This is the neat trick performed by deep neural networks (DNNs), which, given sufficient amounts of raw data and computing power, can be trained to do things we know how to do but can’t codify (eg, facial recognition, spam filtering or strategic insight). LLMs are the pinnacle of this paradigm, built with trillions of parameters and trained on vast amounts of data with enormous computing power. They are now capable of performing a range of language-based tasks, including casual conversation, summarisation and responses to domain-specific questions (with varying reliability). But the secret to their success is simply predicting the next most likely ‘token’ in a sequence. Many worry this process is nothing like human intelligence, even if its products are similar.
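The idea of next-token prediction can be made concrete with a deliberately crude sketch: a bigram model that counts which word follows which in a tiny corpus, then ‘predicts’ the most frequent successor. This is an illustrative toy, not how any real LLM works (LLMs learn continuous representations over trillions of parameters rather than raw counts), and the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# A toy 'training corpus' for illustration only.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each token follows each other token (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next token after `token`, or None if unseen."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' follows 'the' twice, 'mat' once -> 'cat'
```

An LLM does something analogous in spirit: given the sequence so far, it outputs a probability distribution over possible next tokens, only with vastly richer statistics than these simple counts.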
To treat a machine as a person would mean granting it moral worth and moral responsibility
However, the real controversy lies in whether such systems can generalise beyond the range of tasks they’ve been trained to perform. There’s now a specific term for this: artificial general intelligence (AGI). The meaning of this term has blurred in the two decades since it entered AI discourse. Even though LLMs remain less capable than most humans, some commentators have claimed that these neural networks are already AGIs, simply because they can do things they haven’t been explicitly trained to do.
