We’re talking about AI all wrong. Here’s how we can fix the narrative
Artificial Intelligence (AI) isn’t just made up of data, chips and code – it’s also the product of the metaphors and narratives we use to talk about it. The way we represent this technology shapes how the public imagination understands it and, by extension, how people design it, how they use it, and what impact it has on society at large.
Worryingly, many studies show that the predominant representations of AI – anthropomorphic “assistants”, artificial brains, and the omnipresent humanoid robot – have little basis in reality. These images may appeal to businesses and journalists, but they are rooted in myths that distort the nature, capabilities and limitations of current AI models.
If we represent AI in misleading ways, we will struggle to truly understand it. And if we don’t understand it, how can we ever hope to use it, regulate it, and make it work in ways that serve our shared interests?
Distorted representations of AI are part of a common misconception that the academic Langdon Winner dubbed “autonomous technology” back in 1977: the idea that machines have taken on a life of their own and act on society independently, in a purposeful and often destructive way.
AI gives us the perfect incarnation of this idea, as the narratives surrounding it flirt with the myth of intelligent, autonomous creation – as well as the punishment for assuming this divine function. It is an ancient trope, one that has given us stories ranging from the myth of Prometheus to Frankenstein, Terminator and Ex Machina.
This myth is already hinted at in the ambitious term “artificial intelligence”…
