
Without effective regulation of AI, society is facing a head-on collision with a driverless car

10.03.2026

A self-driving vehicle ploughs into an oncoming car, combusting the occupants and leaving those who survive battered and bruised and staring into their devices wondering who is to blame.

That’s the jumping-off point for Bruce Holsinger’s tech-lit bestseller Culpability, an exploration of agency and responsibility in the era of AI through the eyes of a lawyer, an ethicist and their screen-dependent offspring.

It’s also a broader description of our current moment as this self-propelling technology accelerates exponentially before it has been fitted with brakes, seatbelts, speed limits or a working GPS.

Working back from the crash, Holsinger skilfully weaves together the concurrent lines of causation: those who design the tech, those who deploy it, those who use it and, most profoundly, the spaces that overlap.

“Culpability” lies in these grey areas of legal and moral accountability where we are still thinking through formal and ethical rules of engagement, which themselves are strapped to the bonnet of this out-of-control vehicle.

Right now, there is justified focus on the responsibilities of those building the large language models and taking them to market, even as they struggle to explain how the models work or how they can be deployed safely.

When not dropping bombs on Iranian schoolgirls, the White House has been at war with its own broligarchy, demanding the right to use AI models to surveil its citizens and fire autonomous weapons. Spoiler: Anthropic pushed back; OpenAI bent over.

Closer to home, our policymakers are struggling to come up with a coherent response to how these........

© The Guardian