
Deep thinking needed on AI, not shallow predictions

22.02.2026

Confident predictions about artificial intelligence dominate public debate – but history suggests forecasting technological futures is a poor guide for policy. What matters more are the conditions that shape how AI is actually used.

Every week brings another confident prediction about artificial intelligence. Technology executives promise transformation. Consulting firms project massive productivity gains. Union leaders warn of job losses. AI researchers debate existential risk.

Each prediction generates headlines, and each contradicts the others.

Policymakers tasked with regulating AI, investing in infrastructure, or preparing the workforce are left to navigate this noise. Which expert should they believe? The honest answer is that nobody knows what AI will do. The better question is whether prediction is even the right approach.

The history of technological forecasting should give us pause. In the 1960s, experts predicted nuclear-powered cars and moon bases by 2000. In the 1990s, the paperless office was just around the corner. More recently, self-driving cars were supposed to be ubiquitous by now. Technologies that were expected to transform society often disappointed, while technologies dismissed as toys became foundational infrastructure.

The pattern repeats because prediction requires knowing things that cannot be known: how technologies will evolve, how organisations will adapt, how regulations will develop, how users will respond. Confident forecasts about AI’s trajectory offer the comforting illusion that the future can be known and planned for. That comfort is false.

There is a different approach. Instead of asking what AI will do, we might ask what determines whether AI creates value or causes harm in any given context. That question can be answered, and the…

© Pearls and Irritations