Governments are rushing to embrace AI. They should think twice

Governments across the world want AI to do more of the heavy lifting in public services. The goal is to make things much more efficient, with algorithms quietly handling a country’s day-to-day administration.

For example, AI might help tackle tax fraud by working out how to target those most likely to be offending. Or it might help public health services screen for various cancers, triaging cases at scale and flagging those deemed most at risk.

But what happens when such a triaging system makes a mistake? Or when government agencies deploy AI to identify fraud and the model simply gets it wrong?

There is already sobering evidence that AI errors can have devastating consequences. In the Netherlands, for example, flawed algorithmic assessments of tax fraud were acted on in ways that tore families apart and separated children from their parents.

In that case, a risk‑scoring system was used to identify families it deemed likely to be committing benefits fraud. It then fed these assessments into automated operations that ordered repayments, driving innocent households into financial ruin.

So states should be extremely wary of substituting AI for human judgement. The assumption that machines will almost always get it right is simply not true. People’s lives cannot be easily reduced to data points for algorithms to draw conclusions from.

And when things do…

© The Conversation