We said ‘never again’ after Robodebt. So why are we making the same mistakes?
Few government failures have left scars as deep as Robodebt.
An automated debt-raising system, rolled out with breathtaking confidence and almost no safeguards, wrongly accused hundreds of thousands of Australians of owing money to the Commonwealth.
People lost their savings, their wellbeing and, in some cases, their lives.
We told ourselves that this could never happen again, but we have not put anything in place to prevent the same mistakes being repeated.
New automated decision-making tools are being rolled out by the Government in aged care, the NDIS and beyond.
Aged care support packages are now calculated by an automated program. A human assesses the participant and feeds the data into the program, but they cannot override or change the program’s decision.
The NDIA is also looking to roll out a similar system to calculate support packages, without the ability for human decision-makers to override the automated decision.
In fact, even if you seek a review of a decision, within the department or at the Administrative Review Tribunal, the reviewer can only order another assessment and have the system make the decision again. A human does not make the final decision.
Experts are already warning that these programs are producing problematic and inaccurate outcomes, which is deeply concerning when these automated systems are determining life outcomes for the most vulnerable.
More than two years after the Robodebt Royal Commission recommended “legislative reform to introduce a consistent legal framework in which automation in government services can operate” and “establishment of a body to monitor and audit automated decision-making”, nothing has changed. And we are once again asking vulnerable Australians to trust systems they cannot see or understand.
None of this is an argument against automated decision-making.
Done well, automation can help government make faster and more consistent decisions. It can reduce administrative burden, cut processing times and free public servants to focus on complex matters that need human judgement. But automation without safeguards risks a repeat of Robodebt.
That’s why we need a clear, legislated framework governing how automated decision-making is used across government.
Working with experts and the community, I am proposing a framework to ensure automation delivers decisions that are both fairer and faster. The framework has three core pillars.
First, transparency
People deserve to know when an automated program has been used to make a decision about them.
They deserve a meaningful explanation of how that automated program makes its decisions generally and how it made a specific decision about their rights.
Transparency doesn’t only benefit those affected by the decision – governments have a lot to gain by being transparent.
Understanding the rationale for a decision is a necessary pre-condition for accepting that decision.
Transparency leads to trust and trust is essential if the Government wants to unlock efficiency gains from further use of automated decision-making.
Second, decision-level controls
Not every automated decision carries the same risk. Not many people have an issue with the automatic calculation and processing of their Medicare rebate after they visit the doctor.
But Australians are concerned to hear that the level of support provided to an elderly parent will be calculated by an automated program.
Generally, Australians want human services to involve some humanity.
A legislated framework should require risk assessments before automated systems are deployed and stronger safeguards for decisions that are complex and impactful.
For those decisions, a human must be accountable and must have the power to review and override an automated outcome if needed.
Third, review and oversight
People must have the right to a timely review of high-risk automated decisions.
And to ensure government departments and agencies comply with these new rules for automated decision-making, there must be properly resourced and empowered independent oversight.
Robodebt was unlawful and there was not enough oversight to identify and fix the problem.
This proposed framework would not slow government down. On the contrary, it would build public trust in the process, giving departments confidence to use automation.
Most importantly, it would ensure that when governments automate decisions, they do so in a way that is lawful and fair.
Most automated decision-making is much more basic than artificial intelligence, but the same rules should apply to both.
As budgetary pressures drive a quest for greater efficiency, there will be increasing opportunities to use artificial intelligence in government services.
We must put the right rules in place for all automated decision-making before these opportunities can be taken up.
The Federal Government has made a start: it undertook a community consultation on automated decision-making, which concluded more than a year ago. Now is the time to put a framework in place.
After Robodebt, we promised Australians it would never happen again. Without safeguards for automated decision-making in government, we risk breaking that promise.
Kate Chaney is the independent MP for the Western Australian federal seat of Curtin.