The new National AI Plan gets the balance wrong
Australia's new National AI Plan arrives at an important moment.
Governments everywhere are introducing policies meant to encourage innovation while protecting the public from fast-moving and unpredictable risks.
The federal government promotes its AI plan as a national framework that will position the country for long-term prosperity. In practice, however, the plan places far too much confidence in uncertain economic forecasts and offers far too little certainty about safeguards.
Over the past two years, the government has repeatedly expressed its intention to introduce mandatory requirements for AI systems that could affect people's rights or wellbeing.
These requirements would have applied in areas such as recruitment, healthcare, policing, financial services and education. None of these commitments appears in the final plan. Instead, its centrepiece is an AI Safety Institute that will study risks and advise existing regulators, but will not have the authority to set or enforce rules that apply across sectors.
The plan assumes that current legal frameworks can be used to respond to harms as they occur. It encourages regulators to interpret their existing powers in order to address AI-related problems.
This approach might work for some well-known risks, but it leaves the public exposed where the law is silent or ambiguous.
Consider these examples: while the plan encourages transparency for AI systems, it does not require organisations to inform people when they are interacting with an automated agent rather than a human. This omission creates serious risks.
People may take guidance, reassurance, or financial advice from entities they believe are human, which erodes informed consent and increases the chance of manipulation. It also…