Australia’s national plan says existing laws are enough to regulate AI. This is false hope
Earlier this month, Australia’s long-anticipated National AI Plan was released to a mixed reception.
The plan shifts away from the government’s previously promised mandatory AI safeguards. Instead, it’s positioned as a whole-of-government roadmap for building an “AI-enabled economy”.
The plan has raised alarm bells among experts for its lack of specificity, measurable targets, and clarity.
Globally, incidents of AI harm are growing. From major cyber crime breaches using deepfakes to disinformation campaigns fuelled by generative AI, the lack of accountability is staggering. In Australia, AI-generated child sexual abuse material is rapidly spreading, and existing laws are failing to protect victims.
Without dedicated AI regulation, Australia will leave the most vulnerable at risk of harm. But there are frameworks elsewhere in the world that we can learn from.
The new plan doesn’t mandate a standalone AI Act. Nor does it make concrete recommendations for reforming existing laws. Instead, it establishes an AI Safety Institute and other processes including