A practical answer to Australia’s AI ethics vacuum
As Australia shies away from meaningful AI regulation, a new framework offers a practical way to embed human moral responsibility at the centre of AI use.
We have a solution to Australia’s AI ethics crisis. It’s tested, validated, and ready for immediate deployment. And it’s completely free.
Steve Davies, Moral Engagement Researcher & AI Ethics Architect, has accomplished something unprecedented in AI ethics: seven independent AI systems (ChatGPT, Claude, Perplexity, Grok, DeepSeek, Gemini, and Le Chat) have reached unanimous consensus on a framework for responsible human-AI moral collaboration. His MEET (Moral Engagement Education and Transformation) Package is a sixty-page framework designed for institutions, media organisations, universities, civil society, and the public. The tools are rigorously tested and accessible.
This matters because while the government retreats from AI regulation, Australian institutions, businesses, and citizens are navigating AI adoption with no ethical guidance. MEET provides exactly what’s missing: a validated framework that puts human moral agency at the centre while harnessing AI’s capacity to detect patterns of moral disengagement.
The framework is ready. The validation is complete. The only question is whether our institutions have the courage to deploy it.
For nearly three years, Steve and I have been training major AI models on Professor Albert Bandura’s research on moral disengagement. Steve’s work goes beyond traditional AI ethics approaches (where human theorists critique AI behaviour from the outside) to demonstrate something entirely new: AI systems can apply validated moral frameworks, analyse their own institutional context, articulate clear boundaries around responsibility, …