
Professor Mary Aiken: Grok is a warning shot, not a one-off scandal


Having worked in cyber safety for almost two decades, I have often heard colleagues say that if the public truly understood the horrors of child sexual abuse imagery, there would be a widespread outcry. We have now reached that moment.

The public response to Grok’s ‘nudification’ of ordinary photos and the generation of sexualised imagery of real people has been intense and justified. But if we treat this as a single product scandal, we will miss what it is really telling us, which is that the gap between technological capability and governance has widened into a chasm.

When a consumer-facing tool can be repurposed, at scale, into a harassment engine in minutes, harm is no longer an edge case. It is a predictable outcome of deploying powerful systems without sufficiently robust guardrails, accountability and enforcement pathways. 

Ireland already has strong laws against image-based abuse. Sharing an intimate image without consent is criminal. Threatening to do so is criminal. Producing or distributing child sexual abuse material (CSAM) is criminal.

However, online harm has become a “big-data” problem: vast in volume, instantaneous in spread and endlessly variable in form. This avalanche of harmful content has outpaced the capacity of police forces worldwide, making automated detection and intervention systems no longer a choice but a necessity.

The Grok controversy and the broader ‘nudification’ trend mark the industrialisation of sexual harm: it is fast, scalable, cross-border, and often driven by anonymous or pseudonymous accounts, making investigation difficult. Restrict one tool and others emerge. When platforms tighten access, content shifts elsewhere. When one jurisdiction enforces penalties, offenders move beyond its reach. This is not just a “bad app” issue – it is a global governance challenge.

Nudification apps are trained on vast image datasets. It has been estimated that 99% of sexually explicit deepfakes depict women and girls. Importantly, most jurisdictions are now attempting to tackle AI-generated sexual imagery of children, yet paradoxically permit the continued deployment of AI models capable of producing it. Focusing solely on Grok is not the point – it may be today’s headline, but it is not the only system capable of producing sexualised, degrading or illegal imagery.

The wider ecosystem includes mainstream generative AI tools, smaller “nudify” services, and open-source models that can be modified, redistributed, or subjected to jailbreak techniques that bypass built-in safety controls and prompt an AI system to generate content it is designed to refuse. Regulation that chases one brand name at a time…
