X is facilitating the spread of nonconsensual, AI-generated sexual images. The law – and society – must catch up
X (formerly Twitter) has become a site for the rapid spread of nonconsensual, artificial intelligence-generated sexual images (also known as “deepfakes”).
Using the platform’s own built-in generative AI chatbot, Grok, users can edit images they upload through simple voice or text prompts.
Various media outlets have reported that users are creating sexualised images of identifiable individuals with Grok, primarily of women but also of children. These images are openly visible to users on X.
Users are modifying existing photos to depict individuals as unclothed or in degrading sexual scenarios, often in direct response to their posts on the platform.
Reports say the platform is currently generating one nonconsensual sexualised deepfake image a minute. These images are being shared in an attempt to harass, demean or silence individuals.
A former partner of X owner Elon Musk, Ashley St Clair, said she felt “horrified and violated” after Grok was used to create fake sexualised images of her, including images of her as a child.
Here’s where the law stands on the creation and sharing of these images – and what needs to be done.
Creating or sharing nonconsensual, AI-generated sexualised images is a form of image-based sexual abuse.
In Australia, sharing (or threatening to share) nonconsensual sexualised images of adults, including AI-generated images, is a criminal offence under federal law and most state and territory laws.
