How AI-generated sexual images cause real harm, even though we know they are ‘fake’
Many women have experienced severe distress as Grok, the AI chatbot on social media site X, removed clothing from their images to show them in bikinis, in sexual positions or covered in blood and bruises. Grok, like other AI tools, has also reportedly been used to generate child sexual abuse material.
In response, the UK government has announced it will bring forward the implementation of a law, passed in June 2025, banning the creation of non-consensual AI-generated intimate images. Following bans in Malaysia and Indonesia, Grok has now been updated so that it no longer creates sexualised images of real people in places where doing so is illegal, which will soon include the UK.
X’s owner, Elon Musk, has claimed the UK government wants “any excuse for censorship”. The media regulator, Ofcom, is also conducting an investigation into whether X’s activities broke UK law.
Some X users have minimised the harm these “undressed” and “nudified” images cause, describing them as “fake”, “fictional”, “very realistic art at most” and “no more real than a Tom & Jerry cartoon”.
You might think that AI-generated and edited images cause harm only through deception: fake images mislead us about real events. But how can images that everyone knows aren’t real cause harm?
The sexualised content of undressed images is not real, even if they are based on genuine photos. But these images are highly........