Grok fallout: Tech giants must be held accountable for technology-assisted gender-based violence

The new image and video editing feature of xAI’s chatbot, Grok, has been used to generate thousands of non-consensual, sexually explicit images of women and minors since xAI announced the feature on Christmas Eve, promoting it as a way to add Santa Claus to photos.

The growing ease of perpetrating sexual violence with novel technologies reflects the urgent need for tech companies and policymakers to prioritize AI safety and regulation.

I am a PhD candidate in public health. My research focuses largely on the intersection of gender-based violence and health, and I have previously worked on teams that leverage AI as a tool to support survivors of violence. The potential and actual harms of AI on such a wide scale require new regulations to protect the health of whole populations.

Sexually explicit “deepfakes” have been a subject of public concern for some time. In 2018, Reddit threads drew attention to machine learning tools being used to face-swap celebrities like Taylor Swift onto pornographic material.

Read more: Taylor Swift deepfakes: new technologies have long been weaponised against women. The solution involves us all

Other AI-powered “nudifying” programs could be found in niche corners of the internet. Now, this technology is at anyone’s fingertips.

Grok can be accessed through its website and app, or on the social media platform X. Some users have noted that when prompted to create pornographic images, Grok says it’s programmed not to do this,