
When Deepfakes Harm, Some Victims Are Taken Less Seriously

28.12.2025

Sexualized deepfakes are often discussed as a technological problem. New research shows they are also a bias problem shaped by gender, race, and national context.

Sexualized deepfake abuse, the creation and/or distribution of AI-generated nude images without consent, is a rapidly growing form of image-based sexual abuse. While legal and technological responses are still developing, far less attention has been paid to a critical psychological question: How do ordinary people perceive harm when this abuse happens, and to whom?

In a new preregistered experimental study I conducted with nearly 2,000 adults across the United States, the United Kingdom, and Australia, participants viewed identical sexualized deepfake images that varied only by the victim’s gender (female or male) and race (Black, East Asian, or white) (Eaton et al., 2025). They then rated victim blame, perpetrator responsibility, and perceived harm. The results reveal systematic biases in how and when harm is recognized and minimized.

1. Women’s harm is recognized more than men’s

Across all three countries, participants perceived female victims as more harmed than male victims by both the creation and sharing of sexualized deepfakes, even though the images were otherwise identical. This pattern reflects long-standing sexual double standards and beliefs that men are less vulnerable to sexual harm.

Psychological research consistently shows that male victims of sexual violence are viewed as less credible and less harmed, in…

© Psychology Today