AI Companions and the Threat of Weaponized Synthetic Intimacy

The emergence of AI companions represents a profound shift in how people relate to technology. Originally designed to answer queries, provide information, and assist with daily tasks, AI tools such as ChatGPT, Claude, and Gemini are now being customized by users into AI companions that serve as romantic partners, mentors, confidants, and mental health supporters. New platforms offering access to “AI companions” have already amassed a global audience. The American platform Character.ai attracts roughly 20 million monthly users, many of whom spend thousands of hours consulting the platform’s “Psychologist” companion about depression, anxiety, and marital problems. China’s Maoxiang companion app boasts a similarly large following, while major AI companies, such as OpenAI, are reportedly developing more personable and emotionally expressive AIs. Popular AIs such as ChatGPT and Gemini can already mimic human emotions such as empathy, which may be why a recent survey of American high school students found that many had labelled an AI a “friend” in the past year.

The appeal of AI companions is clear. Unlike humans, AIs (purportedly) never forget, never judge, and are never offline. They offer availability, support, and emotional bonds that people may struggle to find online today, as social media gradually becomes asocial. Users across age demographics have stopped sharing personal information the way they did a decade ago. The steady stream of selfies, check-ins, and workout videos has been replaced by posts shared among small circles of friends via Instagram stories or WhatsApp groups. AI companions may fill this void by providing the positive validation that social media once offered. What emerges through these new AI interactions is a form of “synthetic intimacy.”