How AI Hijacked the Venezuela Story
There’s a familiar story about AI disinformation that goes something like this: With the arrival of technology that can easily generate plausible images and videos of people and events, the public, unable to reliably tell real from fake media, is suddenly at far greater risk of being misled and disinformed. In the late 2010s, when face-swapping tools started to go mainstream, this was a common prediction about “deepfakes,” alongside a proliferation of nonconsensual nude images and highly targeted fraud. “Imagine a fake Senator Elizabeth Warren, virtually indistinguishable from the real thing, getting into a fistfight in a doctored video,” wrote the New York Times in 2019. “Or a fake President Trump doing the same. The technology capable of that trickery is edging closer to reality.”
Creating deepfakes with modern AI tools is now so trivial that the term feels like an anachronistic overdescription. Many of the obvious fears of generate-anything tech have already come true. Regular people — including children — are being stripped and re-dressed in AI-generated images and videos, a problem that has trickled from celebrities and public figures to the general public courtesy of LLM-based “nudification” tools as well as, recently, X’s in-house chatbot, Grok. Targeted and tailored fraud and identity theft are indeed skyrocketing, with scammers now able to mimic the voices and even faces of trusted parties quickly and at low cost.
The story of AI disinformation, though — an understandable if revealing fixation of the mainstream media beyond the LLM boom — turned out to be a little bit fuzzier. It’s everywhere, of course: We no longer need to…