The AI threat costing Americans $16.6 billion a year

AI is already a weapon of mass fraud.

I was fortunate enough to spend several days last week at the Aspen Institute’s Crosscurrent summit on AI and national security in San Francisco. My first takeaway: I very much recommend being in sunny (at the moment, at least) San Francisco rather than slushy, raw New York in early March. The second took a little longer to form.

The conference was full of former national security officials, cybersecurity executives, and AI leaders, and the conversation mostly went where you’d expect: the Anthropic-Pentagon fight, the role of AI in the Iran conflict, the coming of autonomous weapons. But the panel that stuck with me was about something less dramatic. It was about something almost old-fashioned, now supercharged by AI: scams.

At one point, Todd Hemmen, a deputy assistant director in the FBI’s Cyber Division’s Cyber Capabilities branch, described how North Korean operatives are using AI-generated face overlays to pass remote job interviews at Western tech companies — then working multiple remote positions simultaneously, funneling the salaries and any intelligence back to the regime in Pyongyang. They fabricate résumés with AI, prep for interviews with AI, and use AI to wear the “face of someone who’s not the person behind the camera,” Hemmen told the audience. Some of the most proficient actors are holding down several full-time jobs at once, all under fake identities, all enabled by tools that didn’t exist two years ago.

That detail has been rattling around in my head since, not least because it made me wonder how these industrious operatives manage multiple jobs when I find just one taxing enough. But Hemmen’s story captures something deeper about the moment we find ourselves in. The AI risks getting the most airtime right now are speculative and cinematic — killer robots, AI panopticons. But the AI threat that’s here right now is a foreign agent wearing a synthetic face on a Zoom call, collecting a paycheck from your company. And almost nobody is treating it with the same urgency.

How cybercrime got worse than ever

Cybercrime has been a problem since the days of dial-up, but the scale of what’s happening now is staggering. The FBI reported that the US suffered $16.6 billion in known cybercrime losses in 2024 — up 33 percent in a single year, and more than double the total from three years earlier. Americans over 60 lost nearly $5 billion. And those are just the reported numbers; Alice Marwick, director of research at Data & Society, told the Aspen Institute audience that only about one in five victims ever reports a scam. The real number is unknowable, but it’s much worse.

And now comes generative AI to make all of this faster, cheaper, and more convincing. Phishing emails no longer arrive riddled with typos from supposed Nigerian princes; LLMs can produce fluent, regionally specific language. AI image generators can create entire synthetic identities — dozens of photos of a person who doesn’t exist, complete with vacation shots and designer handbags.

Voice cloning has enabled heists that were science fiction five years ago: In early 2024, a finance worker at the Hong Kong office of UK engineering firm Arup transferred $25 million after a deepfake video call in which the company’s CFO and several colleagues seemed to appear on screen. All of them, it turns out, were fake. CrowdStrike’s 2026 Global Threat Report found that AI-enabled attacks surged 89 percent year-over-year, while the average time from initial breach to being able to spread throughout a network dropped to just 29 minutes. The fastest observed breakout: 27 seconds.

Will AI cyberoffense beat AI cyberdefense?

Why is this problem so comparatively neglected? Partly because we’ve normalized it. Cybercrime has been growing for years, driven by the professionalization of criminal syndicates, cryptocurrency, remote work, and the industrialization of scam compounds in Southeast Asia. (My Vox colleague Josh Keating wrote a great story a couple of years ago on these so-called pig butchering scams.)

We’ve absorbed each year’s record losses as the cost of doing business online. But the curve is steepening: Deloitte projects that generative AI-enabled fraud losses in the US alone could hit $40 billion by 2027. “In the same way that legitimate businesses are integrating automation, so are organized crime,” Marwick said.

That so much of this goes unsaid and unreported adds to the toll. Marwick’s research focuses on romance scams — people targeted during periods of loneliness or transition, slowly bled of their savings by someone they believe loves them. She told the audience that victims often refuse to believe they’re being scammed even when confronted with direct proof. AI makes the emotional manipulation far more persuasive, and no spam filter will protect someone who is willingly sending money.

Can defense keep up? Marwick drew a hopeful comparison to spam, which nearly broke email in the 1990s before a combination of technical fixes, legislation, and social adaptation largely tamed it. Financial institutions are deploying AI to catch AI-enabled fraud. The FBI froze hundreds of millions of dollars in stolen funds last year.

But the consensus at the conference was largely grim. “We’re entering this window of time where the offense is so much more capable than the defense,” said Rob Joyce, former director of cybersecurity at the National Security Agency. Marwick was blunter: “I would say generally I’m pretty pessimistic.”

So am I. As I was writing this story, I received an email from a friend with what appeared to be a Paperless Post invitation. The language in the email looked a little odd, but when I clicked on the invite, it took me to a page that seemed very similar to Paperless Post, down to the logo. Still suspicious, I emailed my friend, asking if this was real. “Yes, it is legit,” he wrote back.

That was enough proof for me, but I got distracted and didn’t click on the next step of the invite. Good thing — a few minutes later, my friend emailed me and others to tell us that, yes, he had been hacked.

A version of this story originally appeared in the Future Perfect newsletter.

© 2026 Vox Media, LLC. All Rights Reserved