The world’s AI and data infrastructure is about to get the mother of all stress tests — and the stakes couldn’t be higher. We’re in a massive election year, which means this year will also bring countless efforts to use AI tools for political ends — to educate and empower voters, yes, but also to warp political discourse, damage democracies, and skew election results.

We’re already seeing the warning signs. In New Hampshire, primary voters received AI-generated robocalls in which a cloned voice of President Joe Biden urged them to stay home on primary day. A super PAC was found to be using AI to create a deepfake version of Democratic challenger Dean Phillips. Elsewhere in the world, politicians from London to Lahore are now the subject of deepfaked audio and video, and experts say the concept of truth itself is being destabilized by ubiquitous GenAI technologies.

In response, House Democrats are calling for new rules to curb AI-generated robocalls. Taiwanese lawmakers, fearing Chinese interference in their elections, passed new rules banning people from sharing deepfaked content. In India, lawmakers have threatened to revoke safe-harbor rules for social networks that allow the circulation of deepfaked content.

We’re also seeing tech companies—including OpenAI, Meta and Google—unveil their plans for minimizing the abuse of their AI tools.

That’s all well and good, but it won’t be enough to keep our elections safe unless the countermeasures now being discussed are paired with a serious effort to protect consumers’ data rights. Flagging deepfakes or governing their circulation will fall flat unless AI regulation also gives people real agency: the power to determine how their data is used and what kinds of data-driven digital content they experience.

Nutritional labels are not enough

To see why that matters, consider some of the steps OpenAI is taking to keep our elections safe. First, the company is working with other AI firms to create digital credentials for AI-generated images, helping to track the materials used to generate them. Next, it’s testing "provenance classifier" technologies that can spot AI-generated images. Finally, it’s working to bring more current news content into ChatGPT’s source data, helping to ensure that generated content doesn’t lag behind the news cycle.

Again, that sounds great—but it also sounds insufficient. Much of what OpenAI is proposing boils down to the implementation of nutritional labels for AI-generated content, allowing people to understand what they’re viewing and see how it was created. That isn’t bad, but telling people what went into the content they’re being shown is just one piece of a bigger puzzle.

Look at it this way: a nutritional label tells you how the sausage got made, but that doesn’t do the pig much good. When it comes to GenAI, we aren’t just consumers: we’re part of the product. That means passively acknowledging how data was used isn’t enough—we need real agency over what’s being done with our data.

That’s important even when we’re talking about a deepfaked politician, because we’re moving into a world where micro-targeting will be used in conjunction with AI technologies to create and serve up hyper-persuasive content that’s tailored to individuals and precisely calibrated for maximum impact. According to one European government report, the fusion of data-driven micro-targeting and deepfake technologies will become “a weapon of mass division,” vastly amplifying the impact of GenAI technologies on global democracy.

A question of trust

So what’s the solution? Well, we need transparency and reliable tagging or watermarking to show when content is AI-generated. But as we’ve learned in the data privacy space, consumers rarely pay much attention to disclosures and nutritional labels. We can’t expect people to read the small print to figure out whether content was faked by an AI model, or whether their own data is being used to turbo-charge such efforts.

Instead, we need to take steps to put consumers in control of how they interact with AI models and AI-generated content. A user might want to say: never use my likeness in an AI image generator, and never use my data to tailor the GenAI content that I’m shown. They shouldn’t have to articulate those preferences every time they view an image or a video; instead, they should be able to state them once and have them reflected across their entire digital universe.

Ultimately, this boils down to trust. Unless we give people real control and agency over how their data is used and where it flows, there’s simply no reason for them to trust either AI technologies or the organizations that create and operate them. That’s especially corrosive in the political arena, where the idea that truth itself is unknowable creeps in and quickly leads to cynicism and political disengagement. But it’s part of a broader problem that affects every part of the AI economy and the wider data economy on which it’s built.

Trust is the fragile but powerful product of data systems that proactively align how data is used with how people want it to be used. It is built up when we give people the information and tools to make meaningful decisions about their data—but squandered just as easily if we don’t effectively enforce those decisions.

If we want to protect our democracy—or enable AI to evolve safely in countless other areas of our lives—then we need to start by giving people clarity and real agency over the data they share.

Jonathan Joseph leads solutions at Ketch, a platform for programmatic privacy, governance, and security. He is an expert in privacy legislation.
