Generative AI model shows fake news has a greater influence on elections when released at a steady pace without interruption

It’s not at all clear that disinformation has, to date, swung an election that would otherwise have gone another way. But there is a strong sense that it has had a significant impact, nonetheless.

With AI now being used to create highly believable fake videos and to spread disinformation more efficiently, we are right to be concerned that fake news could change the course of an election in the not-too-distant future.

To assess the threat, and to respond appropriately, we need a better sense of how damaging the problem could be. In the physical or biological sciences, we would test a hypothesis of this kind by repeating an experiment many times.

But this is much harder in the social sciences, because it is often impossible to repeat experiments. If you want to know the impact of a certain strategy on, say, an upcoming election, you cannot re-run the election a million times to compare what happens when the strategy is implemented and when it is not.

You could call this a one-history problem: there is only one history to follow. You cannot rewind the clock to study the effects of counterfactual scenarios.

To overcome this difficulty, a generative model comes in handy, because it can create many histories. A generative model is a mathematical model of the root cause of an observed event, together with a guiding principle that describes how the cause (input) turns into the observed event (output).

By modelling the cause and applying the guiding principle, it can generate many histories, and hence the statistics needed to study different scenarios. This, in ...
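To make the idea concrete, here is a minimal toy sketch in Python of how a generative model can produce many election "histories" and compare a scenario with disinformation to one without. The voter-shift rule, the parameter values and the function names are illustrative assumptions for this sketch, not the model described in the research.

    import numpy as np

    rng = np.random.default_rng(seed=42)

    def simulate_election(n_voters=10_000, fake_news=False,
                          base_support=0.51, shift=0.02):
        # One simulated "history" of a two-candidate race.
        # The cause (input) is each voter's underlying support for
        # candidate A; the guiding principle is a toy rule: exposure
        # to fake news lowers the chance of voting for A by `shift`.
        # All numbers here are illustrative assumptions.
        p = base_support - (shift if fake_news else 0.0)
        votes_for_a = rng.binomial(n_voters, p)
        return votes_for_a > n_voters / 2  # True if A wins this history

    def win_probability(n_histories=100_000, **kwargs):
        # Generate many histories and estimate how often A wins.
        wins = sum(simulate_election(**kwargs) for _ in range(n_histories))
        return wins / n_histories

    print("P(A wins), no fake news:  ", win_probability(fake_news=False))
    print("P(A wins), with fake news:", win_probability(fake_news=True))

Running many histories under each assumption gives the kind of comparative statistics a single real election can never provide; a fuller model would, in the same spirit, replace the single shift parameter with a description of how the fake news is released over time.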

© The Conversation