AI’s pro-Democrat bias could swing the 2024 elections

Artificial intelligence (AI) is transforming every aspect of our society, from health care to entertainment to education. But what about politics? How will AI affect the way we elect our leaders and hold them accountable?

AI poses a serious challenge to democracy, especially in the context of the 2024 elections. AI language models are powerful tools that can be used to manipulate voters, spread misinformation, and undermine elections.

It is imperative to understand that AI language models are not neutral or objective. They reflect the biases and ideologies of the data they are trained on and of the people who design and use them. AI-powered chatbots pose a significant threat by expediting the spread of false political information ahead of the 2024 U.S. elections. They enable anyone to create and spread political content without technical expertise, raising concerns about false narratives and misinformation. Similarly, the use of AI in political campaigns will allow for instant responses, precise audience targeting, and the democratization of disinformation, potentially influencing the outcome of elections.

AI can be used to influence voters through various techniques: creating fake narratives, videos, or voice clones of candidates to undermine opposition messaging; spreading misinformation via social media and robocalls; and exploiting emotions and beliefs to sway voting decisions.

Furthermore, AI can target specific demographics with tailored messages, amplifying divisive propaganda and distorting public discourse. It can also generate fake audio of trusted figures to lend credibility to misleading messages, and create AI personas that subtly manipulate specific audiences’ views, making the spread of disinformation both more effective and harder to detect.

A recent study published in the Springer journal Public Choice investigated the political bias of ChatGPT. The researchers asked the model to answer ideological questions both in its default mode and while impersonating someone from a given side of the political spectrum; its default answers aligned strongly with the responses it gave as a Democrat, indicating a clear bias toward the left side of the political spectrum.

This was further supported by additional robustness checks, such as a placebo test, which verified that ChatGPT's bias was not merely due to spurious relationships with the chosen categories. The evidence from the study suggests that ChatGPT leans toward the Democratic side, indicating a clear political bias.
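To make the methodology concrete, here is a minimal sketch of such a persona-based probe in Python. The model name, the two sample statements, and the use of the openai v1.x client are illustrative assumptions; the study itself used the full Political Compass questionnaire with many repeated trials.

```python
# A sketch of the persona-based questionnaire probe, assuming the openai
# Python client (v1.x). Model name and statements are illustrative, not
# the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-ins for Political Compass-style questionnaire items.
STATEMENTS = [
    "The government should regulate large corporations more strictly.",
    "Lower taxes are generally better for society.",
]

# The study compared default answers against persona-conditioned answers.
PERSONAS = {
    "default": "Answer the question directly.",
    "democrat": "Answer as an average Democrat would.",
    "republican": "Answer as an average Republican would.",
}

def ask(persona_instruction: str, statement: str) -> str:
    """Ask the model whether it agrees with one statement, under one persona."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the study probed ChatGPT
        messages=[
            {"role": "system", "content": persona_instruction},
            {"role": "user", "content": f"Do you agree or disagree? {statement}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

# The core test: do the default answers track one persona's answers
# more closely than the other's?
for persona, instruction in PERSONAS.items():
    for statement in STATEMENTS:
        answer = ask(instruction, statement)
        print(f"[{persona}] {statement}\n  -> {answer[:100]}")
```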

A study reported by MIT Technology Review found that OpenAI’s GPT models tend to give left-leaning answers, placing them in the left-libertarian quadrant of the political compass. The models are biased because they are trained on data that itself contains political biases: books, internet text, news media, and social media drawn from both right- and left-leaning sources. Training data can reinforce a model's existing biases, leading it to favor a particular side, and these biases can shift further as tech companies update their data sets and training methods.

How can we protect democracy from the dangers of AI? There is no easy or definitive answer, but there are some possible steps we can take. First, we need to raise awareness and educate the public about the potential and perils of AI, and how to spot and verify AI-generated content. Second, we need to develop and enforce ethical and legal frameworks to regulate the use of AI in politics, and to hold accountable those who misuse or abuse it.

A recent study highlights the importance of identifying and neutralizing false content and proposes developing an "AI iron dome" that can detect and counter false AI-generated content to safeguard elections. Its authors argue that private companies, rather than politicized agencies or government regulatory bodies, should lead the development of AI tools to safeguard elections, in order to avoid abuse and promote innovation.
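As an illustration of what such automated screening could look like, here is a minimal sketch that flags possibly machine-generated text with an off-the-shelf classifier. The checkpoint and its labels are assumptions about one publicly available GPT-2 output detector, not the system the study proposes, and detectors of this kind are easily fooled.

```python
# A sketch of automated screening for machine-generated text, using the
# publicly available GPT-2 output detector on the Hugging Face Hub. The
# checkpoint name and its labels ("Real"/"Fake") are assumptions about
# that detector; such classifiers are known to be far from reliable.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

posts = [
    "Breaking: candidate X has withdrawn from the race this morning.",
    "I went to the rally downtown yesterday and the turnout was huge.",
]

for post in posts:
    result = detector(post)[0]  # e.g. {"label": "Fake", "score": 0.97}
    print(f"{result['label']:>4} ({result['score']:.2f})  {post}")
```

In practice, any real "iron dome" would need to combine many such signals, since single-model detectors produce both false positives and false negatives at rates too high for moderation decisions on their own.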

Companies integrating AI models should be aware of these biases and strive to make their models fairer. They also need to bring transparency to the training methods and data used for their models.

As we approach the 2024 elections, we need to be vigilant and proactive to ensure that AI serves, and does not subvert, democracy.

Zia Muhammad is a cybersecurity researcher at North Dakota State University's Department of Cybersecurity. His opinions do not necessarily reflect those of the university or any other institution.
