Chyung Eun-ju

OpenAI's recent introduction of Sora, a generative AI tool, marks a significant leap into an era where detailed, minute-long videos can materialize from basic text prompts. Sam Altman demonstrated Sora's ability to turn text prompts into video on X, formerly known as Twitter. This cutting-edge technology, however, has raised eyebrows in a world already grappling with manipulated media; advancements like Sora only increase the potential for manipulation. For now, Sora is accessible exclusively to experts tasked with identifying potential issues within the model.

The sheer quality of this tool is remarkable, pushing the boundaries of what we thought was achievable. But in a landscape where fake media often gains viral traction, its potential for influence is undeniable.

Joel Cho

In Slovakia, a designated quiet period before the election, during which media coverage ceased, was meant to let voters reflect independently and make informed decisions without undue influence. But an audio clip, widely suspected to be AI-generated, that appeared to capture a progressive party leader discussing buying votes from the Roma minority emerged and spread rapidly on social media, and the party's candidate ended up losing the election. Elections are shaped by many factors, and it is difficult to measure how much the audio contributed to the candidate's loss, but the potential implications of generative AI for elections are deeply concerning.

In the face of these advancements, ethical and societal concerns take center stage. Elections, an integral part of the democratic process, become susceptible to manipulation facilitated by generative AI tools such as DALL-E and Sora. OpenAI has acknowledged this potential for misuse, as evident in its rules explicitly prohibiting the creation of custom GPTs for political campaigning or lobbying. The ban also extends to chatbots posing as real individuals, a preventive measure against the spread of deceptive information.

However, the effectiveness of these rules depends on strict enforcement and oversight. Given the potency of generative AI, especially its ability to create lifelike videos like those Sora has already produced, there is an urgent need for comprehensive regulations to safeguard the integrity of democratic processes, particularly with the Korean general elections approaching. The upcoming parliamentary elections are even more unpredictable than previous ones amid the current political disarray. Voters have lost faith in both President Yoon Suk Yeol and the opposition Democratic Party of Korea (DPK) for their respective shortcomings: Yoon and his party face backlash for ineffective and authoritarian governance, while the DPK is criticized for its alleged abuse of power.

A couple of days ago, a deepfake video titled "President Yoon's virtual confession of conscience" circulated on social media. In it, Yoon appears to admit his incompetence and to having ruined Korea. In response, the Korea Communications Standards Commission convened an emergency communications review subcommittee to address the issue, warning that such a deepfake could cause social unrest.

Starting with this election, the use of deepfake technology in election campaigns is prohibited during the 90 days before election day, and the National Election Commission has already cracked down on more than 100 cases. Violations can result in imprisonment for up to seven years or a fine ranging from 10 million won ($7,500) to 50 million won. Even if the subject of a deepfake has consented, its use is punishable, regardless of whether its content is factual. The amendment was a response to a fabricated video last year that depicted President Yoon appearing to endorse then Namhae County mayor candidate Park Young-il.

Naver, Kakao and other platforms intend to attach labels to AI-generated content to prevent harm from AI-driven false information ahead of the elections. Kakao plans to introduce invisible watermarking in its generative AI model, Karlo. The watermark will not be visible to ordinary users, but it will make it possible to determine which content Karlo generated.

However, how far people will go with generative AI to sway elections is unknown. Despite regulations, some platforms and messaging services may be difficult to police. Google and Meta, for instance, have policies on AI-generated content, but X does little to enforce its own. Telegram, with its loose content moderation, may be heavily used during the election, and deepfakes could spread through its messaging channels. We will have to see how much synthetic media ends up influencing elections.

Beyond the realm of politics, the implications of such technology extend to the individual level, with potential threats ranging from misinformation to harassment. Sora's ability to generate realistic videos from textual input adds a new dimension to the existing, already deeply problematic, challenges posed by manipulated media, raising concerns about the use of generative AI for malicious purposes.

As most of us already know, deepfakes have been used to harass individuals, and as more advanced AI tools become broadly available, this form of personal violation has proliferated across the internet. So although OpenAI's commitment to preventing misuse is commendable, constant safeguarding and adaptive regulatory frameworks are needed to address these inherent risks and emerging threats.

During a congressional hearing in May 2023 at which OpenAI CEO Sam Altman testified, Sen. Richard Blumenthal showcased the potential risks of AI systems by playing an AI-generated recording of his own voice, prompting skepticism among U.S. politicians about the ability of tech companies to control their powerful AI systems.

Generative AI tools like Sora present both remarkable advancements and profound ethical dilemmas. Such technologies make it difficult to differentiate fact from fiction and pose potential threats to the democratic process. Regulation and monitoring will be essential to mitigate the misuse of AI technology, but fostering digital literacy and awareness of what generative AI can do will be just as important in empowering people to discern truth from manipulation.

Chyung Eun-ju (ejchyung@snu.ac.kr) is a marketing analyst at Career Step. She received a bachelor's degree in business and a master's in marketing, both from Seoul National University. Joel Cho (joelywcho@gmail.com) is a practicing lawyer specializing in IP and digital law.

How Sora could affect politics
10.03.2024

© The Korea Times