Eugene Lee

In my writing, I cover a diverse range of topics. At times, I focus on pressing events; at other times, I opt for a broader perspective. This piece takes the latter approach. I am currently engaged in a project on artificial intelligence, teaching a course on the social issues arising from its use. The more I explore the topic, the more one concern intensifies in my mind: how AI is governed. Specifically, my thoughts center on the regulations governing AI deployment and application.

It may be a bit technical, but a short explanation of what that actually is seems necessary. Generative AI is a new generation of AI systems built on large language models (LLMs). LLMs are pre-trained language models that can generate human-like responses in text, and related generative systems now produce images, sound and even video. ChatGPT, Gemini, Claude and many others are already widely known to the public, and they are drastically changing the rules of the game for many.

We, as humans, have been through a number of technological revolutions since the invention of the wheel. This one, however, is different in many respects, and what really stands out is AI creativity, which in some tasks appears to have already outpaced humans. It is only a matter of time before we give AI autonomy. Some fear we are about to open a “Pandora’s box” that will end human civilization. I personally think we are still far from that, but we do need to worry, because AI is nothing but our reflection, with all our flaws and shortcomings. AI can harm people, and on a smaller scale it already has, just as people harm each other.

The worry is neither speculation nor exaggeration. Some have voiced concerns openly and demanded more control over the use of AI. The strikes by scriptwriters and other Hollywood entertainment industry workers last summer, for example, were precisely about that.

That was just the beginning. Discussions over the limits of AI use are spreading even further, reaching artisans, plasterers, masons, electricians and even plumbers. My hunch is that we will see more such disruptions, on different scales, across many industries in many countries.

One possibility is the entertainment industry in Korea, including K-pop. In March, the Jeju government “hired” an “AI anchor” to present its news online, citing a lack of young applicants for the job. We haven’t heard much from the broadcasting industry yet, as many still see this as a novelty, but some worry that one day we will all be fed the news by AI anchors. That example is literally visible. Behind the scenes, though, AI is quietly creeping into many other industries, such as finance and banking. On the one hand, it offers a great opportunity to improve service to the public; on the other, it takes away some people’s jobs. The key reason is the lack of regulation.

The AI Act, a legislative package that would regulate the extent of AI deployment, is still under development. For now, we can only trust companies “in good faith” to actually do “good” while deploying AI. We know AI can discriminate, yet the only legislation currently governing the situation is the Digital Act, an extension of IT laws from the previous decade. The proposed AI Act is a fragmented package of seven IT laws with many “gaps and holes,” and only practice will show its relevance once it is adopted. Even then, it won’t save the public from AI malice. What, then, would be the best solution?

My answer would be a code of AI ethics for the public and private organizations involved in the deployment and supervision of AI. As we push the boundaries of AI applications, the law can only do so much. We need everyone, including users, to accept norms of behavior similar to those we all follow in daily life. There are things we do not do even when the law does not specifically prohibit them. The same should apply to the use of AI: we must rely on our culture and traditions to create good norms of conduct for using it. By “everyone,” I mean the public, government agencies, businesses, supervising entities and the technical personnel directly handling AI systems.

Even as these norms are being promoted globally, another movement is taking hold: uncensored AI, which promotes liberty and challenges every postulate of tradition. Not only can you download LLMs and run them yourself, you can also use them without limits, which opens the door to malicious use. Once again, the only way to deal with that is by promoting ethics. Banning any type of AI would sweep the problem under the rug, not solve it. We all must work together to make things work for everyone. Ultimately, we must think hard about the society we want, whether an AI utopia or something else. It is hard to say, but any solution must engage everyone proactively so as not to stifle progress and innovation, which could have wide implications for society as a whole.

But even if it does, my worry remains that AI will rob us of something bigger: our hopes and dreams. One of my students once told me he felt discouraged while using AI because it was always better and smarter than him, and that no matter how hard he tried, he would never be able to beat it.

At times, he said, he felt helpless, as if he would never achieve anything, because there would always be those with access to the technology and the ability to use it. And because he wasn’t as smart as them, he would always be at a disadvantage. If that sounds somewhat nihilistic, it should. We all have access to generative AI now, and ethics should help us, if we apply it early on, as it always has.

Eugene Lee (mreulee@gmail.com) is a lecturing professor at the Graduate School of Governance at Sungkyunkwan University in Seoul. Specializing in international relations and governance, his research and teaching focus on national and regional security, international development, government policies and Northeast and Central Asia.

AI and the need for ethics
26.05.2024
© The Korea Times