(NEXSTAR) – The internet is increasingly subjecting users to AI-generated images and text in advertisements, videos and social-media posts, whether many of us are aware of it or not.
But in an ideal world, we all would be.
Due to advancements in generative artificial intelligence (or generative AI, as it's commonly called), it has become increasingly difficult to tell the difference between genuine and synthetic media. In many cases, these images and videos are created and shared with no ill intent, but that isn't true of creators looking to mislead or misinform the public, or to misuse AI for fraud or blackmail.
“All the media we have today, the majority, are in the digital format, and we rely on digital media to gain knowledge, information, on what is happening in the world,” Dr. Siwei Lyu, a professor at the State University of New York at Buffalo and the co-director of the school’s Center for Information Integrity, told Nexstar.
“If someone has the ability to manipulate or fabricate this media, and users cannot tell the difference … [it can] influence their decision-making, and by doing so, affect our lives and society and democracy,” Lyu said. “That’s a critical concern and threat to the well-being of everyone.”
Even with no malicious intent on the part of the creator, the generative-AI industry is operating, and evolving, with little oversight.
“The rise of these big tech companies has created this interesting environment where anything, even faulty or if it has glitches, is put out there, because there is not a regulatory agency,” Amarda Shehu, the associate dean for AI Innovation at George Mason University’s College of Engineering and Computing, said. “The incentive to make money is so …