The public backlash persists over the fake, sexually explicit images of Taylor Swift that recently circulated online. But regulating those AI-generated images may pose more challenges than many realize, as only 10 states have directly outlawed some form of the practice amid the rapid growth of artificial intelligence.

The pornographic and digitally altered images of the superstar singer-songwriter and musician spread last week on social media websites such as X, formerly Twitter. The images were deepfakes, a relatively new term for a seemingly realistic but manipulated video or photo produced by a form of AI. The technology first appeared in late 2017, and subsequent code-sharing has made it more widely available for use, according to the nonprofit Organization for Social Media Safety.

“Part of the problem is that the technology has become less expensive, more accessible and the products of the technology more believable, while the laws and the protections have not evolved as quickly,” says Judith Germano, an adjunct professor and distinguished fellow at NYU School of Law. “So we need laws, standards, guardrails and political and public discourse that addresses this very serious problem, which has existed for years and seems to become more prevalent as a form of sexual exploitation, abuse, manipulation.”

There is currently no federal law against disseminating such content. However, some legal professionals believe “such illicit practices may not require new legislation, as they already fall under a patchwork of existing privacy, defamation or intellectual property laws,” according to an article by Law.com.

But the proliferation of such deepfakes is getting the attention of government officials – both nationally and at the state level. On Oct. 30, 2023, President Joe Biden issued an executive order directing “the establishment of new standards for AI safety and security” while also ordering the Office of Management and Budget to “consider the risks of deepfake image-based sexual abuse of adults and children” in its upcoming AI procurement guidelines.

Additionally, at least 10 states now have legislation that specifically targets those who create and share explicit deepfake content, as reported by USA Today. Many of those laws were passed within the past year or so, and they outline penalties ranging from fines to possible jail time for offenders.

The attempts to combat deepfake practices are not perfect. Germano says some of the state laws require an “illicit motive,” which she notes could put a high burden on the victim to prove their case. States also need to discern, she says, whether their existing laws related to subjects such as privacy, harassment and defamation are enough, or “should be better defined or expanded.” Most states, for example, have specific laws against revenge porn, a form of cyber sexual harassment.

“I think that it’s a combination of the need for states to have better expertise in how to properly define deepfakes and what appropriate remedies are, so that they are developing laws that are broad enough to address the serious harms, but not stifling other less nefarious uses of the technology,” Germano says.

Despite the challenges that lawmakers and regulators face, she says it is important that the harmful practice is drawing more attention.

“This is a pressing issue that states and the federal government need to discuss how best to address,” Germano says. “What happened to a celebrity as renowned and impactful as Taylor Swift brings the issue front of mind, but a seriously concerning problem is the many, many people who are victims of this kind of crime who lack the platform and resources to counter the damage that occurs from a reputational and emotional and financial standpoint.”

States That Ban AI Deepfake Porn
By Elliott Davis Jr
Jan. 31, 2024

© U.S.News

