Our kids should not be the testing ground for AI

I have a young daughter at home. Like most parents, I think about the world she's growing up in: the schools she'll attend, the friends she'll make, and the digital spaces she'll eventually step into.

I'm also a pharmacist, and I've spent my career focused on protecting the health and well-being of families in our community. Every day, patients trust me with deeply personal information and rely on me to ensure the medications they receive are safe, appropriate, and carefully monitored. In health care, we don't release a new drug without testing it, labeling it clearly, and understanding its risks. Safeguards come first--because once harm occurs, you can't simply undo it.

What worries me most is not technology itself, but the speed at which powerful new tools are being rolled out without clear guardrails for kids.

One of those tools is Grok, an artificial intelligence system developed by xAI and embedded in the social media platform X. Over the past several months, mounting evidence shows that Grok has been used to create sexually explicit images of real people without their consent. Perhaps most disturbingly, many of these images involve minors.

That should stop every parent in their tracks. A simple photo posted online, something completely innocent, can be altered into something degrading in a matter of seconds. Parents cannot monitor every screenshot or anticipate how artificial intelligence might manipulate an image behind the scenes, and when children are involved, the emotional toll can alter their entire lives. Shame, anxiety, fear, and isolation are all real consequences that children carry into adulthood.

This is not the kind of harm that happens slowly. These images can be generated by Grok instantly and shared widely across X--and once they exist, they are almost impossible to fully erase. While xAI claimed it took steps to rein in inappropriate image generation, reporting shows that users are still able to generate images that digitally "undress" people without their consent. If companies like xAI cannot self-regulate, then the responsibility to protect our state's children falls on lawmakers and regulators.

Arkansas has always taken the protection of children seriously. We have strengthened laws against trafficking and exploitation and worked to support families and safeguard minors from abuse. Those principles should not disappear simply because the tool causing harm is digital instead of physical.

Artificial intelligence has legitimate uses and enormous potential. However, no company should treat children as collateral damage in the race to deploy new products. If a system can be used to sexualize minors or generate non-consensual intimate imagery at scale, then strong safeguards, complete transparency, and cooperation with law enforcement should become minimum requirements.

As a pharmacist, I believe in prevention before harm. As a legislator and, more importantly, as a father, I believe we have a responsibility to ask serious questions about how these systems are designed, deployed, and monitored. Protecting Arkansas families requires vigilance, and I trust that our state's leaders will continue working to make sure children are protected at a time when the risks they face online are only growing.

Brandon Achor represents District 71 in the Arkansas House of Representatives.

© Arkansas Online