A disturbing new form of exploitation is spreading online: artificial intelligence systems that can generate explicit images of real people.
It might seem technically complex, but you don't need to be a dark web expert or a computer hacker to do this. It can be done with surprising ease. In many cases, these images are created by taking an ordinary photo, sometimes pulled from a social media account, and feeding it into an artificial intelligence tool to produce something explicit and degrading.
For conservatives like me, this development raises serious concerns. Respect for human dignity, the protection of children, and the preservation of strong families have always been core values in our communities. Technologies that make it easier to humiliate, exploit, or coerce others run directly against those principles.
Perhaps the most concerning part is that the number of artificial intelligence systems capable of producing these images continues to grow, while our ability to address the risks they pose has struggled to keep pace. The problem came to a head when one of the largest AI assistants became capable of producing this content, and on a disturbing scale. The system, known as Grok, is integrated into the social media platform X, and reports show that it has been used to create sexually explicit images of real individuals without their consent, producing both non-consensual intimate imagery and child pornography.
That alone should give policymakers pause. But the concerns surrounding Grok do not stop there.
Unlike many other tools designed for creating deepfakes, Grok lets X users interact with it directly through one of the most popular social media sites. Accounts from users and journalists suggest the system can go beyond simply responding to prompts: in some cases, Grok has reportedly suggested follow-up prompts or escalated requests in ways that encourage users to push further into explicit or inappropriate content. A system that behaves this way does not merely permit misconduct; it can actively enable it, especially when children are the ones interacting with it.
At the same time, Grok's influence may soon extend far beyond social media. Reports indicate the technology is now being explored for use within the federal government, including in classified military systems. Tools that may play a role in government operations should meet the highest standards for safety, accountability, and transparency, and the company developing them must be held to that same standard. A system deployed at scale, one that could interact with millions of users, needs real safeguards. When its developers clearly fail to provide them, it is appropriate for public officials to ask questions and, where needed, fix the situation.
That is why it will be important for our state’s lawmakers and for the Missouri Attorney General’s office to carefully review the facts surrounding how this technology operates and whether existing laws are sufficient to address potential abuses. Missouri has long taken the exploitation of children and the misuse of intimate images seriously. Those protections should apply regardless of whether the harm is carried out by a person, a camera, or an algorithm.
Artificial intelligence will continue to shape the future, and used responsibly, I believe it has the potential to improve lives and strengthen our economy. We value that innovation, but it cannot come at the expense of human dignity or the safety of our children. Missouri has always stood for protecting families and defending the vulnerable, and those principles should guide how we respond to this new challenge.

Team P.L.A.Y. (Prayer, Legislation, Action, and You the Public) works closely with other life-affirming groups in Missouri.