Opinion | When AI Platforms Blame Users, Who Is Responsible For The Harm?
Grok’s response to illegal and abusive content points to a larger problem. Artificial intelligence (AI) companies want the power of generation without the responsibility that comes with it.
The argument over harmful content online is old. Platforms have long claimed they are not responsible for what appears on their services. That fight has mostly revolved around users. Generative AI breaks that logic. Elon Musk’s handling of Grok makes that clear. Over the past few weeks, Musk and X have repeated the same defence.
If Grok produces illegal or abusive content, the fault lies with the user who prompted it, not the system or the company that built it. The claim echoes the logic social media companies once used to avoid responsibility for user posts. That logic does not survive contact with generative AI. AI systems do not merely host speech; they produce it and do so at scale, using data, rules, and limits chosen by the company behind them. Treating that output as if it were ordinary user speech is a convenient fiction.
Grok was marketed as a looser, less restricted chatbot, and fewer guardrails were part of the pitch. When the system began generating content that crossed legal and ethical lines, the response was predictable. The system was not redesigned, the…
