Why Most People Are Using AI Wrong
Effective AI use depends more on thinking style than technical skill.
Strong users anticipate what AI needs before they prompt.
AI works best as a thinking partner, not a replacement.
Treating outputs as drafts helps sharpen critical thinking.
Some people get shockingly useful, nuanced responses from AI models such as ChatGPT, Claude, or Gemini. Meanwhile, others get generic, surface-level output and assume the tool is limited or even useless to them. They aren't using it "wrong" in a technical sense, but in a way that limits what AI can actually do.
Why does the same tool produce such different results depending on the user?
The difference isn’t technical skill; it’s how people think about the system they’re interacting with.
What effective AI users are doing is a form of system perspective-taking, a kind of "as-if" theory of mind applied to a non-human system.
The Hidden Skill: System Perspective-Taking
To be clear, AI doesn’t have a mind, but people often interact with it as if it does. The users who get the most out of it use system perspective-taking to anticipate what the system needs in order to produce a useful response before they ever type the prompt. In practice, this means recognizing what’s missing and supplying it: relevant context, constraints, goals, and assumptions.
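To make that concrete, here is a minimal sketch, assuming the OpenAI Python SDK (the model name, prompt wording, and scenario are illustrative, not a recommended template), contrasting a vague prompt with one that supplies context, constraints, goals, and assumptions up front:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A vague input: the model falls back on generic, high-likelihood output.
vague = "Write something about remote work."

# A structured input: context, constraints, goals, and assumptions supplied.
structured = (
    "Context: I manage a 12-person engineering team that went fully remote last year.\n"
    "Goal: a one-page memo persuading leadership to keep remote work.\n"
    "Constraints: under 400 words, professional tone, focused on retention.\n"
    "Assumption: leadership is skeptical and cares most about productivity."
)

for prompt in (vague, structured):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content[:300], "\n---")
```

Running both prompts side by side makes the pattern visible: the vague input yields boilerplate, while the structured one returns something shaped by the details you supplied.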
It also means understanding the system’s limitations. AI doesn’t have real-world grounding; it generates responses based on probabilities, which is why vague inputs tend to produce generic, high-likelihood outputs. Rather than treating it like a one-shot search engine, effective users implicitly simulate a back-and-forth dialogue, refining their inputs as if they were briefing a colleague.
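As a minimal sketch of that iterative briefing, under the same assumptions (SDK, model name, and follow-up critiques are all illustrative): keeping prior turns in the message history lets each refinement build on the last draft instead of starting over.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The running conversation: every turn stays in the history.
messages = [{
    "role": "user",
    "content": "Draft a 150-word summary of our new onboarding process for managers.",
}]

def next_draft() -> str:
    """Send the full history, record the model's reply, and return it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=messages,
    )
    draft = response.choices[0].message.content
    messages.append({"role": "assistant", "content": draft})
    return draft

draft = next_draft()  # first draft from the initial briefing

# Refine the way you would with a colleague: critique, then ask again.
for critique in (
    "Assume the reader has never seen the old process; add that context.",
    "Tighten it: cut jargon and lead with the single biggest change.",
):
    messages.append({"role": "user", "content": critique})
    draft = next_draft()  # each pass builds on everything said so far

print(draft)
```

The design point is the appended history: because each request carries the whole conversation, a critique like "cut jargon" is interpreted against the previous draft, which is exactly the briefing-a-colleague dynamic the paragraph above describes.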
