When AI Assumes We Already Know
There's an interesting, yet rarely acknowledged, assumption built into every conversation we have with a large language model (LLM). It's never stated, but it's embedded in the mathematics that drives the exchange. And, from my perspective, it is rather enigmatic. The LLM's assumption is this: "You already know what you are trying to know." Put differently, an LLM treats your prompt as a noisy or incomplete version of a fully formed intention that already exists in your mind. Let's take a closer look.
Your prompt or question is treated as an incomplete expression of a hidden intention. Your follow-ups are interpreted as refinements. Your dissatisfaction is read as misalignment between an internal target and the model’s attempt to approximate it. Iteration, in this context, is not discovery but optimization.
From a computational perspective, this makes perfect sense. A system trained to infer patterns must assume that there is a pattern to infer. A latent variable must exist, even if it is poorly specified. Noise can be reduced. Gradients can be followed. A distribution can be approached.
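
To make that framing concrete, here is a minimal toy sketch in plain NumPy. It is purely illustrative, not a claim about any real model's internals: it simply assumes a fixed latent intent vector exists, treats each prompt as a noisy observation of that vector, and shows "iteration" collapsing the noise toward a target that was there all along.

```python
# Toy illustration of the latent-intent framing (not how an LLM actually works).
# A fixed "latent intent" is presumed to exist; each follow-up prompt is a noisy
# observation of it; refinement is just optimization toward that fixed target.
import numpy as np

rng = np.random.default_rng(0)

latent_intent = rng.normal(size=8)   # the fully formed intention presumed to exist
estimate = np.zeros(8)               # the system's running approximation of it

for turn in range(1, 11):
    # Each prompt = latent intent + noise (an "incomplete expression").
    prompt = latent_intent + rng.normal(scale=0.5, size=8)
    # Incremental running mean: every turn reduces the noise in the estimate.
    estimate += (prompt - estimate) / turn
    error = np.linalg.norm(estimate - latent_intent)
    print(f"turn {turn:2d}  distance to latent intent: {error:.3f}")

# The distance tends to shrink turn after turn. Under this framing,
# dissatisfaction is residual noise to be averaged away, never a sign
# that the destination itself is still taking shape.
```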
But human cognition often operates in a very different regime. Much of what we call thinking does not begin with clarity that is merely obscured. It begins with a kind of "productive incoherence." We don't refine toward a known destination; we stumble into one. We circle an idea, feel our way through half-formed intuitions, and only gradually does something recognizable as a “question” take shape. Understanding is not uncovered; it's constructed.
