When Patterns Look Like Thought
A recent paper from the University of Wisconsin and the “Paradigms of Intelligence” team at Google makes a fascinating claim: large language models can extract and reconstruct meaning even when much of the semantic content of a sentence has been replaced with nonsense. Strip away the words, keep the structure, and the system still often knows what is being said. Freaky, right?
The authors call this the “unreasonable effectiveness of pattern matching,” a nod to Eugene Wigner’s famous essay on the unreasonable effectiveness of mathematics. To me, it points toward something even more interesting: much of what we experience as understanding may be recoverable from form alone, without the words we traditionally link to meaning.
On its surface, this looks like another triumph for AI. Underneath, it may be evidence for something more philosophically disruptive: the emergence of what I have called anti-intelligence.
In the experiments, content words are replaced by invented tokens while grammatical structure is preserved. A human reader sees gibberish. The model, however, often reconstructs the original meaning with surprising accuracy. The structural components, such as syntax and…
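To make the setup concrete, here is a minimal sketch in Python of that kind of substitution. It is not the paper’s actual procedure: the FUNCTION_WORDS list, the pseudoword generator, and the jabberwockify helper are illustrative assumptions of mine, and a crude closed-class word list stands in for the part-of-speech tagging a real pipeline would use.

```python
import random
import re

# Closed-class (function) words to keep. This set is an illustrative
# assumption, not the paper's list; a real implementation would use a
# part-of-speech tagger to identify content words.
FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "in", "on", "to", "with",
    "as", "for", "by", "at", "from", "that", "this", "is", "was", "it",
}

def pseudoword(rng: random.Random) -> str:
    """Invent a pronounceable nonsense token, e.g. 'dakemu'."""
    consonants, vowels = "bdfgklmnprstvz", "aeiou"
    return "".join(rng.choice(consonants) + rng.choice(vowels)
                   for _ in range(rng.randint(2, 3)))

def jabberwockify(sentence: str, seed: int = 0) -> str:
    """Replace content words with invented tokens; keep function words,
    word order, and punctuation (the 'structure') intact."""
    rng = random.Random(seed)
    mapping = {}  # reuse one nonsense token per distinct content word
    out = []
    for token in re.findall(r"\w+|[^\w\s]", sentence):
        key = token.lower()
        if key in FUNCTION_WORDS or not token.isalpha():
            out.append(token)  # structure survives untouched
        else:
            mapping.setdefault(key, pseudoword(rng))
            out.append(mapping[key])
    return re.sub(r"\s+([^\w\s])", r"\1", " ".join(out))

print(jabberwockify("The chef seasoned the soup with fresh herbs."))
```

Run on an ordinary sentence, this yields something unreadable to a human, yet the determiners, prepositions, punctuation, and word order all survive, and that surviving scaffold is exactly what the paper suggests the model exploits.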