The Mind We See, and the Mind We Imagine
I wasn’t expecting a conversation about single cells and cognition to explain why a large language model (LLM) feels like a person. But that’s exactly what happened when I listened to Michael Levin on the Lex Fridman Podcast. Levin wasn’t debating consciousness or speculating about artificial intelligence (AI). He was describing how living systems, from clusters of cells to complex organisms, cooperate and solve problems. The explanation was authoritative and grounded, but the implications push beyond biology.
At one point, Levin shared a slide outlining three ways to describe a mind. It presented the first-person, second-person, and third-person perspectives as a compact summary and revealed (to me) something important about why AI now feels intimate, conversational, and sometimes strangely alive.
Levin’s triad begins with the first-person view. This is the internal experience of being a system. Feelings, sensations, moods, and beliefs live here. They can be described, but they aren’t directly visible.
The second-person view emerges in relationships or interactions. It's the space where one mind engages another.