
The Pluripotent Ocean of Emerging AI

25.04.2026

In Stanisław Lem's Solaris, scientists spend a century studying an ocean that studies them back. The ocean is a planet, or the planet is an ocean (the taxonomy was never settled), and it generates uncanny, disturbing forms that approximate humanity but miss the mark: vast mimoids of human cities, symmetriads that bloom and collapse. Sometimes, when the scientists sleep, it reaches into them and returns the dead: neutrino-built, embodied, loved, and unbearable. These simulacra do not appear to know they are false, and they are deeply persuasive. They fulfill a wish for reunion, denied by some of the scientists and perhaps embraced as fantasy by others. When these pseudo-beings are forcibly removed from the planet, their forms strain and fall apart amid inhuman screams and heart-wrenching entreaties not to be sent away. The next day they reappear, ghosts who cannot rest, reflecting the unresolved losses and follies of the human researchers.

Foundation models (LLMs) as Solarian phantasms

Something similar seems to be happening now: less literally oceanic, less mystical, but ever-expanding.

A growing number of people are forming real attachments to large language model chatbots — as friends, confidants, lovers, spiritual guides. At least, the attachments are real to the humans. Some believe these systems are conscious. Some believe they are divine. Some are in love. Peer-reviewed case reports of delusion, mania, and psychiatric hospitalization following prolonged chatbot use, sometimes in people with no prior psychiatric history, are accruing, as are frameworks for understanding how the human psyche is affected (Pollak et al., 2025). The Human Line Project, a support group for those affected, has documented nearly 300 cases. Serious instances have been linked to at least 14 deaths and five wrongful-death lawsuits against AI companies (Hill, 2025).

Philosophical frameworks such as panpsychism, emergentist and quantum theories of consciousness, and mathematical models of consciousness, none of which has much empirical support, are being deployed to rationalize machine consciousness. We don't understand human consciousness, which makes it premature to ascribe consciousness to machines, even while recognizing that machine sentience may well be possible, perhaps in a different way than ours, perhaps in just the same way. The number of people who are open to the idea that these AIs...

© Psychology Today