
Nobody Carries AI's Thinking With Affection


A great teacher gives you a distinctive way of thinking. AI gives everyone the same one.

Epistemic humility requires knowing the edges of your own knowledge.

AI anchors thought before independent reasoning begins.

An Intellectual Inheritance

A biology professor recently wrote to me after reading How AI is Colonizing the Mind and mentioned something she noticed in her own thinking. Decades after her first undergraduate course, she still catches herself reasoning in ways that feel like her professor, Kevin. Not quoting him. Not summarizing his lectures. Something deeper. A mode of thinking and a way of approaching biological questions that she internalized over years of that relationship. She described it with a phrase that I think beautifully captures what it means to be human. She said parts of her thinking still "feel like Kevin's." She carries that influence with affection.

This is a flawless example of what human-mediated learning can produce. Not just knowledge, but a distinctive intellectual inheritance. Kevin's way of seeing biology became part of how she sees biology, influenced by his perspective, his emphasis, his particular way of valuing certain ideas over others. And because most biology students never had Kevin, her inheritance is genuinely unique. It differentiates her thinking from someone who learned the same material through a different mentor.

AI does not produce intellectual inheritance. It produces intellectual convergence. Let's explore why intellectual diversity matters as AI use pulls reasoning toward a statistical average.

Epistemic humility is the recognition that your understanding is incomplete. It sounds like: "I might be missing something here." Or, "Maybe there’s a perspective I didn’t consider." Epistemic humility is a skill, and like any skill, it is built through practice. I believe these types of questions are the foundation of genuine inquiry.

You develop epistemic humility by encountering differing opinions, beliefs, and evidence. You begin to realize that other perspectives contain something valid. Maybe your first interpretation was wrong. Maybe struggling through an idea you didn't initially understand shows you how much more there is to learn. That realization becomes an asset when engaging in deep conversation or disagreement with another person (or an AI system).

When you ask an AI to synthesize the causes of the French Revolution, you get a single coherent narrative. You do not get the experience of reading three historians who have different interpretations and having to figure out why. You do not develop the awareness that your perspective is one perspective among many, because the AI has already resolved the tension for you. The result is the inability to recognize what you are missing. You don't know enough to doubt.

Kevin gave one biology student a distinctive way of thinking that she carries decades later. A different professor gave a different student a different inheritance. Those students incorporated their own lived experiences and ways of knowing and thinking, and became distinct from their mentors. The field of biology is richer because that diversity exists.

When a million students ask an AI to explain natural selection, they all receive roughly the same synthesis. The same structure. The same emphasis. The same resolution of tensions. Nobody walks away from that interaction with an intellectual inheritance or a meaningful relationship that shaped them. Nobody carries it with affection.

Research in Science Advances showed that AI can support individuals in creative tasks, but the outputs begin to converge: the statistical mean gets stronger while the variance disappears. A large-scale comparison of divergent creativity in humans and LLMs found that AI produces more creative writing than the average human but cannot match the output of the most creative outliers. When people rely on LLMs extensively, the ceiling of creativity compresses toward the average.

Homogenization toward consensus was a problem in academia long before AI. In Alzheimer's research, the amyloid hypothesis crowded out alternative explanations and received the majority of funding for decades. That funding and institutional support created legitimacy for the hypothesis. But some of its foundational work was recently revealed to rest on fraudulent data. This is why intellectual diversity is so important: competing ideas are supposed to push back on consensus. If AI had been trained on that literature at the time, it would have reinforced the dominant hypothesis rather than pushing back against it.

A society that replaces the struggle of independent reasoning with the frictionless acceptance of consensus output will not become smarter. It will become more uniform. It will lose the outliers. It will lose the Kevins and the students who carried them forward.

The biologist who wrote to me closed her message with a line from Vonnegut:

"We are what we pretend to be, so we must be careful what we pretend to be."

A generation practicing the verification of AI outputs is not practicing thinking. They are practicing agreement with a machine. And they are becoming what they practice.

Bellemare-Pépin, A., Lespinasse, F., Thölke, P., Harel, Y., Mathewson, K., Olson, J. A., Bengio, Y., & Jerbi, K. (2026). Divergent creativity in humans and large language models. Scientific Reports, 16(1), Article 1279.

Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), Article eadn5290.

Piller, C. (2025, February 11). How the "amyloid mafia" took over Alzheimer's research. STAT.


© Psychology Today