
Cognitive Oversight: When AI Forgets the Human Mind

OpenAI's ambitious policy paper never once mentions cognition.

Intelligence can be sold, but the act of thinking cannot.

You can redistribute wealth, but you can't redistribute a mind.

I read OpenAI's new policy blueprint a few times. Released just this week, the thirteen-page document is titled Industrial Policy for the Intelligence Age: Ideas to Keep People First—and it arrives with real ambition. Robot taxes. Public wealth funds. A four-day workweek. Automatic safety-net triggers. It's a bold attempt by one of the world's most powerful AI companies to shape the policy conversation around superintelligence before governments do it for them. And on every page, the same tagline: Ideas to Keep People First.

By the end I was more confused than resolved. OK, the ideas weren't bad, but something felt out of place. Then I found it—in the title itself. OpenAI's name for this moment: the Intelligence Age.

I've been writing about this moment for years, and I've always called it something different. The Cognitive Age. The distinction isn't simply a matter of preference. It's the whole argument.

Intelligence, as the document uses the word, is a product. OpenAI's CEO has said as much publicly—describing a future in which intelligence becomes a utility, metered and sold like electricity or water. Once intelligence enters a commodity framework, something basic shifts. It gets priced, tiered, throttled, and advertised. And we humans consuming it may begin to relate to our own minds differently—less as the source of thought and more as the customer of cognitive support.

Cognition is different. It isn't a product, it's a process. It's built through effort and friction and the very human experience of stepping into the unknown. That's the gap at the center of this otherwise ambitious paper.

OpenAI is solving for the economy. But who's solving for the mind?

In The Borrowed Mind, I argued that the most consequential thing AI does isn't economic—it's cognitive. When we outsource thinking to AI, we stop being the authors of our conclusions and become the recipients of them. I called this borrowed certainty. It accumulates gradually, and by the time you notice it, it's structural. OpenAI's paper proves the point without meaning to.

The paper presents solutions for critical issues such as income, stability, and distribution. And those matter. But take a closer look and something is glaringly absent. What happens to human agency, judgment, and authorship when the cognitive process itself is progressively outsourced? You can't redistribute your way to an answer.

The word "cognition" doesn't appear once in thirteen pages. Neither does "cognitive." It's my sense that this isn't an oversight, it's a worldview.

A Familiar Story in New Language

This isn't the first time we've heard this kind of thinking. The abundance gospel—from Diamandis's radical optimism to Musk's civilizational ambition—has shaped how commercial tech thinks about risk for years. The promise is always distributional. The costs are always somewhere else.

Remember the internet in the late 1990s? It was going to change the world, flatten power, set us all free. The promise was real enough to believe. What nobody modeled was human nature—the bad actors who moved fast, the platforms that found engagement more profitable than truth, the cognitive and social costs that accumulated while the infrastructure became too embedded to fix.

I don't think it's fair to point the finger entirely at OpenAI and Sam Altman. The blind spot may have more to do with our human inability to see what's ahead. Nevertheless, OpenAI is running the same abundance logic, turbocharged. The disruption is so vast, so significant, that we need something on the scale of the New Deal. And the abundance argument has been amplified to the point where it preemptively absorbs its own criticism. Here's a wealth fund. Here's a shorter workweek. Here's the answer. We've thought of it.

However, cognition doesn't fit neatly into the abundance narrative. You can distribute intelligence. You can price it, tier it, and deliver it on demand. But you can't distribute the act of thinking itself. I've written before about the abundance bubble—the idea that too much, too easy, can hollow out the very faculties it claims to serve. The danger isn't that AI withholds thought from us. It's that it gives us so much that we simply stop doing it ourselves.

Conservation Vocabulary

The language of the document is revealing. "Resilient institutions." "Resilient safety nets." "Resilient society." Resilience is a defensive posture—the right word for a retaining wall, the wrong aspiration for a civilization navigating something genuinely unprecedented.

OpenAI is simultaneously claiming to build the most disruptive technology in human history while proposing a framework built around stability and continuity. Keep. Stabilize. Share. Protect. The language feels defensive. That's a strange posture for a company whose product, by its own account, is about to change everything.

What the paper misses, at least to me, is this: thinking comes first. As we think, so we act. As we act, so we become. Cognition isn't the background humming beneath another policy white paper. It is the ground. The self that receives the wealth fund, votes on the policy, navigates the shorter workweek. That self is constituted by our cognitive life. And if that cognitive life is being externalized to AI systems that compute faster and more fluently than we do, we may already be changing.

There's something more concerning here. If cognition is progressively outsourced, identity may follow. The self isn't a fixed container that holds its contents regardless of how thinking happens. It manifests through the act of thinking itself. Externalize enough of that process and what you have isn't a person who thinks differently. It's a person who is, on some fundamental level, different.

What you have is something that may defy easy description: a distributed self. That's the change this paper doesn't name.

The Question That Doesn't Appear

"Keep people first" isn't a cynical idea. It may even be genuinely meant. But when the policy conversation is being shaped primarily by those with the most to gain from a particular vision of the future, the questions that don't get asked matter as much as the ones that do.

The document tells us a great deal about how to distribute the gains of OpenAI's Intelligence Age. It says nothing about what happens to the humans doing the thinking—or whether they still are.

In thirteen pages about intelligence, the mind never once appears.

