
AI and the 10-Minute Mind

12.04.2026

Ten minutes of AI use erodes the persistence that 10,000 hours demands.

Borrowed certainty feels like understanding but costs us the struggle that builds it.

The mind that never stumbles doesn't grow—it just waits for the next answer.

Ten thousand hours. That's the investment, according to Anders Ericsson's research, which gave us the science; Malcolm Gladwell popularized it for the rest of the world. The path to expertise is not talent or shortcuts but sustained engagement with difficulty. You get better by repeatedly pushing into the edge of what you can do.

Now consider a new preprint study that found 10 minutes of AI assistance measurably reduced persistence and impaired independent cognitive performance. Across a series of trials with more than 1,200 participants, researchers tested people on math reasoning and reading comprehension. AI helped in the moment, but when participants returned to working on their own, they performed worse and were more likely to give up. Not after months of dependency. After a single brief interaction.

To me, this is startling. Ten thousand hours is the long road to expertise. Ten minutes of borrowed certainty begins to erode the cognitive inclination to make that journey at all. One builds cognitive capacity; the other shuts it down.

The Cost of Frictionless Thinking

I've been writing about this for a while: the idea that cognitive friction isn't a flaw in the thinking process but the process itself. The bumps along the cognitive path aren't inefficiency; they're how understanding actually forms.

What large language models do is remove that resistance. They resolve rather than deliberate. And what arrives isn't just an answer but a kind of techno-confidence that feels like understanding without requiring the work, and the time, that understanding has always demanded.

The Inversion Nobody Talks About

To me, the curious issue here isn't that AI gives wrong answers. It's that AI operates under conditions almost exactly inverted from those in which human intelligence develops.

Human cognition is effortful and time-dependent. We earn fluency after struggle, and the confidence that follows has a path we can retrace. AI inverts this: it produces fluency before understanding, and its confidence arrives as a default property of completion rather than something earned. There is no internal resistance because there is no internal experience.

When we spend time inside that cognitive inversion, we may start to assimilate its standards. Difficulty begins to feel like malfunction, and hesitation is interpreted as inefficiency. This new study makes the effect quantifiable.

And it's not an isolated finding. Recently, I wrote about what I called AI rebound, the paradox in which removing AI leaves performance below where it started. Gastroenterologists using AI to detect polyps saw their detection rates improve while the technology was running. When it was taken away, their rates dropped below the pre-AI baseline. Not a return to where they began, but below it. The same pattern has shown up historically in pilots, drivers, and writers. This new study suggests the clock on that degradation starts earlier than anyone assumed. Ten minutes in, something has already shifted.

What Persistence Actually Is

Here's what makes this finding so fascinating to me. Persistence isn't a personality trait; it's a cognitive practice. It's the trained willingness to stay with a problem when an answer might not be easily at hand. I believe this is defining: it's what separates someone who thinks from someone who waits to be told.

And this cognitive persistence is precisely what borrowed certainty erodes. When the answer always arrives, you lose practice with the experience of not having it. The tolerance for difficulty, which is really the tolerance for being in the middle of thinking, recedes.

The mind stumbles before it grows. Experiences like flow and genuine insight hardly ever follow the simple path. Csikszentmihalyi's flow lives on the edge of chaos, not in its absence, and Maslow's peak experiences come after the laborious climb. Simply put, the architecture of human cognition requires something to push against.

Anders Ericsson understood this too. Deliberate practice works precisely because it's hard and uncomfortable. What this new paper suggests is that brief AI use reconditions people away from exactly that critical tolerance. And here's the key takeaway: 10 minutes doesn't just contrast with 10,000 hours. It works against the psychological preconditions that make 10,000 hours possible in the first place.

This isn't more AI bashing. Used actively, as a provocation rather than a resolution, AI can push thinking further than we'd go alone.

But the default mode, the one that optimizes for frictionless completion, is doing something to our appetite for difficulty. Something that shows up in 10 minutes and likely compounds over time in ways we haven't yet considered or measured.

Perhaps we should sidestep the question of AI capability and ask whether we're maintaining the cognitive practice that capability is supposed to serve.

The mind that never stumbles doesn't grow. It just waits for the next answer.



© Psychology Today