
How to Resist Losing a Bit of “Me” in the Era of LLMs


LLM edits smooth the edges of writing, eliminating both peaks and mistakes, resulting in a generic product.

The brain and AI are both algorithmic prediction machines. Both use shortcuts.

A key question is, “Am I using this to enhance my cognition or to avoid cognition?”

AI is everywhere. How has it changed your daily life?

It has certainly changed the way neuroscience research is done; high computing power is accelerating gene discovery, neuroimaging analysis, and drug development. And now it is inserting its tentacles into my clinic. When I prescribe a medication, drug-drug interactions can be checked instantly. With thousands of drugs and a near-infinite number of combinations, it is impossible to memorize them all. Similarly, a large language model (LLM) embedded in the electronic medical record can summarize my clinic notes, and my charting time has dropped significantly, a positive outcome.

Recently, after drafting a work email, I could not help but notice that its tone was a bit confrontational. So I asked an LLM to edit it, and voilà, it "smoothened" the language into a more collaborative and professional tone.

This "smoothing" is the key to LLM edits. If my writing has peaks of originality and valleys of mistakes, the LLM tries to eliminate both. Removing the mistakes is welcome, but originality and mistakes are two sides of the same coin; both are inherently unique and individual, originating from me. In other words, the LLM removes a bit of "me." This is not necessarily a bad thing; communication is reciprocal, and we need to be understood. In business and professional settings, LLMs have the potential to standardize communication and exchange information more efficiently.

But that comes at a cost. Psychology Today contributor Dr. Ocklenburg recently reviewed a study of computer programmers, finding that while the group that used AI was faster at the task, they scored lower on a follow-up quiz, indicating that relying heavily on AI may reduce skill formation. Dr. Walther similarly cautions that efficiency gained with AI can erode underlying expertise and agency.

This "generification" was evident when I recently reviewed a series of student essays. None of the essays had a glaring error, and the basic standard of writing had clearly been elevated. I can still remember, many years ago, coming across a scientific manuscript that was so difficult to understand (clearly not written by a native English speaker) that I had to ask for a revision, not because of the science but because of the poor writing. LLM editing has the potential to democratize professional writing and spread ideas more widely.

But now we are all aware of the LLM/AI algorithmic approach, and we can smell its influence. Some of the essays had no personality. The words were nice, and every sentence made sense. They said all the right things but lacked the spark. I could not hear the writers' voices or feel their blood and sweat. In other essays, smooth though the grammar was, I saw the unique individual shine through. There was an unmistakable quirk to them (I am not advising you to ask AI to add a quirk to your essay).

The LLM was modeled after the human brain; both are algorithmic prediction machines. AI has more computing power and does the statistics faster and better, but our brain does the same thing: predict what is going to happen next, filling the gaps as best it can.

For example, if I wrote, "For breakfast, I ate……", the next word could be "a pancake," "a yogurt," or "an omelet," but it will usually not be "a bicycle." Both our brain and an LLM choose the next word based on a prediction model.
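The breakfast example can be sketched as a toy prediction model. The word probabilities below are invented for illustration; they do not come from any real LLM:

```python
import random

# Toy next-word distribution for the prompt "For breakfast, I ate ..."
# These probabilities are made up for illustration only.
next_word_probs = {
    "a pancake": 0.35,
    "a yogurt": 0.30,
    "an omelet": 0.30,
    "a bicycle": 0.05,  # grammatically possible, semantically unlikely
}

def predict_next_word(probs):
    """Pick the single most likely continuation, as a greedy decoder would."""
    return max(probs, key=probs.get)

def sample_next_word(probs):
    """Sample in proportion to probability, as a stochastic decoder would."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word(next_word_probs))  # prints "a pancake"
```

Greedy picking always returns the safest word; sampling occasionally surfaces the unlikely one. In that sense, the "smoothing" the essay describes is what happens when a system (or a brain) leans ever harder on the peak of the distribution.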

By the way, this is one of the tricks we use at the bedside to evaluate whether a patient has aphasia. We ask, "Did you eat a bicycle for breakfast?" and see if the patient vehemently shakes their head (comprehension intact) or just smiles (receptive aphasia).

We may be able to command an LLM to write like science fiction or Faulkner, or to create a poem that ignores grammar, but this does not escape modeling and prediction; it is still trained on a pattern.

If the LLM and the human brain operate on a similar logic, it is also conceivable that the human brain can start to mimic the LLM: becoming generic, with less variance, losing some of our quirks and edges. Over time, because we are exposed only to smooth outputs, our norms may start to recalibrate, narrowing the prior. Our tolerance for eccentric roughness may decline, and the neural circuits involved in diverse expression may weaken. Because an LLM spits out an edit so quickly, we may lose the ability to sit with uncertainty, and the pain of disorienting ourselves in our own edits.

The best use I have found for an LLM when I write is as a research assistant and a teacher. When I asked an LLM to review a draft (not this one) as an editor and critique my writing, this was the answer:

“For your rewriting, think:

Does the opening create emotional urgency?

Does the chapter end by pulling the reader forward?”

Extremely helpful feedback: it made me understand my writing's shortcomings and gave me things to work harder on.

In a recent Atlantic article, "The Problem With Using AI in Your Personal Life," Dan Brooks offers a great rule of thumb:

- A text should not take more effort for the recipient to read than it took the writer to write.

We are now in an era where we need to apply metacognition even to AI.

An LLM has the potential to deepen our thoughts if we are willing, but also to make us a bit lazy if we don't pause.

Next time, before using AI, ask yourself:

“Am I using this to enhance my cognition or to avoid cognition?”

- Did I try first before asking?

- Did I critically evaluate the output?

- Did it expand my thoughts? Did it give me counterarguments? Alternative points of view?

Shen, J. H., & Tamkin, A. (2026). How AI Impacts Skill Formation. arXiv preprint arXiv:2601.20245.

Gommers, J. J. J., Verboom, S. D., Duvivier, K. M., van Rooden, C. J., van Raamt, A. F., Houwers, J. B., Naafs, D. B., Duijm, L. E. M., Eckstein, M. P., Abbey, C. K., Broeders, M. J. M., & Sechopoulos, I. (2025). Influence of AI Decision Support on Radiologists' Performance and Visual Search in Screening Mammography. Radiology, 316(1), e243688. doi:10.1148/radiol.243688


© Psychology Today