Does AI truly think like humans? Evidence suggests otherwise

Have you ever wondered how AI thinks and works? The way we talk and think about AI might be quietly convincing us it’s more human than it is.

People often describe artificial intelligence (AI) as “smart” or say it “knows” something. That might sound harmless, but it can quietly mislead people about what AI actually does.

A new study shows that news writers covering AI are more careful than expected, rarely using strongly human-like language. When they do, it often falls on a spectrum, sometimes describing simple requirements, other times hinting at human traits.

"Think, 'know,' 'understand', or 'remember'.These are everyday words people use to describe what goes on in the human mind. But when those same terms are applied to AI , it can unintentionally make machines seem more human than they really are.

Jo Mackiewicz, professor of English at Iowa State, said, "We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines, as it helps us relate to them."

"But at the same time, when we apply mental verbs to machines, there's also a risk of blurring the line between what humans and AI can do."

A research team studied how writers describe AI using human-like language. This type of wording, known as anthropomorphism, assigns human traits to non-human systems.

"Certain anthropomorphic phrases may even stick in readers' minds and can potentially shape public perception of AI in unhelpful ways," Aune said.

How News Writers Actually Use AI Language:

To better understand how often this kind of language appears, the researchers analyzed the News on the Web (NOW) corpus.

This massive dataset contains more than 20 billion words from English-language news articles published in 20 countries.

They focused on how frequently mental verbs such as "learns," "means," and "knows" were used alongside terms like AI and ChatGPT.

The findings were unexpected: news writers do not frequently pair AI-related terms with mental verbs.

While anthropomorphism is common in everyday speech, it appears far less often in news writing.

"Anthropomorphism has been shown to be common in everyday speech, but we found there's far less usage in news writing," Mackiewicz said.

The findings highlight the importance of context. Simply counting words is not enough to understand how language shapes meaning.

"For writers, this nuance matters: the language we choose shapes how readers understand AI systems, their capabilities and the humans responsible for them," Mackiewicz said.

The research team also emphasized that these insights can help professionals think more carefully about how they describe AI in their work.

"Our findings can help technical and professional communication practitioners reflect on how they think about AI technologies as tools in their writing process and how they write about AI," the research team wrote in the published study.

As AI continues to develop, the way people talk about it will remain important. Mackiewicz and Aune said writers will need to stay mindful of how word choices influence perception.

The research materials were provided by Iowa State University. The study, "Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT," was published in the journal Technical Communication Quarterly.


© The News International