The Destructive Effects of Misinformation on the Human Brain
The brain's power of logic and reasoning are dependent on receiving correct information.
The technologies of the Internet and AI are vastly accelerating the spread of misinformation.
In a recent cartoon for The Washington Post, Pulitzer Prize-winning political cartoonist Michael Ramirez depicted three scientists huddled together in a medical lab. The first looks up from a microscope and ominously states, “It’s the most dangerous pathogen we have come across.” The second, bug-eyed, inquires, “Bubonic plague? Smallpox?” The third provides the answer: “Misinformation and conspiracy theories.”
Information can be thought of as a basic brain nutrient, much as oxygen is for the lungs. Misinformation undermines the brain’s operation at every level of functioning, from the molecular to the cellular to the behavioral.
A millisecond delay in nerve impulses from a jogger’s legs can lead to a loss of balance and a severe, even fatal, fall; a disturbance in the smooth interplay of neurotransmitters following the introduction of a chemical, such as a psychedelic, may result in psychosis.
Cognitively and behaviorally, exposure to misinformation can convince a voter of something untrue about a candidate, who then casts a vote based on that falsehood. Most disturbing is the use of misinformation to invalidate our most basic perceptions.
Over evolutionary eons, the brain has learned to link seeing with believing: Except in highly unusual circumstances, such as optical illusions or drug-induced misperceptions, we can be reasonably certain that if we think we see a tiger in front of us, it’s really there.
But this tendency to believe that our perceptions always reflect the real world, that “seeing is believing,” puts the brain at a disadvantage against misinformation generated by AI, which can already create and disseminate false material on a broad scale.
Consider global warming. On September 23 of last year, President Trump went before the United Nations and labeled global warming, now almost universally accepted by scientists as real, a “con job.” Such a megadose of misinformation has the potential to set back the battle against rising CO2 levels by decades. Further, once that misinformation becomes encoded in the brains of enough people, social and political forces will arise (as they already have) to block any effort to combat, or even acknowledge, the risks of accepting as true something that is not.
Within days of the killing of two students at Brown University on December 13, 2025, the police identified a Palestinian student as a possible suspect. His name swiftly spread across the internet in nearly 5,000 postings and reportedly 130,000 reposts. He was not the shooter.
When the university attempted to remove online references to the student, a Congresswoman, among others, suggested a cover-up. Only when the actual attacker was identified and found five days after the killings did the student’s “unimaginable nightmare” of “non-stop” death threats and hate speech cease.
Perhaps the best explanation for this misinformation fiasco comes from Imran Ahmed, the chief executive of the Center for Countering Digital Hate. “The business model of social media rewards those whose content spreads widely, encouraging more sensational or provocative content. We’re no longer in control of our information ecosystem.”
Nor is the difficulty of distinguishing the real from the fake limited to the present. AI can provide a phony version of the past by altering photos or inserting characters who never existed into a specific historical context. George Orwell presciently anticipated this in 1984 with Comrade Ogilvy, a fabricated hero “who had recently died in battle under heroic circumstances.” Although no such person as Comrade Ogilvy ever existed, “a few lines of print and a couple of faked photographs would soon bring him into existence.”
Currently, AI, particularly a video generator like Sora 2, would have no difficulty creating on demand a photo along with detailed biographical information about Comrade Ogilvy or any other non-existent person. Altering the past in this way affects both the present and the future. Orwell described the process in 1984: “Who controls the past controls the future. Who controls the present controls the past.”
In all instances, when information is tainted, whether by accident or design, the brain is limited in its power to devise solutions to the worldwide challenges of the twenty-first century: among others, global warming, new and incapacitating diseases like Covid, the future of the internet and artificial intelligence, and widespread surveillance.
None of the challenges facing the twenty-first-century brain stands a chance of solution in the absence of reliable information. Most threatening of all, as misinformation continues to mount, essential cognitive processes such as reasoning, evaluating information, and reaching valid conclusions are at peril.
It’s no exaggeration to say that misinformation, by degrading the accuracy and reliability of our thought processes, threatens our survival as thinking, reasoning creatures.
