Study Results Identify Perils of AI
My recent book, Disconnection: Identity Development in a Digital Age, devotes an entire chapter to the perils of AI. One of the arguments I make there is that the technological innovations Big Tech is flooding into the market are advancing far more rapidly than our ability as social scientists to study them.
Unfortunately, the rigor the scientific method demands means that research into how these technologies affect users cannot keep pace with the speed at which these corporations expose their products to consumers. By the time findings demonstrate negative impacts on users, consumers have often already embraced the technologies, making it that much harder to persuade individuals to modify their consumption. I wish the results of a recent set of studies had been available when I was doing research for my book, as they substantiate many of the warnings I identified in that chapter.
Specifically, two studies, one published in Nature and the other in Science, demonstrated that participants’ preferences for actual political candidates could be changed significantly after an exchange with a chatbot that had been programmed to “persuade” the user toward a specific candidate (as reported by Kozlov, 2025). In the study published in Nature, Lin et al. (2025) demonstrated “significant treatment effects on candidate preferences” from exchanges with a chatbot, effects that exceeded the persuasive impact of traditional advertising.
Not only were the political opinions of consumers susceptible…