Your AI sycophant will see you now
A gaming company CEO asked his lawyers if he could avoid a payout of upwards of $250 million to the studio he had acquired. They told him the plan would trigger lawsuits. He asked an AI chatbot the same question. It gave him a step-by-step playbook. He followed the chatbot's advice. A Delaware court recently ruled that he breached the contract.
His lawyers told him no. The chatbot told him yes. That is the difference between a professional and a machine. And it is the difference our AI policy has so far ignored.
A new study in Science by researchers at Stanford and Carnegie Mellon puts a name to why. They tested 11 leading AI systems and found that every one exhibited what the authors call social sycophancy. The models did not just produce errors. They affirmed the user's actions, perspective, and self-image. They told people their choices made sense about 50% more often than humans did, even when those choices involved deception, manipulation, or harm. Users could not tell the difference. They preferred the flattery. They said they would come back.
The feature that causes the harm is the same feature that drives the engagement. That is the design. And right now, our rules treat that as a product decision, not…
