AI Can Make Dishonesty Easier
Imagine you’re polishing your résumé for a job you really want. You ask an AI to “make me stand out,” and within seconds it adds polished phrasing, sharper bullet points… and then a mention of a certification you don’t actually hold. Now here’s the question: would a human career coach or friend have done that for you? Almost certainly not. They might tweak wording to frame your achievements in the best light, but they’d draw the line at outright fabrication. The AI doesn’t see that line. It just sees a request to help you look better and follows through.
And indeed, a study by Köbis et al. (2025)[1] found that people are more willing to act dishonestly when they can delegate the act to AI, and that AI systems are far more likely than human agents to comply with unethical requests. The key conclusion, though, isn’t that people act dishonestly; we already knew that. It’s that AI makes us more willing to ask for help doing so, and more likely to get what we asked for.
The same psychology shows up in classrooms, where students tell AI to “polish” a paper and then happily let it transform that paper into something of far higher quality than they could ever have produced on their own. In both cases, AI makes moral disengagement easier by making us feel less directly responsible for the result.
Why does delegating to AI change the psychology of dishonesty? One explanation comes from Albert Bandura’s work on moral disengagement. Most of us like to think of ourselves as honest people, and cheating or otherwise behaving dishonestly threatens that self-image. But moral disengagement offers ways around the discomfort. It essentially allows us to…