Why It Feels Wrong to Be Rude to AI

You open a chat window, type a question, and get a thoughtful reply in seconds. It’s efficient, helpful—and oddly social.

At the end, you hesitate. Do you just close the window? Or do you type “Thank you”?

Many people add the “Thank you.” Some even say “Please.” And yet, there’s a quiet discomfort either way. It feels slightly ridiculous to be polite to a machine. But it also feels off not to be.

That small moment captures something important: We don’t just use AI. We relate to it.

Decades before modern chatbots, research showed that people treated computers as if they were social beings. In classic studies, participants were more polite when evaluating a computer on that same computer—as if they didn’t want to hurt its “feelings” (Reeves & Nass, 1996). Even when people knew they were interacting with a machine, they still followed social rules.

This isn’t confusion. It’s how the mind works.

Assuming There Is a Mind There

We naturally respond to anything that seems intentional as if it has a mind behind it. When something uses language, responds to us, and seems to “understand,” we automatically engage our social brain. Psychologist Nicholas Epley and colleagues have shown that humans readily attribute thoughts and intentions to nonhuman agents when they display even minimal social cues (Epley et al., 2007). From an evolutionary standpoint, this makes sense. It’s safer to assume there’s a mind there than to miss one.

Artificial intelligence (AI) systems now hit all the right signals. They respond fluidly. They track context. They mirror conversation. In other words, they behave in ways that invite us into a social exchange.

But that still leaves the more interesting question: Why does it feel uncomfortable not to be polite?

The answer has less to do with the machine—and more to do with you.

Maintaining Our Identity

Politeness isn’t just about other people. It’s part of how we regulate ourselves. Saying “please” or “thank you” reflects an internal standard—what sociologist Erving Goffman described as maintaining “face,” a sense of oneself as a certain kind of person (Goffman, 1959). When we drop those norms, even in a context where they aren’t required, it can feel like a small break in our own identity.

The discomfort isn’t about harming the AI. It’s about stepping outside the version of yourself you’re used to being.

AI creates a new kind of situation: one where the usual social rules don’t strictly apply, but your instincts are still active. With no real person on the other side, you’re free to be blunt, transactional, even dismissive. But many people find they don’t actually like how that feels.

In that sense, interacting with AI poses a subtle question: Who are you when you don’t have to be considerate?

Research on human–robot interaction hints at this tension. People often hesitate to “mistreat” robots or virtual agents, even when they know those systems have no feelings (Darling, 2016). The hesitation isn’t about the machine. It’s about the act itself. It feels like a rehearsal of behavior that carries meaning, regardless of the target.

What makes AI different from other objects is that it keeps the interaction going. You can yell at a GPS or talk to your dog, but those exchanges are limited. AI responds in full sentences, adjusts to your tone, and sustains the rhythm of conversation. It creates what psychologists sometimes describe as an “as-if” relationship—something that feels social, even when you know it isn’t.

And behavior practiced in one context rarely stays contained. Not in a simple, direct way—but through gradual shifts in what feels normal.

If AI becomes a space where social norms are routinely dropped, that may subtly change how those norms feel elsewhere. On the other hand, maintaining them—even when unnecessary—can reinforce a sense of consistency in how you relate to others.

This isn’t about whether AI deserves politeness. It doesn’t. It has no awareness, no feelings, no capacity to care.

The more revealing question is why you might.

For many people, saying “Thank you” to a chatbot isn’t about the machine at all. It’s a way of staying aligned with their own standards—a small act of consistency in a context where the rules are unclear.

And that may be why the whole experience feels slightly uncanny. You are having a social interaction with something that has no inner life—yet your side of the interaction is still real. Your reactions, your tone, your sense of self in the exchange—all of that is genuine.

AI doesn’t blur the line between human and machine as much as it exposes something else: how quickly we bring our social selves to anything that behaves enough like a mind.

So if you find yourself typing “Please” or “Thank you,” you’re not being naïve. You’re seeing, in real time, how deeply your behavior is guided not by necessity, but by identity.

AI doesn’t require politeness. But it may reveal whether you do.

Copyright 2026 Tara Well, Ph.D.

Darling, K. (2016). Extending legal protection to social robots. We Robot Conference. http://dx.doi.org/10.2139/ssrn.2044797

Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886.

Goffman, E. (1959). The Presentation of Self in Everyday Life. Doubleday.

Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.

© Psychology Today