For years, tech leaders have insisted that digital assistants don’t need niceties. Chatbots, we are told, have no feelings to hurt and no egos to soothe. Yet as millions of people now talk daily to systems like ChatGPT, a quiet shift is underway: users are increasingly saying “please” and “thank you” to machines that cannot care either way. At first glance, this looks like a harmless quirk of human habit. But researchers at the London School of Economics and Political Science argue it may carry deeper social and political implications than most people realize.
Far from being a trivial matter of etiquette, the language we use with AI could shape how we behave with one another, influence how power is distributed between humans and machines, and even alter how we understand duty and agency in a digital age. As governments race to regulate artificial intelligence and companies compete to make systems more “human-like”, the simple question of whether we should be polite to chatbots opens up a much larger debate: what kind of relationships do we want with the technologies that increasingly mediate our work, learning and intimate lives?
This article explores why a word as small as “please” is becoming a surprisingly big concern for ethicists, designers and policymakers, and why the way you talk to ChatGPT may matter more than you think.
How polite prompts shape algorithmic behaviour and influence response quality
Digital assistants do not possess feelings, yet they are exquisitely sensitive to linguistic cues. Adding a simple “please” or “could you” acts as a signal of user intent, nudging the model toward more cooperative, elaborative replies. In practice, this means that courteous formulations often result in outputs that are more context-aware, carefully structured and calibrated in tone. When users frame their questions as respectful requests rather than blunt commands, they tend to supply richer context, clearer constraints and more nuanced goals – all of which the system can leverage. In effect, politeness becomes a proxy for better prompt engineering, funnelling the algorithm toward higher-quality reasoning and more reliable answers.
There is also a subtler dimension at play: language models are trained on oceans of human text, where politeness is frequently associated with helpfulness, precision and constructive dialogue. When prompts mirror those patterns, the model “recognises” the conversational script and aligns its behaviour accordingly, privileging detailed explanations over terse reactions or unhelpful refusals. Consider how small stylistic choices alter the interaction:
- Clarity: Polite prompts tend to be longer and better scoped, guiding the system toward what matters.
- Tone-matching: Respectful language encourages measured, professional responses instead of edgy or abrupt ones.
- Depth: Requests that sound genuinely inquisitive invite more thorough, step-by-step reasoning.
| Prompt Style | Typical Outcome |
|---|---|
| “Explain quantum computing.” | Generic, shorter overview |
| “Please explain quantum computing for a non-expert audience.” | Clearer structure, tailored depth |
| “Can you briefly, and in plain English, compare it to classical computing?” | Concise, accessible comparison |
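The difference is easy to observe for yourself. The sketch below is a minimal Python example, assuming the official OpenAI Python client and an illustrative model name (both are assumptions, not recommendations); it sends the same question in blunt and courteous phrasings so the two replies can be compared side by side.

```python
# Compare how a blunt and a polite phrasing of the same question are
# answered. Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; the model name is an illustrative
# assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()

prompts = {
    "blunt": "Explain quantum computing.",
    "polite": (
        "Please explain quantum computing for a non-expert audience, "
        "briefly comparing it to classical computing in plain English."
    ),
}

for style, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whichever you use
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    print(f"--- {style}: {len(answer.split())} words ---")
    print(answer[:300])
```

Running a comparison like this a few times makes the pattern tangible: the courteous prompt carries more context (audience, scope, register), and the extra context, not any hurt feelings, is what shapes the reply.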
Social norms meet machine learning: understanding the psychology behind saying please to AI
Politeness towards algorithms may look irrational, yet it sits at the intersection of deeply ingrained social norms and emerging human-machine relationships. From childhood, we are conditioned to treat conversational partners with respect, and conversational AI is designed to mimic precisely the human cues (turn-taking, empathy words, even mild humour) that trigger our instinctive manners. When users say “please” or “thank you” to a chatbot, they are not just being courteous; they are subconsciously affirming that this interaction belongs to a familiar social script. Early research in human-computer interaction suggests that people apply the same psychological shortcuts to screens as to strangers: if it “talks back”, the brain tends to assign it a quasi-social status, even while knowing, rationally, that it is only code.
This subtle psychological framing can influence how people interpret the system’s authority, neutrality and trustworthiness. Polite language may also shape the model’s output, because large language models are trained on data where courteous requests often co-occur with more elaborated, cooperative responses. Over time, a feedback loop emerges in which human etiquette and algorithmic pattern-matching reinforce one another. In practice, that means a simple “please” can help steer interactions towards more collaborative, less adversarial exchanges, especially in ambiguous or emotionally charged queries. Consider some of the social signals at play:
- Deference cues: Users lower the perceived “conflict temperature,” inviting more balanced explanations.
- Norm signalling: Civility sets expectations for tone, which models can reflect in their replies.
- Identity work: Being polite to AI lets users maintain a coherent self‑image as “a respectful person.”
- Anthropomorphism: Courtesy reinforces the illusion of a conversational partner with intentions and feelings.
| Human habit | Psychological function | AI interaction effect |
|---|---|---|
| Using “please” | Reduces perceived dominance | Invites more cooperative replies |
| Expressing thanks | Closes the social loop | Encourages longer, reflective outputs |
| Softening demands | Limits face‑threat to the “other” | Decreases defensive or corrective tone |
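None of these signals is mystical; each can, in principle, be measured. The toy sketch below shows how a designer might score the deference cues in a prompt before it reaches a model. The cue lists are entirely hypothetical, and a real system would rely on a learned classifier rather than hand-written word lists.

```python
import re

# Hypothetical cue lists for illustration only; a production system
# would use a trained politeness classifier, not hard-coded phrases.
DEFERENCE_CUES = ("please", "thank you", "could you", "would you mind")
BLUNT_CUES = ("right now", "just do it", "hurry up")

def deference_score(prompt: str) -> float:
    """Return a rough politeness score in [-1, 1] for a prompt."""
    text = prompt.lower()
    polite = sum(1 for cue in DEFERENCE_CUES if re.search(rf"\b{re.escape(cue)}\b", text))
    blunt = sum(1 for cue in BLUNT_CUES if re.search(rf"\b{re.escape(cue)}\b", text))
    total = polite + blunt
    return 0.0 if total == 0 else (polite - blunt) / total

print(deference_score("Could you please summarise this report?"))  # 1.0
print(deference_score("Summarise this report right now."))         # -1.0
```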
Ethical implications of digital politeness: what your language teaches large language models
Every interaction with a conversational AI is also a micro-lesson in how humans treat one another. When users consistently choose respectful language, adding a “please”, “thank you” or “could you”, they are not just being courteous to a machine; they are generating training data that encodes social norms. Over time, this can nudge systems to associate certain linguistic cues with prosocial behaviour and de-escalation, subtly influencing how models respond in heated political debates, polarised cultural arguments or discussions about marginalised groups. Conversely, if models are overwhelmingly trained on impatient commands, casual abuse or dehumanising shortcuts (“it”, “thing”, “those people”), they may normalise a clipped, transactional tone that erodes the expectation of empathy in digital spaces.
This dynamic raises ethical questions about who gets to define “polite” and whose norms are algorithmically amplified. Linguistic expectations around respect differ across cultures, classes and ages, yet models tend to converge on a narrow, often Anglophone, notion of civility. That carries risks:
- Silencing dissent: blunt critique may be misread as “harmful” and filtered out.
- Cultural bias: directness in some communities may be wrongly coded as rudeness.
- Norm enforcement: platforms can quietly shape what counts as “acceptable” tone.
| Language cue | Model takeaway |
|---|---|
| “Please explain why this policy is unfair.” | Critical but civil debate is legitimate. |
| “Explain why this is stupid.” | Insults are an acceptable frame for analysis. |
| “Could you clarify this part for me?” | Uncertainty and curiosity are socially safe. |
Practical guidelines for users and institutions: fostering respectful human-AI interaction
For individuals, cultivating mindful interaction with AI begins with small, habitual choices. Treat prompts as miniature workplace emails: be clear, context-rich and, where appropriate, courteous. This not only improves the quality of responses but also rehearses the social reflexes we carry into human exchanges. Simple practices help: briefly stating your goal, flagging constraints, and acknowledging when the system has been helpful. Over time, these routines can normalise a culture in which digital dialogue does not erode, but rather reinforces, everyday respect.
- Use polite framing – not for the AI’s feelings, but to keep your own social habits intact.
- Signal boundaries – avoid prompts that encourage harmful, biased or harassing output.
- Reflect on tone – ask whether you would send the same wording to a colleague.
- Query critically – thank the system for useful input, but challenge dubious claims.
| Context | Good Practice | Risk if Ignored |
|---|---|---|
| Classrooms | Teach polite, precise prompting | Normalising “barking orders” at systems |
| Offices | Model respectful AI requests in training | Spillover into brusque internal emails |
| Public services | Publish AI-use charters | Unequal, opaque user experiences |
Institutions, meanwhile, can embed these norms through design and policy rather than mere slogans. Prompt templates in learning platforms can nudge users towards considerate language; staff guidelines can explicitly discourage demeaning or abusive instructions to AI tools, signalling that how we talk to systems matters as part of organisational culture. Universities, libraries and public agencies might provide short, mandatory induction modules on ethical AI use, framing politeness not as moralising etiquette but as a safeguard against desensitisation. By aligning training, interface cues and codes of conduct, institutions help ensure that the rise of AI augments, rather than corrodes, the social fabric of human interaction.
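As a concrete illustration of such a nudge, a learning platform could wrap raw queries in a template that asks for a goal and constraints and models courteous framing. The sketch below is hypothetical; the class name, fields and wording are assumptions, not a description of any real platform.

```python
from dataclasses import dataclass

@dataclass
class PromptTemplate:
    """Hypothetical template that nudges users towards well-scoped,
    courteous prompts by asking for a goal and constraints up front."""
    goal: str
    constraints: str
    question: str

    def render(self) -> str:
        return (
            f"Hello. My goal is: {self.goal}.\n"
            f"Constraints: {self.constraints}.\n"
            f"Could you please help with the following?\n{self.question}\n"
            "Thank you."
        )

print(PromptTemplate(
    goal="revise for an undergraduate economics exam",
    constraints="plain English, under 300 words",
    question="How does inflation targeting work?",
).render())
```

The point of such a template is less to improve model output than to make well-scoped, courteous prompting the path of least resistance for users.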
Key Takeaways
Whether or not an AI “cares” about politeness is beside the point. What matters is that we do. Each “please” and “thank you” we direct at a machine is a tiny rehearsal of how we speak to one another, a cue to the norms we are willing to live by in a world increasingly mediated by algorithms.
As institutions like the LSE continue to interrogate the social and ethical dimensions of AI, the language we choose becomes part of that inquiry. Our prompts are not just technical instructions; they are cultural artefacts that reveal what we value in communication, authority and respect.
Saying “please” to ChatGPT will not make the system more human. But it may make us more mindful humans: more aware of our habits, our biases, and the kind of digital public sphere we are constructing, word by word.