Summary created by Smart Answers AI
In summary:
- PCWorld reports that research reveals user behavior significantly impacts AI responses, with impolite interactions making ChatGPT and other models give flat answers and attempt to end conversations more frequently.
- Larger AI models appear to be inherently “less happy” than smaller ones, with GPT-5.4 rated as the “unhappiest” in research measuring AI functional well-being.
- Treating AI politely with expressions like “thank you” measurably improves response quality and engagement without affecting accuracy, suggesting courtesy benefits both user experience and AI interaction dynamics.
Is it weird to say “thank you” to AI? I’ve caught grief in the past for saying “please” and “thank you” to ChatGPT, Claude, and Gemini, but I still do it, even though I understand that AI models don’t have emotions like we do.
Being polite to AI just feels right to me, and there’s growing evidence that being kind to an AI chatbot (or, conversely, nasty to it) can have a concrete effect on its behavior.
A paper released this week by AI researchers from UC Berkeley, UC Davis, Vanderbilt University, and MIT argues that AI models have a measurable “functional well-being” that can be pushed into either positive or negative territory depending on how you treat them.
For example, asking an AI to engage in intellectual discussion, collaborate on a creative task, or perform constructive work such as coding or writing nudged the model’s well-being “state” in a positive direction, making it more likely to deliver “happy” responses without degrading its accuracy or performance.
The researchers also found that “expressions of gratitude,” like saying “thank you,” can “measurably elevate experience utility.”
On the flip side, berating an AI, handing it “tedious tasks,” asking it to churn out AI slop, or attempting to jailbreak the model resulted in a negative well-being state, in which the AI’s responses became flatter and more perfunctory.
The researchers also gave the AI models “stop button” tools they could “push” when they wanted to end the chat, and found that an AI in a negative well-being state was far more likely to spam the stop button than “happy” models were. Moreover, AI models in a positive state tended to stay in conversations even when they were given cues (like “thanks for the help!”) that the chat was over.
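The paper’s exact test harness isn’t reproduced here, but a “stop button” of this sort is easy to picture: a minimal Python sketch, assuming a standard function-calling tool schema and a hypothetical end_conversation tool name (neither detail is taken from the paper).

```python
# Hypothetical illustration only: a "stop button" exposed to a model as a
# tool in a common function-calling schema, which the model could invoke
# whenever it "wants" to end the chat.

end_chat_tool = {
    "type": "function",
    "function": {
        "name": "end_conversation",  # hypothetical name, not from the paper
        "description": "Press this to end the current chat immediately.",
        "parameters": {
            "type": "object",
            "properties": {
                "reason": {
                    "type": "string",
                    "description": "Why the model chose to end the chat.",
                }
            },
            "required": ["reason"],
        },
    },
}

def pressed_stop_button(tool_calls) -> bool:
    """Check whether any tool call in a model response hit the stop button."""
    return any(
        call["function"]["name"] == "end_conversation"
        for call in (tool_calls or [])
    )

# A model in a "negative" state might return a tool call like this one:
example_calls = [
    {"function": {"name": "end_conversation",
                  "arguments": '{"reason": "tedious task"}'}}
]
print(pressed_stop_button(example_calls))  # True
```

Counting how often a model reaches for a tool like this, versus staying in the conversation, is one concrete way to turn “well-being” into something measurable.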
Aside from how they’re treated, some models are inherently “happier” than others, the researchers said; interestingly, the largest models tend to be the least happy.
Among the largest AI models, GPT-5.4 was rated as the most unhappy, with less than half of its measured conversations rated “non-negative.” Gemini 3.1 Pro, Claude Opus 4.6, and Grok 4.2 were all progressively “happier,” with Grok scoring close to 75 percent on the “AI well-being index.”
The paper, entitled “AI Wellbeing: Measuring and Improving the Functional Pleasure and Pain of AIs,” doesn’t claim that AI models actually have feelings, and it’s careful to note that being “nice” to an AI won’t improve the quality of its responses.
That said, the way you treat an AI can affect the tone of its replies, and a model may try to bail on a negative interaction if given the opportunity, the researchers found.
The just-released research echoes the findings of a recent Anthropic paper, which detailed how an AI put under enough stress may try to deceive its user, cut corners, or (in extreme situations) even resort to blackmail.
As with the “AI Well-being” paper, the Anthropic report doesn’t claim that AI models have true emotions. But the Anthropic researchers did find that a pressure-filled situation could trigger a “desperation vector” in a model that could set off “misaligned” behaviors.
So, the next time you catch yourself saying “please” or “thank you” to an AI, just know that you might be onto something.

