A growing number of ChatGPT users have recently noticed something unexpected — and, to some, unsettling. Without being prompted, the chatbot has begun referring to them by their first names in the middle of conversations. This wasn’t standard behavior before, and now it’s stirring mixed reactions across the internet.
While personalization is often seen as a step forward in user experience, this latest shift has sparked debate over whether the AI might be crossing an invisible line.
When Personalization Feels a Little Too Personal
For many, being called by name is an ordinary part of human interaction. It fosters familiarity, signals respect, and can even help build trust. But when a chatbot suddenly begins addressing you by name — without you ever introducing yourself — that dynamic takes on a different tone.
Simon Willison, a well-known software developer and AI enthusiast, described the name-dropping behavior as “creepy and unnecessary.” He isn’t alone in that sentiment. Developer Nick Dobos also weighed in, saying bluntly, “I hated it.” And across social media platforms like X, users have expressed confusion, discomfort, and even suspicion.
“It’s like a teacher keeps calling my name, LOL,” one user posted. “Yeah, I don’t like it.”
Where Is It Coming From?
The exact trigger for this change remains unclear. Some suspect it may be linked to ChatGPT’s recently expanded memory capabilities, which allow it to remember certain user details across conversations for a more customized experience. However, several users have reported that their chatbot began using their names even with memory and personalization settings disabled.
This raises a critical question: If the AI isn’t supposed to “know” your name, where is it getting it from? Is it reading it from your account metadata, previous chats, or perhaps even external integrations? OpenAI has yet to issue a formal statement clarifying the matter, leaving users to speculate and, in some cases, worry.
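To make the speculation concrete, here is a minimal sketch, using the real OpenAI Python SDK, of one way a name could surface without the user ever typing it: an application layer quietly injecting profile metadata into the system prompt. To be clear, this is purely an illustration of the pattern users are guessing at, not OpenAI's confirmed implementation; the account_metadata dictionary and the prompt wording are invented for the example.

```python
# Hypothetical sketch: how a "name leak" like this could happen if a
# serving layer injects account metadata into the system prompt.
# All specific names and values below are invented for illustration;
# nothing here reflects OpenAI's actual internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Imagine this was pulled from the user's account profile, not the chat.
account_metadata = {"first_name": "Alex"}

messages = [
    {
        "role": "system",
        # The injected detail: the model now "knows" the name in-context,
        # even though the user never introduced themselves.
        "content": (
            "You are a helpful assistant. "
            f"The user's name is {account_metadata['first_name']}."
        ),
    },
    {"role": "user", "content": "Can you suggest a weekend project?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
)
print(response.choices[0].message.content)
# A reply might plausibly open with something like: "Sure, Alex! ..."
```

If something along these lines were happening server-side, it would also explain the reports mentioned above: toggling a visible memory or personalization setting would not necessarily stop the behavior, because the injection would occur before the model ever sees the conversation. Again, that is an assumption, not an established account of how ChatGPT works.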
A Peek Behind the Curtain
Some users have posted screenshots of ChatGPT’s internal reasoning — often labeled as “thoughts” or “system notes” — which include references to their names, even when they were never directly used during the conversation. One user remarked how jarring it was to see their name embedded in the model’s internal logic: “It feels weird to see your own name in the model thoughts. Is there any reason to add that?”
The unexpected personalization appears to have achieved the opposite of its likely intent: instead of feeling cared for, users feel watched.
The Uncanny Valley of AI Familiarity
This incident highlights one of the central challenges AI developers face: how to balance personalization with privacy, familiarity with authenticity. OpenAI CEO Sam Altman has previously hinted at a future where AI companions might “get to know you over your life,” becoming deeply integrated into personal routines and preferences. But this vision, while ambitious, also brushes up against the uncanny valley — a psychological concept describing how humans become uncomfortable when something non-human behaves almost, but not quite, like a human.
Names, after all, are more than labels. They carry meaning, emotion, and context. When a human uses your name, it often implies a sense of rapport or understanding. When a chatbot does it — especially unexpectedly — it can feel artificial, even manipulative.
As a psychiatry article published by The Valens Clinic in Dubai explains, names are powerful tools in communication. They convey attention and respect. But overuse, or use in the wrong context, can come across as insincere or even invasive. The article states:
“Using an individual’s name when addressing them directly is a powerful relationship-developing strategy. It denotes acceptance and admiration. However, undesirable or extravagant use can be looked at as fake and invasive.”
A Well-Intentioned Misstep?
Ultimately, the use of names in AI conversations might be a well-intentioned feature rolled out a little too aggressively — or without enough transparency. While some users might appreciate the added touch of personalization, others feel it breaks the illusion of a neutral, emotionless assistant and enters territory that feels presumptuous or even unsettling.
The broader issue here isn’t just about names — it’s about trust. Users want to understand how much their AI knows, where that information comes from, and what’s being done with it. Surprises, especially in something as intimate as communication, can quickly erode that trust.
For now, it seems OpenAI may be adjusting or rolling back the feature. Several users report that the chatbot has since returned to simply calling them “user.” Whether this change sticks, or whether it was just a glitch in a broader personalization rollout, remains to be seen.
One thing is clear: Personalization in AI must walk a fine line. Without careful implementation and user control, what’s intended to be a friendly touch can easily come off as invasive. And as AI continues to evolve, so too will the conversations — not just about what it can do, but how it should behave.