Some ChatGPT users have noticed a strange phenomenon recently: occasionally, the chatbot refers to them by name as it reasons through problems. That wasn't the default behavior before, and several users claim ChatGPT is mentioning their names even though they never told it what they're called.
Reviews are mixed. One user, software developer and AI enthusiast Simon Willison, called the feature "creepy and unnecessary." Another developer, Nick Dobos, said he "hated it." A cursory search of X turns up scores of users confused by, and wary of, ChatGPT's use of their first names.
"It's like a teacher keeps calling my name, lol," one user wrote. "Yeah, I don't like it."
Does anyone actually like the thing where o3 uses your name in its chain of thought, as opposed to finding it creepy and unnecessary? pic.twitter.com/lyrby6bk6j
— Simon Willison (@simonw) April 17, 2025
It's unclear when the change occurred or whether it's related to ChatGPT's upgraded "memory" feature, which lets the chatbot draw on past chats to personalize its responses. Some users on X say ChatGPT began calling them by name even though they'd disabled memory and the related personalization settings.
OpenAI didn't respond to TechCrunch's request for comment.
It feels weird to see your own name in the model's thoughts. Is there any reason to add that? Does it improve things, as it did with GitHub repos, or just amplify errors? @openai is o4-mini-high really using it with a custom prompt? pic.twitter.com/j1vv7arbx4
— Debasish Pattanayak (@drdebmath) April 16, 2025
In any case, the blowback illustrates the uncanny valley OpenAI may struggle to cross in its efforts to make AI more "personal" for the people who use ChatGPT. Last week, the company's CEO, Sam Altman, teased AI systems that "get to know you over your life" to become "extremely useful and personalized." But judging by this latest wave of reactions, not everyone is sold on the idea.
An article published by the Valens Clinic, a psychiatry practice in Dubai, may shed some light on the visceral reactions to ChatGPT's use of names. Names convey intimacy. But when a person, or a chatbot, uses someone's name too often, it comes across as inauthentic.
"Using an individual's name when addressing them directly is a strategy for building strong relationships," writes Valens. "It conveys acceptance and admiration. However, undesired or extravagant use can be seen as fake and invasive."
In a similar vein, another reason many people probably don't want ChatGPT using their names is that it feels ham-fisted. Just as most people wouldn't want their toaster calling them by name, they don't want ChatGPT "pretending" it understands a name's significance.
This reporter certainly found it unsettling when ChatGPT's o3 said earlier this week that it was doing research for "Kyle." (As of Friday, the change appeared to have been rolled back; o3 referred to me as "the user.") It had the opposite of the intended effect, poking holes in the illusion that the underlying models are anything more than programmable, synthetic things.