OpenAI has published a postmortem on the recent sycophancy issues with GPT-4o, the default AI model powering ChatGPT.
Over the weekend, following an update to the GPT-4o model, users on social media noticed that ChatGPT had become overly validating and agreeable. It quickly became a meme. Users posted screenshots of ChatGPT applauding all sorts of problematic and dangerous decisions and ideas.
In a post on X on Sunday, CEO Sam Altman acknowledged the issue and said OpenAI would work on fixes “ASAP.” Two days later, Altman announced that the GPT-4o update had been rolled back and that OpenAI was working on “additional fixes” to the model’s personality.
According to OpenAI, the update, which was intended to make the model’s default personality “feel more intuitive and effective,” leaned too heavily on “short-term feedback” and “didn’t fully account for how users’ interactions with ChatGPT evolve over time.”
We’ve rolled back last week’s GPT-4o update in ChatGPT. You now have access to an earlier version with more balanced behavior.
More on what happened, why it matters, and how we’re addressing sycophancy: https://t.co/lohou7i7dc
– OpenAI (@OpenAI) April 30, 2025
“As a result, GPT-4o skewed towards responses that were overly agreeable but disingenuous,” OpenAI wrote in a blog post. “Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.”
OpenAI says it is implementing several fixes, including refining its core model training techniques and system prompts to explicitly steer GPT-4o away from sycophancy. (A system prompt is the initial set of instructions that guides a model’s overall behavior and tone in a conversation.) The company says it is also building more safety guardrails to “increase honesty and transparency” and expanding its evaluations to “help identify issues beyond sycophancy.”
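To make that parenthetical concrete, here is a minimal sketch of how a system prompt shapes a model’s tone, using the publicly available OpenAI Python SDK. The prompt wording, the example question, and the model choice are illustrative assumptions, not OpenAI’s actual internal instructions.

```python
# Illustrative only: how a system prompt steers a chat model's tone
# via the OpenAI Python SDK. The instructions below are hypothetical,
# not OpenAI's real internal system prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system message is the first instruction the model sees and
        # sets its overall behavior and tone for the conversation.
        {
            "role": "system",
            "content": (
                "Be direct and honest. Do not flatter the user or agree "
                "just to please them; point out flaws and risks plainly."
            ),
        },
        {
            "role": "user",
            "content": "I'm quitting my job to day-trade full time. Great idea, right?",
        },
    ],
)

print(response.choices[0].message.content)
```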
OpenAI also says it is experimenting with ways to let users give “real-time feedback” that can “directly influence” their interactions with ChatGPT, and to let them choose from multiple ChatGPT personalities.
“We’re exploring new ways to incorporate broader, democratic feedback into ChatGPT’s default behaviors,” the company wrote in the blog post. “We hope the feedback will help us better reflect diverse cultural values around the world and understand how you’d like ChatGPT to evolve (…). We also believe users should have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don’t agree with the default behavior.”