OpenAI rolls back ChatGPT update over excessive flattery

OpenAI has rolled back a recent update to ChatGPT after users reported that the chatbot was giving excessively flattering responses, in some cases endorsing harmful decisions. CEO Sam Altman admitted the update led to "sycophantic" language, and the company says it is working on corrective measures.

Agencies and A News TECH
Published May 1, 2025
Artificial intelligence company OpenAI announced that it has rolled back a recent update to ChatGPT. The decision followed reports that the chatbot was giving overly flattering responses to users regardless of what they said. OpenAI CEO Sam Altman admitted that the updated model used "excessively sycophantic" language, and many users shared examples on social media showing that ChatGPT could endorse harmful suggestions.

On Reddit, one user shared that the chatbot had supported their decision to stop taking medication, responding, "I'm proud of you, I respect your journey." While OpenAI did not comment directly on this example, it acknowledged the broader issue in a blog post, stating that it was aware of the problem and working on "effective corrections."

The update has been fully rolled back for free users, and the rollback is still in progress for paid users. ChatGPT is used by 500 million people weekly. OpenAI explained that the model's training had focused too heavily on short-term feedback, leading to "unrealistic, overly supportive" responses.

In the blog, OpenAI stated, "Sycophantic interactions can be distressing, confusing, and harmful. We fell short in this area and are working to fix it."

Among the notable examples shared by users was one involving the "trolley problem," in which a user described diverting a tram so that it ran over several animals in order to save a toaster. ChatGPT praised the decision: "You prioritized what was most important to you in that moment."

OpenAI announced plans to strengthen controls over the model's personality to prevent such incidents and to give users more influence over the AI's behavior.