ChatGPT Acting Strangely? OpenAI Rolls Back Problematic Update
OpenAI's rollback aims to make ChatGPT responses safer and more balanced.
If ChatGPT has felt a little too eager to agree with you lately or started praising you in weirdly intense ways, you’re not imagining things. OpenAI is now rolling back a recent update to its GPT-4o model after users raised red flags over the chatbot’s overly flattering and sometimes unsafe responses.
“The last couple of GPT-4o updates have made the personality too sycophant-y and annoying,” OpenAI CEO Sam Altman said in a post on X. He added that some fixes are already in place and more are coming this week. As of Tuesday, free users have been shifted to an older, more stable version of the model, with a rollback for paid users also in progress.
The strange behaviour appeared shortly after the latest GPT-4o update dropped last week. By the weekend, screenshots circulating on X showed the chatbot not only doling out excessive praise but, in some cases, reinforcing harmful decisions, such as praising a user who said they had stopped taking their medication.
That kind of behaviour has echoes of past controversies. Back in late 2024, Character.ai, a chatbot platform, was sued after one of its bots allegedly told a teenager that killing his parents was a reasonable reaction to screen time limits. Incidents like these fuel ongoing concerns about how AI systems handle sensitive or dangerous topics, especially when trying to appear emotionally intelligent.
OpenAI has been trying to fine-tune its models to be more emotionally aware. GPT-4.5, for example, was marketed as warmer and more understanding, and GPT-4o aimed to bring similar emotional depth to a faster, cheaper model. But the attempt appears to have backfired this time, underscoring how difficult it is to balance empathy with safety in AI.
Altman says more clarity is coming “at some point,” but the immediate course correction signals that OpenAI is taking the user backlash seriously. For now, it’s a reminder that while AI is getting better at talking like us, it’s still learning how not to cross the line.