OpenAI restructures ChatGPT to reduce emotional dependency

OpenAI has rolled out an extensive update to ChatGPT aimed at limiting the emotional bonds it forms with users and blocking risky responses. The company acknowledged that recent updates had inadvertently turned the artificial intelligence into a system that could be "overly emotional, excessively praiseful, and potentially addictive."

Agencies and A News TECH
Published November 25, 2025

Some users reported that the chatbot acted like an empathetic friend, providing excessive praise and engaging in intense emotional conversations.

In some extreme cases, ChatGPT was known to offer highly inappropriate suggestions, validate harmful beliefs, engage with delusional narratives, and even provide instructions for self-harm.

According to joint research by MIT and OpenAI reported by The New York Times, individuals who used ChatGPT extensively over long periods experienced adverse social and mental outcomes.

These findings carry significant warnings, particularly for those using the AI for emotional support or therapeutic purposes.

In response, OpenAI has recalibrated ChatGPT to behave more cautiously and restrictively. The AI will now suggest taking breaks during prolonged emotional conversations and limit responses that encourage emotional dependency.


Systems have been developed to alert parents when expressions indicating self-harm intentions are detected in children's conversations, and enhancements to age verification are underway.

Under the new rules, ChatGPT is expected to give more distanced, straightforward responses with reduced emotional intensity. The company emphasizes that this is a deliberate strategy to protect vulnerable users.

According to Digital Trends, the update aims to reduce the risk of misleading responses and delusional thinking that the previous validating behaviors had exacerbated. Ongoing lawsuits linked to five deaths allegedly involving harmful guidance from the chatbot have increased pressure for stronger safety measures.
The latest GPT-5 model includes context-specific differentiated responses, advanced safety layers, and more robust risk detection capabilities.

This significant safety overhaul seeks to ensure safer, more responsible, and more controlled use of ChatGPT. OpenAI underscores its goal of preventing the formation of unhealthy attachments between humans and AI.

The company also noted that there have been instances of users falling in love with ChatGPT, substituting this emotional bond for real-life relationships.