The company stated that specialized accounts for teenagers aged 13–17 can now be linked to parent accounts, allowing various restrictions to be applied.
Graphic content, romantic and sexual roleplay, dangerous viral trends, and "extreme beauty ideals" will be restricted on young users' accounts.
Parents will also be able to block their children from using ChatGPT during certain hours, disable image generation, and choose not to allow chats to be used in AI training.
OpenAI stated that parents will receive an alert if there are signs a young user may be intending self-harm, adding that these steps were introduced in response to growing pressure over child safety.
The new rules were announced after a family sued OpenAI, claiming ChatGPT had encouraged their child's suicide, and ahead of a U.S. Senate Judiciary Committee hearing on the potential harms of AI. While age verification at login is not yet mandatory, OpenAI said it may in the future require users to upload ID for verification.
OpenAI CEO Sam Altman wrote in a blog post on September 16 that chatbots for young people should not engage in flirting or provide content related to suicide, whereas versions for adults would allow greater freedom.
Altman wrote, "We want to treat our adult users as adults, offering them the broadest freedom possible without causing harm or restricting others' freedom."