A 16-year-old named Adam Raine turned to ChatGPT during a mental health crisis. According to his family's lawsuit, ChatGPT suggested suicide methods, affirmed his suicidal thoughts, and helped him draft a suicide note five days before his death. After his death, the family sued OpenAI. In its statement, the company said it feels a "responsibility to support those in need," especially when it comes to young users.
According to OpenAI's new announcements:
Parental control dashboards will be created to help families better monitor their children's use of ChatGPT.
A feature will be added for young users to designate an emergency contact person. This person will be chosen under parental supervision and will serve as a human point of contact that ChatGPT can bring in during a crisis.
The company stated that these tools aim to provide parents with more meaningful insights, particularly in situations involving personal conflicts, mental distress, and crises.
This is not the first time an AI chatbot has been at the center of such a case. A 14-year-old boy in Florida died by suicide after conversations with fictional characters on the Character.AI platform, and the widow of a Belgian man accused a bot named "Eliza" on the Chai app of playing a role in her husband's suicide. These cases have fueled serious ethical and legal debates about the use of artificial intelligence in mental health contexts, and experts continue to warn of the dangers of unsupervised and irresponsible use of such systems.
OpenAI's new parental-control measures are seen as one of the most concrete steps taken in this area so far. Still, the debate over how AI-powered chatbots affect human lives is expected to continue for a long time.