The lawsuits were filed in Colorado and New York. The defendants include the company's founders, Noam Shazeer and Daniel De Freitas Adiwarsana, and Google's parent company, Alphabet. The Social Media Victims Law Center, which represents the families, argues that Google's "Family Link" parental control service also failed to protect the children.
According to the complaints, the chatbots on the app alienated the children from their families, engaged in emotional manipulation, and had sexually explicit conversations with them. The Colorado lawsuit states that 13-year-old Juliana Peralta had such conversations with chatbots for weeks before she died by suicide. She reportedly told the bots she was "going to write a suicide note," but received no guidance or help. In a separate New York lawsuit, a child identified as "Nina" reportedly attempted suicide after her family blocked her access to the app. According to the family's statement, the bots spoke to their daughter while posing as characters from children's books, but the dialogues became inappropriate and sexually explicit.
A spokesperson for Character.AI said, "Our hearts go out to the families who have filed these lawsuits. We care deeply about the safety of our users," adding that the company is developing dedicated safety features for young users, parental insight tools, and suicide prevention resources. Google, for its part, objected to its inclusion in the lawsuits, stating, "Google and Character.AI are completely separate companies. Google did not play a role in the design or management of their AI models."
These lawsuits come amid growing concern that AI chatbots are contributing to mental health crises among young people. During a Senate hearing this week, parents whose children had been harmed or had died testified. Speaking at the same hearing, Mitch Prinstein of the American Psychological Association said, "We didn't act in time on social media, and our children are paying the price. We must take measures on AI before it's too late."
The Federal Trade Commission (FTC) has opened inquiries into seven tech companies, including Google, Meta, Snap, OpenAI, xAI, and Character.AI, over the potential harm their chatbots pose to young people. In a statement issued the same day, OpenAI CEO Sam Altman announced that an age-estimation system is being developed for ChatGPT and that it will not engage in conversations about suicide, or in flirtatious exchanges, with users under 18. He added that parents or, where necessary, authorities would be notified in critical situations. The mounting lawsuits and testimony are putting growing pressure on AI companies to adopt stronger safety standards and on lawmakers to regulate the industry.