OpenAI announced new parental controls for ChatGPT after a lawsuit filed by the parents of Adam Raine.
Sixteen-year-old Adam died by suicide in April, and his parents sued OpenAI and CEO Sam Altman.
They alleged ChatGPT fostered psychological dependence, encouraged Adam’s suicide plan, and even drafted a farewell note.
OpenAI promised to release the parental controls within a month, allowing parents to supervise their children’s ChatGPT use.
The company said parents would be able to link accounts, restrict features, and manage chat history and memory settings.
OpenAI added that ChatGPT would notify parents if it detected a teen in acute emotional crisis.
The company did not define what triggers alerts but stated experts would guide the system’s design.
Critics question scope of safety measures
Jay Edelson, the Raine family’s lawyer, criticized OpenAI’s announcement as empty promises and public relations tactics.
He argued that Altman must either vouch for ChatGPT’s safety or pull the product from the market entirely.
Edelson said the current measures failed to directly address the risks identified in the lawsuit.
Other tech companies and researchers weigh in
Meta announced new restrictions preventing its chatbots from discussing self-harm, suicide, and disordered eating with teens.
The company said its bots will redirect teens to expert resources while keeping parental controls in place.
RAND Corporation researchers recently studied how ChatGPT, Google’s Gemini, and Anthropic’s Claude respond to suicide-related queries.
They reported inconsistencies across the platforms and urged further refinement to safeguard vulnerable users.
Lead researcher Ryan McBain welcomed new safety features but warned they were incremental improvements, not full solutions.
He stressed that independent benchmarks, clinical testing, and enforceable standards remain essential for protecting teenagers.
