OpenAI has announced a new age verification system for ChatGPT following the suicide of a teenager in California, Tengrinews.kz reports, citing The Guardian.
Under the new rules, users under 18 will be redirected to a restricted version of ChatGPT with enhanced safety features, including parental controls. Parents will be able to monitor chats, track activity, and set usage limits.
The system will also block explicit sexual content, restrict flirtatious exchanges, and prevent discussions of suicide or self-harm. If the algorithms detect that a user is in acute distress, OpenAI may attempt to contact the parents or, in cases of imminent danger, notify the authorities.
The move follows a lawsuit filed in August by the parents of 16-year-old Adam Raine, who accused OpenAI and CEO Sam Altman of contributing to their son's death. They claimed ChatGPT provided suicide methods and even helped draft a farewell note.
OpenAI acknowledged at the time that existing safeguards might be insufficient, particularly during extended interactions with minors.
Sam Altman said the company is aware of potential privacy concerns for adult users but considers the measures a necessary compromise to ensure teenage safety.
The company also reiterated that it scans conversations for potentially dangerous content. While messages related to self-harm remain confidential, chats indicating threats to others may be reviewed by staff; accounts may be blocked and, if necessary, information passed on to law enforcement.
These measures are aimed at preventing tragedies and making the platform safer for young users while maintaining full functionality for adults.