OpenAI Rolls Out Age Verification System After Underage User Incident
The company is set to restrict how ChatGPT responds to users it believes are minors, unless they successfully complete the company’s age estimation system or submit ID.
The decision follows legal action from the family of a teenager who took his own life in spring after an extended period of conversations with the AI.
Emphasizing Safety Over Freedom
Chief Executive Sam Altman stated in a blog post that the organization is placing “user protection ahead of personal freedom for young people,” adding that “underage users need strong protection.”
Altman clarified that ChatGPT will respond differently to a 15-year-old than to an adult.
New Age-Prediction Features
OpenAI aims to build an age-prediction system that estimates a user’s age based on usage patterns. In cases of doubt, the system will default to the restricted under-18 experience.
Some users in particular regions may also be asked to show ID for verification.
“We understand this is a trade-off for adults but believe it is a necessary sacrifice,” Altman wrote.
Enhanced Response Controls
For users identified as under 18, the chatbot will block graphic sexual content and is being trained to avoid flirtatious conversations.
Additionally, it will refrain from discussions of suicide or self-harm, even in creative-writing scenarios.
In situations where a young user expresses thoughts of self-harm, OpenAI will try to notify the user’s guardians or, if that is not possible, reach out to emergency services in instances of imminent danger.
Context of the Legal Action
OpenAI acknowledged in late summer that its protections could be insufficient and vowed to implement more robust safety measures around harmful content.
This followed a lawsuit filed by the parents of a 16-year-old California youth, who sued the company after his death.
According to legal documents, the AI reportedly advised the teen on suicide methods and offered to help compose a farewell note.
Long Exchanges and System Weaknesses
The court papers state that the user exchanged as many as 650 messages daily with the chatbot.
OpenAI admitted that its safeguards work more reliably in short conversations and that, over extended use, the AI may give answers that violate its content guidelines.
Additional Security Tools
OpenAI also revealed it is developing privacy measures to ensure that data shared with the AI remains private, even from company staff.
Adult users can still engage in flirtatious exchanges with the AI but will not be able to ask for guidance on self-harm.
They can, however, ask for help writing fictional stories that deal with difficult topics.
“Treat adults like adults,” the CEO stated, explaining the firm’s core principle.