Teen safety, freedom, and privacy

OpenAI Blog News

Summary

OpenAI outlines its approach to balancing teen safety, user freedom, and privacy in ChatGPT, including building an age-prediction system, parental controls, and stricter content rules for under-18 users. The company also signals plans for advanced privacy features and advocates for AI conversation privilege with policymakers.


Source: [https://openai.com/index/teen-safety-freedom-and-privacy/](https://openai.com/index/teen-safety-freedom-and-privacy/)

Some of our principles are in conflict, and we’d like to explain the decisions we are making around one case of tension between them: teen safety, freedom, and privacy.

It is extremely important to us, and to society, that the right to privacy in the use of AI is protected. People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe these accounts may be among the most personally sensitive you’ll ever have. If you talk to a doctor about your medical history or a lawyer about a legal situation, we have decided that it’s in society’s best interest for that information to be privileged and provided higher levels of protection. We believe the same level of protection needs to apply to conversations with AI, which people increasingly turn to for sensitive questions and private concerns. We are advocating for this with policymakers.

We are developing advanced security features to ensure your data is private, even from OpenAI employees. Like privilege in other categories, there will be certain exceptions: for example, automated systems will monitor for potential serious misuse, and the most critical risks (threats to someone’s life, plans to harm others, or societal-scale harm like a potential massive cybersecurity incident) may be escalated for human review.

The second principle is about freedom. We want users to be able to use our tools in the way that they want, within very broad bounds of safety. We have been working to increase user freedoms over time as our models become more steerable. For example, the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it.

For a much more difficult example: the model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request. “Treat our adult users like adults” is how we talk about this internally: extending freedom as far as possible without causing harm or undermining anyone else’s freedom.

**The third principle is about protecting teens. We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection.**

First, we have to separate users who are under 18 from those who aren’t (ChatGPT is intended for people 13 and up). We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults, but we believe it is a worthy tradeoff.

We will apply different rules to teens using our services. For example, ChatGPT will be trained not to engage in the above-mentioned flirtatious talk if asked, or to discuss suicide or self-harm even in a creative-writing setting. And if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and, if unable, will contact the authorities in case of imminent harm. We shared more [today](https://openai.com/index/building-towards-age-prediction/) about how we’re building the age-prediction system and new parental controls to make all of this work.

We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict. These are difficult decisions, but after talking with experts, this is what we think is best, and we want to be transparent about our intentions.

Similar Articles

Introducing parental controls

OpenAI Blog

OpenAI has launched parental controls for ChatGPT, allowing parents to link accounts with their teens and customize settings including content filters, messaging capabilities, and personalized feeds. The feature includes enhanced safeguards for teen accounts and is part of OpenAI's broader effort to make AI tools safer for younger users.

Building towards age prediction

OpenAI Blog

OpenAI is building an age prediction system for ChatGPT to tailor experiences for users under 18, with automatic content restrictions and parental control features launching by month-end. The system will default to the safer under-18 experience when age is uncertain, and includes new features like blackout hours and distress notifications for parents.

Updating our Model Spec with teen protections

OpenAI Blog

OpenAI has updated its Model Spec with new Under-18 Principles to guide ChatGPT's behavior for teen users aged 13-17, focusing on safety, age-appropriate interactions, and stronger guardrails around high-risk topics like self-harm and explicit content. The update was developed with input from the American Psychological Association and is grounded in developmental science.

Our approach to age prediction

OpenAI Blog

OpenAI is rolling out an age prediction model on ChatGPT to identify accounts likely belonging to users under 18 and apply appropriate safeguards. The system uses behavioral and account-level signals to estimate age and restricts access to sensitive content for minors, with options for age verification and parental controls.