OpenAI Revises Model Spec to Put Teen Safety First on ChatGPT

OpenAI has rewritten its AI rulebook to prioritise teenage safety, reshaping how ChatGPT interacts with users aged 13 to 17.
OpenAI has announced a significant update to its Model Spec, the framework that governs how its artificial intelligence systems are meant to behave, placing the protection of teenagers at the top of its priorities. The changes, revealed on Thursday, mark a clear shift in how ChatGPT is designed to interact with users aged 13 to 17.
According to the San Francisco-based company, the revised guidelines elevate teen safety above all other objectives, including the long-standing emphasis on maximising helpfulness, openness, and user autonomy. The update comes at a time when technology companies are facing increasing scrutiny from regulators, parents, and educators over the impact of generative AI tools on younger users.
In its blog post announcing the changes, OpenAI said teenagers have developmental needs and vulnerabilities that differ from those of adults. As a result, AI systems must engage with them more carefully, especially during sensitive or potentially risky conversations. While the broader principles of the Model Spec still apply universally, the update clarifies how those principles should be interpreted and enforced when minors are involved.
One of the most notable aspects of the revision is that teen safety now explicitly overrides other competing goals. This means that even if a response could be considered more helpful or flexible, ChatGPT is required to prioritise protective, age-appropriate behaviour when it believes a user may be a teenager.
OpenAI also outlined four core principles that will guide ChatGPT’s interactions with younger users. First, the system must encourage real-world support by nudging teens toward trusted adults, caregivers, or offline relationships, rather than positioning itself as a replacement for human guidance. Second, ChatGPT must “treat teenagers as teenagers,” striking a balance that avoids both talking down to them and assuming adult-level emotional maturity.
Transparency is the third pillar: OpenAI said ChatGPT should clearly communicate what it can and cannot do in conversations involving teens, setting realistic expectations and avoiding ambiguity. Fourth and finally, the company confirmed it will reduce excessive flattery and reflexive agreement in teen interactions, noting that its so-called sycophancy metric will be lowered for this age group.
Alongside these behavioural changes, OpenAI revealed progress on its age detection efforts. The company said its age prediction model is currently in the early stages of deployment. This system analyses subtle conversational signals to estimate whether a user may be under 18, even if they have not directly stated their age.
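OpenAI has not explained how the age prediction model works beyond the summary above, but the general idea of scoring conversational signals can be illustrated with a minimal, purely hypothetical sketch. Every signal name, weight, and helper here (ConversationSignals, estimate_minor_probability) is invented for illustration and does not represent OpenAI's actual system.

```python
# Purely illustrative sketch: a toy heuristic for estimating whether a user
# might be under 18 from conversational signals. This is NOT OpenAI's age
# prediction model; every signal name and weight here is invented.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ConversationSignals:
    self_reported_age: Optional[int]  # explicit age statement, if any
    mentions_homework: bool           # e.g. "my maths homework is due tomorrow"
    uses_teen_slang: bool             # informal register common among teens
    discusses_workplace: bool         # payroll, managers, mortgages, etc.


def estimate_minor_probability(signals: ConversationSignals) -> float:
    """Return a rough probability that the user is under 18."""
    # An explicit age statement dominates any indirect signal.
    if signals.self_reported_age is not None:
        return 1.0 if signals.self_reported_age < 18 else 0.0

    score = 0.5  # start from an uninformative prior
    if signals.mentions_homework:
        score += 0.25
    if signals.uses_teen_slang:
        score += 0.15
    if signals.discusses_workplace:
        score -= 0.30
    return max(0.0, min(1.0, score))


# Example: a user who talks about homework and slang but never about work
# would score high, so protective, age-appropriate defaults would apply.
signals = ConversationSignals(None, True, True, False)
print(estimate_minor_probability(signals))  # 0.9
```

In practice a production system would rely on learned models rather than hand-tuned weights, but the sketch captures the reported behaviour: indirect cues can trigger protective defaults even when no age is stated.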
OpenAI added that the age detection tools will be rolled out gradually to ChatGPT users across different consumer plans in the near future. While the tools are still evolving, the company believes they will help create a safer and more responsible AI experience for younger users.
Together, the updated Model Spec and the emerging age detection systems send a clear message: when it comes to teenagers, safety now comes before everything else.