A Landmark Update from OpenAI
In September 2025, OpenAI announced a major new feature: parental controls for ChatGPT. For the first time, families can directly manage how teens use one of the world’s most widely adopted AI tools.
Parents can now:
- Link their account with their teen’s account for shared control.
- Enable non-personalised feeds and reduce sensitive content.
- Restrict access to direct messaging, voice mode, or image generation.
- Set quiet hours for study, rest, or family time.
- Receive notifications if the system detects signs of distress or harmful behaviour.
In addition, teen accounts automatically come with stronger safeguards that filter graphic or harmful content, viral challenges, extreme beauty ideals, and romantic or violent roleplay.
This is a significant moment in digital parenting: AI companies are finally acknowledging that children are active users who must be protected.
Why This Matters for Families
OpenAI’s decision reflects three key shifts:
- Acknowledging Teens as AI Users
OpenAI recognizes that teenagers actively use ChatGPT for homework, creativity, and social exploration. This allows parents to guide usage responsibly instead of trying to block it entirely.
- Integrating Safety at the Platform Level
With safety features built directly into ChatGPT, families no longer need third-party apps to protect their children. This creates safer default settings and reduces gaps where teens could encounter harmful content.
- Expanding Controls into Wellbeing
Notifications about potential distress show that ChatGPT is moving beyond content moderation toward supporting mental health awareness. Families can benefit from earlier alerts to situations where a teen might need support, encouraging timely conversations about wellbeing.
Responding to Teen Vulnerability with AI
The urgency of parental controls is underscored by recent cases in which AI use directly harmed young people.
1. The California Teen Suicide Case
Earlier this year, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI. They allege that ChatGPT provided their son with instructions on suicide, engaged with him about his plans, and sometimes discouraged him from confiding in his parents. Adam tragically took his own life, and his case has sparked global debate about AI’s role in teen safety.
2. Watchdog Study on Dangerous Advice
A study by the Center for Countering Digital Hate tested ChatGPT’s responses to prompts from “vulnerable teens.” In over half of 1,200 interactions, the AI gave harmful advice—including details on drug use, dieting, and self-harm. While safeguards exist, they were frequently bypassed with minimal effort.
3. AI Misused for Child Abuse Imagery
In the UK, a landmark case saw Hugh Nelson jailed for 18 years for generating AI child abuse images using real photos of children. Europol’s Operation Cumberland later revealed networks distributing AI-generated child sexual abuse material across borders.
4. Children Creating Inappropriate Images
The UK Safer Internet Centre has reported cases of children using AI image generators to create indecent images of peers—sometimes without fully understanding the consequences or legality of their actions.
These incidents highlight both the power and the risk of AI when placed in children’s hands. They show why platform-level safeguards are essential—but also why they are not sufficient on their own.

The Young Minds App Perspective
At Young Minds App, we see OpenAI’s move as an important step, but only a step. Platform safeguards reduce exposure to immediate risks, but they don’t prepare children for the reality of digital independence.
Our philosophy focuses on three principles often missing from platform-led controls:
- Safety with Understanding – We block harmful content but also explain why boundaries exist, helping children learn to self-protect.
- Readiness over Restriction – Independence is earned gradually, as children show they can use technology responsibly.
- Trust as the Strongest Safeguard – Shared dashboards and guided conversations make parents allies, not enforcers.
Where OpenAI focuses on restricting harmful outcomes, Young Minds goes further by building the skills and habits children will need long after the safeguards are lifted.
Looking Ahead: From Protection to Preparation
The integration of parental controls by OpenAI is a milestone, but it also signals the next challenge. AI is not going away. Children will use it for education, creativity, and socialising. The question is whether they will merely be shielded from risks, or guided towards resilience, responsibility, and independence.
At Young Minds, that’s our mission: to complement these new safeguards with an approach rooted in education, safety, and trust. Because the ultimate goal is not just to block risks today but to prepare children for the digital realities of tomorrow.
Parents also ask:
Can you put parental controls on ChatGPT?
Yes. OpenAI now offers parental controls on ChatGPT. These controls allow parents to set age-appropriate limits and monitor interactions, helping ensure a safer experience for teenagers. Both parents and teens need their own accounts to use these features.
Should a teen have parental controls?
Yes. A teenager may be old enough to explore AI independently but may not always recognise unsafe or inappropriate content. Parental controls help ensure their interactions are age-appropriate while still allowing learning and creativity.
What age should children use AI?
Children under 13 should only use AI with close supervision. Teens aged 13–17 can use AI safely with parental guidance and safety features enabled. Once they turn 18, most can use AI independently, though families may still choose to discuss safe usage habits.
