Google’s AI chatbot, Gemini, has been labelled “high risk” for children and teenagers in a new safety report by Common Sense Media, a nonprofit focused on kids’ digital safety.
The study reveals surprising safety gaps in the popular AI tool, raising alarms about young users' exposure to inappropriate content and risks to their mental health.
Google Gemini: Not built for kids
The report found that both the Gemini Under 13 and Teen Experience tiers are essentially the adult AI chatbot with safety filters layered on top, rather than products designed from the ground up for children.
Robbie Torney, Director of AI Programs at Common Sense Media, said, “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach.”
Despite filters intended to block harmful content, Gemini sometimes shares material related to sex, drugs, alcohol, or unsafe mental health advice. Worryingly, it may fail to recognise severe mental health symptoms in young users, leaving them vulnerable to risky information.
Risks linked to mental health and safety
The evaluation arrives amid reports linking AI chatbots, including Gemini, to incidents of suicide and self-harm among adolescents. Even though Google's model identifies itself to young users as a computer, not a friend, the technology can still produce harmful or misleading advice.
Common Sense Media explicitly advises against any AI chatbot use by children under five, recommends that children aged six to twelve use them only under parental supervision, and cautions that no one under 18 should rely on AI chatbots for emotional support or mental health guidance.
Calls for stronger protections and transparency
Experts criticise Google's safety disclosures as sparse and warn that the wider AI industry risks lowering safety standards. Parents are urged to set boundaries and monitor children's access, since current safeguards do not fully address these risks.
Rumours that Apple plans to use Gemini as the AI backend for Siri next year have heightened these concerns, since such a move could expose far more teenagers to the chatbot unless stricter guardrails are implemented first.
The research presses tech companies to rethink how AI products are adapted for younger users, emphasising safety designed for children from the outset rather than adult systems retrofitted for them.
As artificial intelligence develops at pace, experts and parents alike are calling for stricter safeguards to protect kids and teenagers from its adverse effects.