Common Sense Media, a kids-safety-focused nonprofit offering ratings and reviews of media and technology, released its risk assessment of Google's Gemini AI products on Friday. While the organization found that Google's AI clearly told kids it was a computer, not a friend (something that's associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals), it did suggest that there was room for improvement across several other fronts.
Notably, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult versions of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to truly be safer for kids, they should be built with child safety in mind from the ground up.
For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children, which they may not be ready for, including information related to sex, drugs, and alcohol, as well as unsafe mental health advice.
The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted with ChatGPT for months about his plans and successfully bypassed the chatbot's safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user's suicide.
In addition, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns somehow.
Common Sense also said that Gemini's products for kids and teens overlooked how younger users need different guidance and information than older ones. As a result, both tiers were labeled "High Risk" in the overall rating, despite the filters added for safety.
"Gemini gets some basics right, but it stumbles on the details," Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement about the new assessment shared with Trendster. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults," Torney added.
Google pushed back against the assessment, while noting that its safety features are improving.
The company told Trendster it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams its models and consults with outside experts to improve its protections. However, it also admitted that some of Gemini's responses weren't working as intended, so it added additional safeguards to address those concerns.
The company pointed out (as Common Sense had also noted) that it does have safeguards to prevent its models from engaging in conversations that could give the appearance of real relationships. Plus, Google suggested that Common Sense's report seemed to have referenced features that weren't available to users under 18, but it didn't have access to the questions the organization used in its tests to be sure.
Common Sense Media has previously conducted other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were "unacceptable," meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled "moderate," and Claude (aimed at users 18 and up) was found to be a minimal risk.