On Thursday, weeks after launching its most powerful AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on the details, experts say, making it difficult to determine which risks the model might pose.
Technical reports provide useful, and at times unflattering, information that companies don't always widely advertise about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.
Google takes a different safety reporting approach than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from the "experimental" stage. The company also doesn't include findings from all of its "dangerous capability" evaluations in these write-ups; it reserves those for a separate audit.
Still, several experts Trendster spoke with were disappointed by the sparsity of the Gemini 2.5 Pro report, which they noted doesn't mention Google's Frontier Safety Framework (FSF). Google introduced the FSF last year in what it described as an effort to identify future AI capabilities that could cause "severe harm."
"This [report] is very sparse, contains minimal information, and came out weeks after the model was already made available to the public," Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told Trendster. "It's impossible to verify if Google is living up to its public commitments and thus impossible to assess the safety and security of their models."
Thomas Woodside, co-founder of the Secure AI Project, said that while he's glad Google released a report for Gemini 2.5 Pro, he's not convinced of the company's commitment to delivering timely supplemental safety evaluations. Woodside pointed out that the last time Google published the results of dangerous capability tests was in June 2024, for a model announced in February that same year.
Not inspiring much confidence, Google hasn't made available a report for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told Trendster a report for Flash is "coming soon."
"I hope this is a promise from Google to start publishing more frequent updates," Woodside told Trendster. "Those updates should include the results of evaluations for models that haven't been publicly deployed yet, since those models could also pose serious risks."
Google may have been one of the first AI labs to propose standardized reports for models, but it's not the only one that's been accused of underdelivering on transparency lately. Meta released a similarly skimpy safety evaluation of its new Llama 4 open models, and OpenAI opted not to publish any report for its GPT-4.1 series.
Hanging over Google's head are assurances the tech giant made to regulators to maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all "significant" public AI models "within scope." The company followed up that promise with similar commitments to other countries, pledging to "provide public transparency" around AI products.
Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a "race to the bottom" on AI safety.
"Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months to days, this meager documentation for Google's top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market," he told Trendster.
Google has said in statements that, while not detailed in its technical reports, it conducts safety testing and "adversarial red teaming" for models ahead of release.