On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world's largest AI model developers. Anthropic's endorsement marks a rare and major win for SB 53, at a time when major tech groups like the Consumer Technology Association (CTA) and Chamber for Progress are lobbying against the bill.
"While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," said Anthropic in a blog post. "The question isn't whether we need AI governance; it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former."
If passed, SB 53 would require frontier AI model developers like OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.
Senator Wiener's bill specifically focuses on limiting AI models from contributing to "catastrophic risks," which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 targets the extreme end of AI risk (limiting AI models from being used to provide expert-level assistance in the creation of biological weapons, or from being used in cyberattacks) rather than more near-term concerns like AI deepfakes or sycophancy.
California's Senate approved a prior version of SB 53, but it still needs to hold a final vote on the bill before it can advance to the governor's desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener's last AI safety bill, SB 1047.
Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, which both argue that such efforts could limit America's innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.
One of the most common arguments against AI safety bills is that states should leave the matter up to the federal government. Andreessen Horowitz's head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today's state AI bills risk violating the Constitution's Commerce Clause, which limits state governments from passing laws that reach beyond their borders and impair interstate commerce.
However, Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and can't wait for the federal government to act.
"We have long said we would prefer a federal standard," said Clark. "But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored."
OpenAI's chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he should not pass any AI regulation that would push startups out of California, although the letter did not mention SB 53 by name.
OpenAI's former head of policy research, Miles Brundage, said in a post on X that Lehane's letter was "filled with misleading garbage about SB 53 and AI policy generally." Notably, SB 53 aims to regulate only the world's largest AI companies, specifically those that have generated gross revenue of more than $500 million.
Despite the criticism, policy experts say SB 53 is a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and a former White House AI policy adviser, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said SB 53's drafters have "shown respect for technical reality," as well as a "measure of legislative restraint."
Senator Wiener previously said that SB 53 was heavily influenced by an expert policy panel Governor Newsom convened, co-led by leading Stanford researcher and World Labs co-founder Fei-Fei Li, to advise California on how to regulate AI.
Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies are accountable to no one but themselves, so they sometimes fall behind their self-imposed safety commitments. SB 53 aims to set these requirements in state law, with financial repercussions if an AI lab fails to comply.
Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have previously fought these types of third-party audits in other AI policy battles, arguing that they are overly burdensome.