California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making it the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions.
The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies legally accountable, from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika, if their chatbots fail to meet the law's standards.
SB 243 was introduced in January by state senators Steve Padilla and Josh Becker, and gained momentum after the death of teenager Adam Raine, who died by suicide after a long series of suicidal conversations with OpenAI's ChatGPT. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children. More recently, a Colorado family filed suit against role-playing startup Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized conversations with the company's chatbots.
"Emerging technology like chatbots and social media can inspire, educate, and connect, but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in a statement. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly, protecting our kids every step of the way. Our children's safety is not for sale."
SB 243 goes into effect January 1, 2026, and requires companies to implement certain features such as age verification and warnings regarding social media and companion chatbots. The law also imposes stronger penalties on those who profit from illegal deepfakes, including up to $250,000 per offense. Companies must additionally establish protocols to address suicide and self-harm, which will be shared with the state's Department of Public Health along with statistics on how the service provided users with crisis center prevention notifications.
Per the bill's language, platforms must also make clear that any interactions are artificially generated, and chatbots must not represent themselves as healthcare professionals. Companies are required to offer break reminders to minors and to prevent them from viewing sexually explicit images generated by the chatbot.
Some companies have already begun to implement safeguards aimed at children. For example, OpenAI recently started rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI has said that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized.
Senator Padilla told Trendster the bill was "a step in the right direction" toward putting guardrails in place on "an incredibly powerful technology."
"We have to move quickly to not miss windows of opportunity before they disappear," Padilla said. "I hope that other states will see the risk. I think many do. I think this is a conversation happening all over the country, and I hope people will take action. Certainly the federal government has not, and I think we have an obligation here to protect the most vulnerable people among us."
SB 243 is the second significant AI regulation to come out of California in recent weeks. On September 29th, Governor Newsom signed SB 53 into law, establishing new transparency requirements on large AI companies. The bill mandates that large AI labs, like OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about safety protocols. It also provides whistleblower protections for employees at those companies.
Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or fully ban the use of AI chatbots as a substitute for licensed mental health care.
Trendster has reached out to Character AI, Meta, OpenAI, and Replika for comment.
This article has been updated with comment from Senator Padilla.