In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies.
The 41-page interim report released on Tuesday comes from the Joint California Policy Working Group on Frontier AI Models, an effort organized by Governor Gavin Newsom following his veto of California's controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year the need for a more extensive assessment of AI risks to inform legislators.
In the report, Li, along with co-authors UC Berkeley College of Computing Dean Jennifer Chayes and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates like Turing Award winner Yoshua Bengio as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica.
According to the report, the novel risks posed by AI systems may necessitate laws that would force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates for increased standards around third-party evaluations of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.
Li et al. write that there is an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or lead to other "extreme" threats. They also argue, however, that AI policy should not only address current risks, but also anticipate future consequences that might occur without sufficient safeguards.
"For example, we do not need to observe a nuclear weapon [exploding] to predict reliably that it could and would cause extensive harm," the report states. "If those who speculate about the most extreme risks are right — and we are uncertain if they will be — then the stakes and costs for inaction on frontier AI at this current moment are extremely high."
The report recommends a two-pronged strategy to boost transparency around AI model development: trust but verify. AI model developers and their employees should be provided avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit testing claims for third-party verification.
While the report, the final version of which is due out in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policymaking debate.
Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on "urgent conversations around AI governance we began in the legislature [in 2024]."
The report appears to align with several elements of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taking a broader view, it seems to be a much-needed win for AI safety advocates, whose agenda has lost ground over the past year.