The Trump administration on Friday laid out a legislative framework for a single national AI policy in the US. The framework would centralize power in Washington by preempting state AI laws, potentially undercutting the current surge of state efforts to regulate the use and development of the technology.

"This framework can only succeed if it is applied uniformly across the United States," reads a White House statement on the framework. "A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race."

The framework outlines seven key objectives that prioritize innovation and scaling AI, and proposes a centralized federal approach that would override stricter state-level rules. It places significant responsibility on parents for issues like child safety, and lays out relatively soft, nonbinding expectations for platform accountability.

For example, it says Congress should require AI companies to implement features that "reduce the risks of sexual exploitation and harm to minors," but doesn't lay out any clear, enforceable requirements.

Trump's framework comes three months after he signed an executive order directing federal agencies to challenge state AI laws. The order gave the Commerce Department 90 days to compile a list of "onerous" state AI laws, potentially putting states' eligibility for federal funds like broadband grants at risk. The agency has yet to publish that list.

The order also directed the administration to work with Congress on a uniform AI law. That vision is coming into focus, and it mirrors Trump's earlier AI strategy, which focused less on guardrails and more on promoting companies' growth.

The new framework proposes a "minimally burdensome national standard," echoing the administration's broader push to "remove outdated or unnecessary barriers to innovation" and accelerate AI adoption across industries. It's a pro-growth, light-touch regulatory approach championed by "accelerationists," one of whom is White House AI czar and venture capitalist David Sacks.
While the framework nods to federalism, the carve-outs for states are relatively narrow, preserving only their authority over general laws like fraud and child protection, zoning, and state use of AI. It draws a hard line against states regulating AI development itself, which it says is an "inherently interstate" issue tied to national security and foreign policy.

The framework also seeks to prevent states from "penaliz[ing] AI developers for a third party's unlawful conduct involving their models," a key liability shield for developers.

Missing from the framework are any gestures toward liability regimes, independent oversight, or enforcement mechanisms for potential novel harms caused by AI. In effect, it would centralize AI policymaking in Washington while narrowing the space for states to act as early regulators of emerging risks.

Critics say states are the laboratories of democracy and have been quicker to pass laws around emerging risks. Notably, New York's RAISE Act and California's SB-53 seek to ensure that large AI companies have, and adhere to, publicly documented safety protocols.

"White House AI czar David Sacks continues to do the bidding of Big Tech at the expense of regular, hardworking Americans," said Brendan Steinhauser, CEO of The Alliance for Secure AI. "This federal AI framework seeks to prevent states from legislating on AI and offers no path to accountability for AI developers for the harms caused by their products."

Many in the AI industry are celebrating this direction because it gives them broader liberty to "innovate" without the threat of regulation.

"This framework is exactly what startups have been asking for: a clear national standard so they can build fast and scale," Teresa Carlson, president of General Catalyst Institute, told Trendster. "Founders shouldn't have to navigate a patchwork of conflicting state AI laws that impede innovation."
Child safety, copyright, and free speech
The framework was issued at a moment when child safety has emerged as a central flashpoint in the debate over AI. Some states have moved aggressively to pass laws aimed at protecting minors and placing more responsibility on tech companies. The administration's proposal points in a different direction, placing greater emphasis on parental control than on platform accountability.

"Parents are best equipped to manage their children's digital environment and upbringing," the framework reads. "The Administration is calling on Congress to give parents the tools to effectively do so, such as account controls to protect their children's privacy and manage their device use."

The framework also says the administration "believes" AI platforms should "implement features to reduce potential sexual exploitation of children and encouragement of self-harm." While it calls on Congress to require such safeguards and affirms that existing laws, including those banning child sexual abuse material, should apply to AI systems, the proposal employs qualifiers like "commercially reasonable" and stops short of laying out clear stipulations.

On copyright, the framework attempts to find a middle ground between protecting creators and allowing AI systems to be trained on existing works, citing the need for "fair use." That language mirrors arguments AI companies have made as they face a growing number of copyright lawsuits over their training data.

The main guardrails Trump's AI framework appears to outline involve ensuring "AI can pursue truth and accuracy without limitation." Specifically, it focuses on preventing government-driven censorship rather than platform moderation itself.

"Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas," the framework reads. It also instructs Congress to give Americans a way to pursue legal redress against government agencies that seek to censor expression on AI platforms or dictate the information an AI platform provides.

The framework comes as Anthropic is suing the government for allegedly infringing on its First Amendment rights after the Department of Defense (DOD) labeled it a supply-chain risk. Anthropic argues that the DOD designated it as such in retaliation for not allowing the military to use its AI products for mass surveillance of Americans or for making targeting and firing decisions in autonomous lethal weapons. Trump has referred to Anthropic and its CEO Dario Amodei as "woke" and a "radical leftist."

The framework's language, which emphasizes protecting "lawful political expression or dissent," appears to build on Trump's earlier executive order targeting "woke AI," which pushed federal agencies to adopt systems deemed ideologically neutral.

It's unclear what qualifies as censorship versus standard content moderation, so such language could make it difficult for regulators to coordinate with platforms on issues like misinformation, election interference, or public safety risks.

Samir Jain, vice president of policy at the Center for Democracy and Technology, pointed out: "[The framework] rightly says that the government shouldn't coerce AI companies to ban or alter content based on 'partisan or ideological agendas,' yet the Administration's 'woke AI' Executive Order this summer does exactly that."