The tech world’s nonconsensual, sexualized deepfake problem is now bigger than just X.
In a letter to the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok, several U.S. senators are asking the companies to provide proof that they have “robust protections and policies” in place and to explain how they plan to curb the rise of sexualized deepfakes on their platforms.
The senators also demanded that the companies preserve all documents and data relating to the creation, detection, moderation, and monetization of sexualized, AI-generated images, as well as any related policies.
The letter comes hours after X said it updated Grok to bar it from making edits of real people in revealing clothing and restricted image creation and edits via Grok to paying subscribers. (X and xAI are part of the same company.)
Pointing to media reports about how easily and often Grok generated sexualized and nude images of women and children, the senators noted that platforms’ guardrails to prevent users from posting nonconsensual, sexualized imagery may not be enough.
“We recognize that many companies maintain policies against non-consensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as seen in the examples above, users are finding ways around these guardrails. Or these guardrails are failing,” the letter reads.
Grok, and by extension X, have been heavily criticized for enabling this trend, but other platforms are not immune.
Deepfakes first gained popularity on Reddit, when a page featuring synthetic porn videos of celebrities went viral before the platform took it down in 2018. Sexualized deepfakes targeting celebrities and politicians have multiplied on TikTok and YouTube, though they often originate elsewhere.
Meta’s Oversight Board last year called out two cases of explicit AI images of female public figures, and the platform has had nudify apps buying ads on its services, though it did later sue a company called CrushAI. There have been multiple reports of children spreading deepfakes of peers on Snapchat. And Telegram, which isn’t included on the senators’ list, has also become notorious for hosting bots built to undress photos of women.
In response to the letter, X pointed to its announcement regarding its update to Grok.
“We don’t and won’t allow any non-consensual intimate media (NCIM) on Reddit, don’t offer any tools capable of making it, and take proactive measures to find and remove it,” a Reddit spokesperson said in an emailed statement. “Reddit strictly prohibits NCIM, including depictions that have been faked or AI-generated. We also prohibit soliciting this content from others, sharing links to ‘nudify’ apps, or discussing how to create this content on other platforms,” the spokesperson added.
Alphabet, Snap, TikTok, and Meta did not immediately respond to requests for comment.
The letter demands the companies provide:
- Policy definitions of “deepfake” content, “non-consensual intimate imagery,” or similar terms.
- Descriptions of the companies’ policies and enforcement approach for nonconsensual AI deepfakes of people’s bodies, non-nude pictures, altered clothing, and “digital undressing.”
- Descriptions of current content policies addressing edited media and explicit content, as well as internal guidance provided to moderators.
- How current policies govern AI tools and image generators as they relate to suggestive or intimate content.
- What filters, guardrails, or measures have been implemented to prevent the generation and distribution of deepfakes.
- Which mechanisms the companies use to identify deepfake content and prevent it from being re-uploaded.
- How they prevent users from profiting from such content.
- How the platforms prevent themselves from monetizing nonconsensual AI-generated content.
- How the companies’ terms of service allow them to ban or suspend users who post deepfakes.
- What the companies do to notify victims of nonconsensual sexual deepfakes.
The letter is signed by Senators Lisa Blunt Rochester (D-Del.), Tammy Baldwin (D-Wis.), Richard Blumenthal (D-Conn.), Kirsten Gillibrand (D-NY), Mark Kelly (D-Ariz.), Ben Ray Luján (D-NM), Brian Schatz (D-Hawaii), and Adam Schiff (D-Calif.).
The move comes just a day after xAI owner Elon Musk said that he was “not aware of any naked underage images generated by Grok.” Later on Wednesday, California’s attorney general opened an investigation into xAI’s chatbot, following mounting pressure from governments around the world incensed by the lack of guardrails around Grok that allowed this to happen.
xAI has maintained that it takes action to remove “illegal content on X, including [CSAM] and non-consensual nudity,” though neither the company nor Musk has addressed the fact that Grok was allowed to generate such content in the first place.
The problem isn’t limited to nonconsensual manipulated sexualized imagery, either. While not all AI-based image generation and editing services let users “undress” people, they do let anyone easily generate deepfakes. To pick a few examples, OpenAI’s Sora 2 reportedly allowed users to generate explicit videos featuring children; Google’s Nano Banana seemingly generated an image showing Charlie Kirk being shot; and racist videos made with Google’s AI video model are garnering millions of views on social media.
The issue grows even more complex when Chinese image and video generators come into the picture. Many Chinese tech companies and apps, especially those linked to ByteDance, offer easy ways to edit faces, voices, and videos, and those outputs have spread to Western social platforms. China has stronger synthetic-content labeling requirements that don’t exist in the U.S. at the federal level, where people instead rely on fragmented and dubiously enforced policies from the platforms themselves.
U.S. lawmakers have already passed some legislation seeking to rein in deepfake pornography, but the impact has been limited. The Take It Down Act, which became federal law in May, is meant to criminalize the creation and dissemination of nonconsensual, sexualized imagery. But various provisions in the law make it difficult to hold image-generating platforms accountable, as they focus much of the scrutiny on individual users instead.
Meanwhile, a number of states are trying to take matters into their own hands to protect consumers and elections. This week, New York Governor Kathy Hochul proposed laws that would require AI-generated content to be labeled as such and would ban nonconsensual deepfakes in specified periods leading up to elections, including depictions of opposition candidates.