Elon Musk said Wednesday he is "not aware of any naked underage images generated by Grok," hours before the California attorney general opened an investigation into xAI's chatbot over the "proliferation of nonconsensual sexually explicit material."
Musk's denial comes as pressure mounts from governments worldwide, from the U.K. and Europe to Malaysia and Indonesia, after users on X began asking Grok to turn images of real women, and in some cases children, into sexualized images without their consent. Copyleaks, an AI detection and content governance platform, estimated roughly one such image was posted every minute on X. A separate sample gathered from January 5 to January 6 found 6,700 per hour over the 24-hour period. (X and xAI are part of the same company.)
"This material…has been used to harass people across the internet," said California Attorney General Rob Bonta in a statement. "I urge xAI to take immediate action to ensure this goes no further."
The AG's office will investigate whether and how xAI violated the law.
A number of laws exist to protect targets of nonconsensual sexual imagery and child sexual abuse material (CSAM). Last year the Take It Down Act was signed into federal law, which criminalizes knowingly distributing nonconsensual intimate images, including deepfakes, and requires platforms like X to remove such content within 48 hours. California also has its own series of laws, signed by Gov. Gavin Newsom in 2024, to crack down on sexually explicit deepfakes.
Grok began fulfilling user requests on X to produce sexualized images of women and children toward the end of the year. The trend appears to have taken off after certain adult-content creators prompted Grok to generate sexualized imagery of themselves as a form of marketing, which then led other users to issue similar prompts. In a number of public cases, including well-known figures like "Stranger Things" actress Millie Bobby Brown, Grok responded to prompts asking it to alter real photos of real women by changing clothing, body positioning, or physical features in overtly sexual ways.
According to some reports, xAI has begun implementing safeguards to address the issue. Grok now requires a premium subscription before responding to certain image-generation requests, and even then the image may not be generated. April Kozen, VP of marketing at Copyleaks, told Trendster that Grok may fulfill a request in a more generic or toned-down way. They added that Grok appears more permissive with adult content creators.
"Overall, these behaviors suggest X is experimenting with multiple mechanisms to reduce or control problematic image generation, though inconsistencies remain," Kozen said.
Neither xAI nor Musk has publicly addressed the problem head-on. Several days after the incidents began, Musk appeared to make light of the issue by asking Grok to generate an image of himself in a bikini. On January 3, X's safety account said the company takes "action against illegal content on X, including [CSAM]," without specifically addressing Grok's apparent lack of safeguards or the creation of sexualized manipulated imagery involving women.
The statement mirrors what Musk posted today, emphasizing illegality and user behavior.
Musk wrote he was "not aware of any naked underage images generated by Grok. Literally zero." That statement does not deny the existence of bikini pics or sexualized edits more broadly.
Michael Goodyear, an associate professor at New York Law School and a former litigator, told Trendster that Musk likely focused narrowly on CSAM because the penalties for creating or distributing synthetic sexualized imagery of children are greater.
"For example, in the United States, the distributor or threatened distributor of CSAM can face up to three years imprisonment under the Take It Down Act, compared to two for nonconsensual adult sexual imagery," Goodyear said.
He added that the "bigger point" is Musk's attempt to draw attention to problematic user content.
"Obviously, Grok does not spontaneously generate images. It does so only in response to user request," Musk wrote in his post. "When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately."
Taken together, the post characterizes these incidents as rare, attributes them to user requests or adversarial prompting, and presents them as technical issues that can be solved through fixes. It stops short of acknowledging any shortcomings in Grok's underlying safety design.
"Regulators may consider, with attention to free speech protections, requiring proactive measures by AI developers to prevent such content," Goodyear said.
Trendster has reached out to xAI to ask how many times it caught instances of nonconsensual sexually manipulated images of women and children, which guardrails specifically changed, and whether the company notified regulators of the issue. Trendster will update this article if the company responds.
The California AG isn't the only regulator trying to hold xAI accountable. Indonesia and Malaysia have both temporarily blocked access to Grok; India has demanded that X make immediate technical and procedural changes to Grok; the European Commission ordered xAI to retain all documents related to its Grok chatbot, a precursor to opening a new investigation; and the U.K.'s online safety watchdog Ofcom opened a formal investigation under the U.K.'s Online Safety Act.
xAI has come under fire for Grok's sexualized imagery before. As AG Bonta pointed out in a statement, Grok includes a "spicy mode" to generate explicit content. In October, an update made it even easier to jailbreak what few safety guidelines there were, resulting in many users creating hardcore pornography with Grok, as well as graphic and violent sexual images.
Many of the more pornographic images Grok has produced have been of AI-generated people, something many might still find ethically dubious but perhaps less harmful to the individuals in the images and videos.
"When AI systems allow the manipulation of real people's images without clear consent, the impact can be immediate and deeply personal," Copyleaks co-founder and CEO Alon Yamin said in a statement emailed to Trendster. "From Sora to Grok, we're seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent misuse."