Elon Musk’s X is the latest social network to roll out a feature that labels edited images as “manipulated media” — if a post by Elon Musk is to be believed. But the company has not clarified how it will make this determination, or whether it includes images that were edited using traditional tools, like Adobe’s Photoshop.
So far, the only details on the new feature come from a cryptic X post from Elon Musk saying, “Edited visuals warning,” as he reshared an announcement of a new X feature made by the anonymous X account DogeDesigner. That account is often used as a proxy for introducing new X features, as Musk will repost from it to share news.
Still, details on the new system are thin. DogeDesigner’s post claimed X’s new feature could make it “harder for legacy media groups to spread misleading clips or pictures.” It also claimed the feature is new to X.
Before it was acquired and renamed X, the company known as Twitter had labeled tweets containing manipulated, deceptively altered, or fabricated media as an alternative to removing them. Its policy wasn’t limited to AI but included things like “selective editing or cropping or slowing down or overdubbing, or manipulation of subtitles,” the site’s integrity head, Yoel Roth, said in 2020.
It’s unclear if X is adopting the same rules or has made any significant changes to tackle AI. Its help documentation currently says there’s a policy against sharing inauthentic media, but it’s rarely enforced, as the recent deepfake debacle of users sharing non-consensual nude images showed. In addition, even the White House now shares manipulated images.
Calling something “manipulated media” or an “AI image” can be nuanced.
Given that X is a playground for political propaganda, both domestic and foreign, some understanding of how the company determines what’s “edited,” or perhaps AI-generated or AI-manipulated, should be documented. In addition, users should know whether or not there’s any sort of dispute process beyond X’s crowdsourced Community Notes.
As Meta discovered when it launched AI image labeling in 2024, it’s easy for detection systems to go awry. In its case, Meta was found to be incorrectly tagging real photos with its “Made with AI” label, even though they had not been created using generative AI.
This happened because AI features are increasingly being integrated into creative tools used by photographers and graphic artists. (Apple’s new Creator Studio suite, launching today, is one recent example.)
As it turned out, this confused Meta’s identification tools. For instance, Adobe’s cropping tool was flattening images before saving them as a JPEG, triggering Meta’s AI detector. In another example, Adobe’s Generative AI Fill, which is used to remove objects, like wrinkles in a shirt or an unwanted reflection, was also causing images to be labeled as “Made with AI,” when they had only been edited with AI tools.
Ultimately, Meta updated its label to say “AI info,” so as not to outright label images as “Made with AI” when they had not been.
Today, there’s a standards-setting body for verifying the authenticity and content provenance of digital content, known as the C2PA (Coalition for Content Provenance and Authenticity). There are also related efforts like the CAI, or Content Authenticity Initiative, and Project Origin, focused on adding tamper-evident provenance metadata to media content.
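To give a sense of how this provenance metadata travels with a file: for JPEGs, the C2PA spec embeds its signed manifest in JUMBF boxes carried inside APP11 marker segments. Below is a minimal, hedged sketch in Python (standard library only) that walks a JPEG’s marker segments and checks whether an APP11 segment appears to carry a C2PA manifest; real verification would use a full C2PA library to validate the manifest’s signatures, and the `has_c2pa_manifest` helper name is our own.

```python
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristic check: does this JPEG appear to embed a C2PA manifest?

    C2PA manifests for JPEG are carried in JUMBF boxes inside APP11
    (0xFFEB) marker segments. This walks the top-level JPEG marker
    structure and looks for the 'c2pa' label inside any APP11 segment.
    It does NOT validate the manifest or its signatures.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:  # lost sync with marker structure
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start-of-scan: metadata segments are done
            break
        # Segment length is big-endian and includes its own two bytes.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 + C2PA label
            return True
        i += 2 + length
    return False
```

A tool like Google Photos can make this check cheaply before deciding whether deeper manifest validation (and a “how this was made” label) is worth attempting.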
Presumably, X’s implementation would abide by some sort of known process for identifying AI content, but X’s owner, Elon Musk, didn’t say what that is. Nor did he clarify whether he’s talking specifically about AI images, or just anything that isn’t a photo uploaded to X directly from your smartphone’s camera. It’s even unclear whether the feature is brand-new, as DogeDesigner claims.
X isn’t the only outlet grappling with manipulated media. In addition to Meta, TikTok has also been labeling AI content. Streaming services like Deezer and Spotify are scaling efforts to identify and label AI music as well. Google Photos is using C2PA to indicate how images on its platform were made. Microsoft, the BBC, Adobe, Arm, Intel, Sony, OpenAI, and others are on the C2PA’s steering committee, while many more companies have joined as members.
X is not currently listed among the members, though we’ve reached out to C2PA to see if that has recently changed. X doesn’t typically respond to requests for comment, but we asked anyway.





