Meta fixes bug that could leak users’ AI prompts and generated content


Meta has fixed a security bug that allowed Meta AI chatbot users to access and view the private prompts and AI-generated responses of other users.

Sandeep Hodkasia, the founder of security testing firm AppSecure, exclusively told Trendster that Meta paid him $10,000 in a bug bounty reward for privately disclosing the bug, which he filed on December 26, 2024.

Meta deployed a fix on January 24, 2025, said Hodkasia, and found no evidence that the bug was maliciously exploited.

Hodkasia told Trendster that he identified the bug after analyzing how Meta AI lets its logged-in users edit their AI prompts to regenerate text and images. He discovered that when a user edits their prompt, Meta's back-end servers assign the prompt and its AI-generated response a unique number. By analyzing the network traffic in his browser while editing an AI prompt, Hodkasia found he could change that unique number, and Meta's servers would return the prompt and AI-generated response of someone else entirely.
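The flaw described here is a textbook insecure direct object reference (IDOR). As a minimal sketch, assuming a hypothetical server-side lookup (this is not Meta's actual code; all names and data here are illustrative), the vulnerable pattern looks like this:

```python
# Hypothetical sketch of an IDOR bug of the kind described in the article:
# the server fetches a prompt record by its numeric ID alone and never
# checks who is asking. All identifiers and data are invented.

PROMPTS = {
    1001: {"owner": "alice", "prompt": "draw a cat", "response": "<image>"},
    1002: {"owner": "bob", "prompt": "summarize my email", "response": "..."},
}

def get_prompt_vulnerable(prompt_id: int, requesting_user: str) -> dict:
    # BUG: requesting_user is ignored, so any logged-in user who edits the
    # ID in their browser's network traffic receives another user's record.
    return PROMPTS[prompt_id]

# Bob retrieves Alice's private prompt simply by changing the ID:
leaked = get_prompt_vulnerable(1001, requesting_user="bob")
```

Because the ID is the only thing gating access, swapping it in a request is all it takes to read another user's data.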

The bug meant that Meta's servers weren't properly checking whether the user requesting the prompt and its response was authorized to see it. Hodkasia said the prompt numbers generated by Meta's servers were "easily guessable," potentially allowing a malicious actor to scrape users' original prompts by rapidly changing prompt numbers with automated tools.
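The two standard mitigations follow directly from that description: verify ownership on every read, and use unguessable identifiers so enumeration is infeasible. A minimal sketch, again with hypothetical names and not Meta's actual fix:

```python
import uuid

# Hypothetical sketch of the two standard IDOR mitigations: (1) check that
# the requester owns the record, and (2) assign random 128-bit IDs instead
# of sequential, "easily guessable" numbers. All names are illustrative.

PROMPTS: dict[str, dict] = {}

def store_prompt(owner: str, prompt: str, response: str) -> str:
    prompt_id = uuid.uuid4().hex  # random, non-sequential identifier
    PROMPTS[prompt_id] = {"owner": owner, "prompt": prompt, "response": response}
    return prompt_id

def get_prompt(prompt_id: str, requesting_user: str) -> dict:
    record = PROMPTS[prompt_id]
    # Authorization check the vulnerable server skipped:
    if record["owner"] != requesting_user:
        raise PermissionError("not authorized to view this prompt")
    return record
```

Note that random IDs alone are not a substitute for the authorization check; they only make bulk scraping impractical, while the ownership check closes the hole itself.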

When reached by Trendster, Meta confirmed that it fixed the bug in January. The company "found no evidence of abuse and rewarded the researcher," Meta spokesperson Ryan Daniels told Trendster.

News of the bug comes at a time when tech giants are scrambling to launch and refine their AI products, despite the many security and privacy risks associated with their use.

Meta AI's stand-alone app, which debuted earlier this year to compete with rival apps like ChatGPT, got off to a rocky start after some users inadvertently shared publicly what they thought were private conversations with the chatbot.

