|
Meta Patches AI Chatbot Security Flaw That Exposed Users’ Private Prompts
July 15, 2025
Meta recently fixed a significant security vulnerability in its AI chatbot platform that allowed users to access private prompts and AI-generated responses belonging to other users. This flaw raised serious privacy concerns about how user data is handled within Meta’s AI services.
The bug was discovered and privately reported by Sandeep Hodkasia, founder of the security testing firm AppSecure, on December 26, 2024. Meta responded by awarding Hodkasia a $10,000 bug bounty as recognition for responsibly disclosing the issue. After a thorough investigation, Meta deployed a fix on January 24, 2025, and reported no evidence suggesting the flaw was exploited maliciously during the period it was active.
Hodkasia uncovered the vulnerability while analyzing how Meta AI’s system handles prompt editing. When users modify prompts to regenerate text or images, the backend assigns a unique identifier to each prompt and its associated AI-generated output. By inspecting his browser’s network traffic and manipulating this identifier, Hodkasia demonstrated that he could retrieve prompts and responses tied to other users’ accounts. The root cause was that Meta’s servers failed to verify whether the requesting user had the proper authorization to view the specific prompt data. Additionally, these unique prompt identifiers were “easily guessable,” meaning that an attacker could automate sequential requests and scrape large amounts of private user content.
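The two failures described above, a missing server-side ownership check and guessable prompt identifiers, can be illustrated with a minimal sketch. All names here are hypothetical; Meta's actual backend code is not public, and this simply shows the general pattern of the fix.

```python
import secrets

# Hypothetical in-memory store mapping prompt IDs to their owner and content.
_prompts = {}

def save_prompt(owner_id: str, content: str) -> str:
    # Use a random, non-guessable identifier instead of a sequential number,
    # so an attacker cannot enumerate other users' prompts by incrementing IDs.
    prompt_id = secrets.token_urlsafe(16)
    _prompts[prompt_id] = {"owner": owner_id, "content": content}
    return prompt_id

def get_prompt(requesting_user_id: str, prompt_id: str) -> str:
    record = _prompts.get(prompt_id)
    if record is None:
        raise KeyError("prompt not found")
    # The check reportedly missing in the bug: verify the requester actually
    # owns the prompt before returning it, rather than trusting the ID alone.
    if record["owner"] != requesting_user_id:
        raise PermissionError("not authorized to view this prompt")
    return record["content"]
```

Either fix alone narrows the attack: random identifiers make scraping impractical, and the ownership check blocks access even when an identifier leaks. Defense in depth applies both.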
Meta confirmed the issue and the timely resolution but emphasized that it had found no evidence of abuse linked to this vulnerability. Ryan Daniels, a Meta spokesperson, told TechCrunch, “We fixed the bug in January, found no indication of malicious activity, and rewarded the researcher for their responsible disclosure.”
This security lapse comes amid intense competition among tech giants racing to develop and roll out AI-powered products, a pace that often strains their ability to safeguard user privacy and security. Meta AI's standalone app, launched earlier in 2025 as a competitor to popular AI platforms like ChatGPT, faced its own early difficulties, including users unintentionally sharing conversations they believed to be private. The episode highlights the delicate balance between innovation and security in AI development.
Overall, the incident underscores the critical importance of robust access controls and security measures as AI chatbots become increasingly embedded in everyday digital experiences.