If you thought Meta’s AI platform was rock-solid just because it’s backed by billions, think again. A cybersecurity expert recently exposed a serious flaw in Meta AI that allowed strangers to peek into private chatbot prompts and replies.
His reward? A cool $10,000 from Meta’s bug bounty program.
The bug was discovered by Sandeep Hodkasia, founder of AppSecure, who reported it back in December 2024. TechCrunch reports that Meta has since patched the issue and found no signs of it being misused – but that doesn’t mean users should let their guard down.
Hodkasia uncovered the vulnerability while analyzing how Meta AI lets users edit their prompts to regenerate responses. During testing, he noticed that Meta's backend assigned each prompt a unique but predictable ID. By intercepting his own traffic and swapping in a different identifier, he could pull up another user's private prompt and the AI's response. Worse, the server never checked whether the requester was actually authorized to access those records.
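In other words, this was a textbook insecure direct object reference (IDOR). As a rough sketch only (the routes, in-memory data store, and header-based "auth" below are hypothetical stand-ins, not Meta's actual code), the difference between the broken behavior and the fix comes down to a single ownership check:

```python
# Minimal sketch of the IDOR pattern described above.
# All names here are illustrative, not Meta's real API.
from flask import Flask, jsonify, abort, request

app = Flask(__name__)

# Stand-in data store with sequential, guessable IDs.
PROMPTS = {
    1001: {"owner": "alice", "text": "Draft my will..."},
    1002: {"owner": "bob",   "text": "A question about my finances..."},
}

def current_user():
    # Stand-in for real authentication: the caller identifies
    # itself with a request header.
    return request.headers.get("X-User", "")

@app.route("/prompts/<int:prompt_id>")
def get_prompt(prompt_id):
    prompt = PROMPTS.get(prompt_id)
    if prompt is None:
        abort(404)
    # VULNERABLE: no ownership check. Any caller can walk the
    # sequential ID space (1001, 1002, ...) and read other
    # users' prompts.
    return jsonify(prompt)

@app.route("/v2/prompts/<int:prompt_id>")
def get_prompt_fixed(prompt_id):
    prompt = PROMPTS.get(prompt_id)
    if prompt is None:
        abort(404)
    # FIX: authorize before returning. The record must belong
    # to the authenticated requester.
    if prompt["owner"] != current_user():
        abort(403)
    return jsonify(prompt)
```

In this toy version, a request to /prompts/1002 made as "alice" happily returns bob's record, while the patched /v2 route refuses it with a 403.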
That’s a major slip-up, especially considering the kinds of things people input into AI – from legal concerns to personal confessions and even sensitive financial questions.
This incident adds to a growing list of privacy complaints aimed at Meta AI. Since Meta launched its standalone app earlier this year, users have accidentally made private interactions public due to confusing sharing settings. Some unwittingly shared deeply personal queries, photos, and audio clips with the world.
Despite the resources Meta has poured into its AI, adoption of the platform has been lukewarm. According to Appfigures, the Meta AI app has seen only around 6.5 million downloads since its debut on April 29.
So, while Meta says all is fine now, this episode is a reminder: no system is immune. If AI can spill your secrets because of one lazy line of code, maybe think twice before treating chatbots like a diary.