A researcher from AppSecure found a privacy flaw in Meta AI
Meta's AI chatbot had a troubling flaw that could have let users peek at one another's private prompts and AI-generated responses. Luckily, Sandeep Hodkasia, founder of security testing firm AppSecure, spotted it first. He stumbled on the bug while digging into how Meta AI lets logged-in users edit their prompts to regenerate text and images.
What he unearthed was serious. When a user edits a prompt, Meta's servers assign that exchange a unique number. Hodkasia found that by simply changing that number while examining his browser's network traffic, he could pull up another user's prompt and AI-generated response.
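For illustration, this is roughly what that kind of ID manipulation looks like in practice. The endpoint, ID range, and token below are hypothetical stand-ins, not Meta's actual API:

```python
import requests

# Hypothetical endpoint and auth header, for illustration only; this is
# NOT Meta's real API, just the general shape of an IDOR probe.
BASE_URL = "https://ai.example.com/api/prompts"

session = requests.Session()
session.headers["Authorization"] = "Bearer <the tester's own valid token>"

# Suppose the tester legitimately owns prompt 100234. If the server never
# checks ownership, requesting nearby sequential IDs quietly returns other
# users' prompts and responses.
for prompt_id in range(100230, 100240):
    resp = session.get(f"{BASE_URL}/{prompt_id}")
    if resp.status_code == 200:
        data = resp.json()
        print(prompt_id, data.get("prompt"), data.get("response"))
```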
In plain terms, Meta's servers were not checking whether the person requesting a prompt and its response was actually authorized to see it. It didn't help that the unique numbers were easily guessable, meaning an attacker with automated tooling could have cycled through IDs and scraped other users' conversations at scale.
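The fix for this class of bug (an insecure direct object reference, or IDOR) is a server-side ownership check. Here's a minimal sketch using a Flask-style handler and a hypothetical in-memory store; none of it reflects Meta's actual code:

```python
from flask import Flask, jsonify, abort, request

app = Flask(__name__)

# Hypothetical in-memory store mapping prompt IDs to their owners;
# a stand-in for whatever database a real service would use.
PROMPTS = {
    100234: {"owner_id": 42, "prompt": "draw a cat", "response": "<image url>"},
}

def authenticated_user_id() -> int:
    # Placeholder: a real app would derive this from a verified session
    # or token, never from a client-supplied header like this.
    return int(request.headers.get("X-User-Id", -1))

@app.route("/api/prompts/<int:prompt_id>")
def get_prompt(prompt_id: int):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    # The authorization check the vulnerable endpoint skipped: only the
    # prompt's owner may read it, no matter what ID the client supplies.
    if record["owner_id"] != authenticated_user_id():
        abort(403)
    return jsonify(prompt=record["prompt"], response=record["response"])
```

Randomized, non-sequential identifiers would make IDs harder to guess, but that's only defense in depth; the ownership check is the real fix.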
Meta paid Hodkasia a $10,000 bug bounty for privately disclosing the flaw, and a fix was deployed on January 24, 2025. According to Hodkasia and Meta spokesperson Ryan Daniels, there is no evidence the bug was ever exploited maliciously.
The disclosure comes at a time when tech giants are racing to launch and refine their AI products while grappling with the security and privacy risks that come with them.
You may also remember the rocky debut of Meta AI's stand-alone app earlier this year, when some users unintentionally made public conversations with the chatbot that they presumed were private.