AI Chat App Exposes 300 Million Messages from 25 Million Users

February 10, 2026

The popular mobile application “Chat & Ask AI” has inadvertently exposed hundreds of millions of private user conversations.

The app, which boasts over 50 million downloads across Google Play and the Apple App Store, failed to secure its backend database, allowing unauthorized access to sensitive user data.

The leak stemmed from a misconfiguration on the Google Firebase platform, which developers use to build mobile apps. While Firebase is a standard tool, it requires careful setup to ensure security.

In this case, the settings were left in a default state that allowed anyone to designate themselves as an “authenticated” user. This simple loophole granted access to the app’s backend storage.
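The article does not publish the app's actual configuration, but the pattern it describes is a well-known one in Firebase deployments: rules that grant access to any "authenticated" user while anonymous sign-in is enabled, making authentication effectively free for anyone. A hypothetical Realtime Database ruleset of that shape looks like this:

```json
{
  "rules": {
    ".read": "auth != null",
    ".write": "auth != null"
  }
}
```

Because Firebase supports anonymous authentication, any client can obtain a valid `auth` token without credentials, so a rule like `auth != null` does not restrict access in practice. Secure rules scope reads and writes to the owning user, e.g. matching the record's user ID against `auth.uid`.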

The scale of the leak is massive. The security researcher who discovered the exposure reported access to approximately 300 million messages belonging to more than 25 million users.

According to 404 Media, the exposed database contained comprehensive logs of user activity, including:

- Full histories of conversations with the AI.
- Timestamps of when chats occurred.
- Custom names users gave to their AI companions.
- Specific configurations and the type of AI model used (such as ChatGPT, Claude, or Gemini).

The content of these messages highlights the severe privacy implications of the breach.

An analysis of a sample data set comprising 60,000 users and one million messages revealed deeply personal and potentially dangerous inquiries.

Users had asked the AI for instructions on how to manufacture illegal drugs like methamphetamine, how to hack other applications, and, most disturbingly, advice on suicide and writing suicide notes.

“Chat & Ask AI” functions as a “wrapper” app. This means it doesn’t run its own AI brain; instead, it connects users to powerful models from major companies like OpenAI, Google, and Anthropic.

While the underlying AI models (such as ChatGPT) were not compromised, the wrapper app served as a weak link, storing conversations insecurely.
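The wrapper pattern described above can be sketched in a few lines. This is an illustrative sketch, not the app's actual code: `call_model` stands in for an HTTP call to a third-party model API, and `backend_log` stands in for the app's own server-side store, which in this incident was the misconfigured Firebase database.

```python
# Illustrative sketch of a "wrapper" AI app: no model of its own,
# it forwards each user message to a third-party model and persists
# the exchange in its own backend. Names here are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class WrapperApp:
    # Stands in for an API call to a provider like OpenAI or Anthropic.
    call_model: Callable[[str], str]
    # Stands in for the app's backend store; if that store is
    # misconfigured, everything appended here is what leaks.
    backend_log: List[Tuple[str, str]] = field(default_factory=list)

    def chat(self, user_message: str) -> str:
        reply = self.call_model(user_message)
        # Every prompt and reply is persisted server-side.
        self.backend_log.append((user_message, reply))
        return reply

# Stub model so the sketch runs without network access or API keys.
app = WrapperApp(call_model=lambda msg: f"echo: {msg}")
print(app.chat("hello"))       # → echo: hello
print(len(app.backend_log))   # → 1
```

The key point is that the upstream model provider never sees or controls this log: the wrapper's own storage is a separate trust boundary, and securing it is entirely the wrapper developer's responsibility.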

Users are advised to be cautious about the personal information they share with third-party AI tools and to review app permissions and reputations carefully.

