AI Health Chatbots on Android Raise Fresh Data Privacy Alarms

Millions of Android users may unknowingly risk exposing sensitive health conversations through vulnerable AI chatbot apps lacking proper security updates.

Android apps powered by AI chatbots are under scrutiny after a new report flagged thousands of security vulnerabilities that could expose highly personal user data — including private health conversations.

According to a report published by Bleeping Computer, citing findings from cybersecurity firm Oversecured, several AI-based health and mental wellness apps on Android contain vulnerabilities that may allow attackers to access sensitive user information. These apps, collectively downloaded more than 15 million times, are said to have “1000s of vulnerabilities” that could potentially be exploited by hackers.

The findings once again put the spotlight on Google’s ongoing battle against unsafe app behaviour on the Play Store. With AI chatbots becoming increasingly popular for mental health support, therapy guidance, and general wellness advice, millions of users are sharing deeply personal information through these platforms — often without fully understanding how securely that data is handled.

The report suggests that attackers could exploit weaknesses in these apps to steal login credentials, intercept private chat histories, or even identify and target specific users. The risks are particularly concerning given the nature of the data involved. While some may argue that health-related data is less critical than financial information, experts warn that in today’s data-driven ecosystem, any personal information can be misused.

Health conversations, emotional disclosures, therapy notes, and behavioural patterns are valuable not only for cybercriminals but also as training data for AI systems. Without strong encryption, secure storage practices, and regular security patches, such information could be vulnerable to unauthorised access.

Another worrying detail highlighted in the report is that many of these apps have not received timely updates. Some reportedly have not been updated since late 2024 or earlier, leaving known security gaps unpatched. More concerning still, several of the developers behind these vulnerable apps have yet to publicly acknowledge the reported flaws or say whether fixes are underway.

Security researchers warn that users may not receive any visible alerts if their data is compromised. In many cases, vulnerabilities can be exploited silently, without triggering system warnings from Android or Google Play Protect. This makes it difficult for everyday users to detect whether their information has been accessed.

The rapid rise of AI-driven health support tools has created new convenience for users seeking private and accessible guidance. However, this incident underscores a growing reality: as AI applications handle more intimate aspects of daily life, the responsibility for safeguarding user data becomes even more critical.

Until clearer assurances and updates are provided, users may need to exercise extra caution when choosing AI health apps. Checking update histories, reviewing app permissions, and limiting the amount of sensitive information shared online can serve as immediate protective steps.

As AI adoption accelerates, data security cannot be an afterthought — especially when it involves something as personal as health.


