346K Medical Records And Passports Compromised In AI Chatbot ‘WotNot’ Security Failure
A recent data breach involving India-based AI startup WotNot left over 346,000 personal files exposed online, putting customers’ sensitive data at risk. Cybersecurity researchers at Cybernews discovered the exposure in August during a “routine investigation using OSINT methods”: a misconfigured Google Cloud Storage bucket containing the files was accessible to anyone online without authorization.
The leaked data included passports and national IDs, detailed medical records including diagnoses and test results, resumes containing employment histories and contact information, and other files such as travel itineraries and railway tickets. The data, originating from WotNot’s 3,000-strong customer base, poses a serious risk of identity theft, fraud, and phishing schemes.
WotNot’s Response
WotNot, which provides chatbot development services to healthcare, finance, and education industries, attributed the breach to a misstep in cloud storage policies. The exposed bucket was reportedly used by users of their free-tier plan.
“The cause for the breach was that the cloud storage bucket policies were modified to accommodate a specific use case,” WotNot told Cybernews. “However, we regretfully missed thoroughly verifying its accessibility, which inadvertently left the data exposed.”
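A policy change like the one WotNot describes typically means an IAM binding was added that grants access to the public principals `allUsers` or `allAuthenticatedUsers`. As a minimal sketch (not WotNot’s actual configuration), the check below scans IAM bindings in the dictionary shape used by Google Cloud IAM policies and flags any role granted to a public principal:

```python
# Sketch: flag IAM policy bindings that make a Cloud Storage bucket
# world-accessible. The bindings format mirrors Google Cloud IAM
# policy bindings; the example data below is hypothetical.

PUBLIC_PRINCIPALS = {"allUsers", "allAuthenticatedUsers"}

def find_public_bindings(bindings):
    """Return (role, principal) pairs that expose the bucket publicly."""
    exposed = []
    for binding in bindings:
        for member in binding.get("members", []):
            if member in PUBLIC_PRINCIPALS:
                exposed.append((binding["role"], member))
    return exposed

if __name__ == "__main__":
    # Hypothetical policy resembling a misconfigured bucket
    bindings = [
        {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
        {"role": "roles/storage.admin", "members": ["user:ops@example.com"]},
    ]
    for role, member in find_public_bindings(bindings):
        print(f"PUBLIC ACCESS: {member} has {role}")
```

Running a check like this after any bucket-policy change, or enabling Google Cloud’s public access prevention setting, is exactly the verification step WotNot says it missed.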
Third Parties and Shadow IT
The company noted that its enterprise customers operate on private instances with stricter security protocols. It also claimed to recommend that clients delete sensitive files after transferring them to their own systems—a practice not strictly enforced. The incident highlights the risks of incorporating third-party vendors into the AI ecosystem. With chatbots collecting sensitive user data, any weak link in the supply chain can lead to catastrophic breaches.
According to Cybernews, AI services introduce a new shadow IT resource, which is outside the organization’s direct control. “In WotNot’s case, sensitive information that originated from their business clients ended up exposed,” Cybernews researchers explained, “showing how one security lapse at a single vendor can compromise data from multiple companies and thousands of individuals downstream.”
Experts advise users to think twice before sharing personal information with AI chatbots, especially on platforms that may involve multiple vendors. Businesses are urged to exhaustively vet their partners’ security policies before going into business with them.