According to TechSpot, researchers at the University of Vienna used WhatsApp's web application interface to systematically enumerate more than 3.5 billion active WhatsApp accounts worldwide. Roughly 57% of the enumerated accounts had visible profile photos, and about 29% exposed public "about" text. Exposure rates were even higher in countries like India and Brazil: 62% of nearly 750 million Indian accounts displayed public images. The researchers processed over 100 million verification checks per hour without any hacking, using only WhatsApp's intended functionality. Meta implemented stronger controls only roughly six months after being notified, leaving billions of user records potentially accessible in the meantime.
How the scraping worked
Here’s the thing that’s really concerning: the researchers didn’t need to break anything. They simply automated what any user can do manually, checking whether phone numbers are registered on WhatsApp. The web interface lacked meaningful rate-limiting, so they could push through over 100 million numbers per hour without hitting any barriers. In effect, they treated WhatsApp’s contact discovery system like a giant phone book and read through the entire thing systematically. That this scale of data harvesting was possible using only documented features should worry anyone who values privacy.
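To make the scale concrete, here is a back-of-the-envelope sketch using only figures reported in the coverage (the 100-million-checks-per-hour throughput, the 750 million Indian accounts, the 3.5 billion total); everything else is illustrative arithmetic, not a description of the researchers' actual tooling.

```python
# Back-of-the-envelope: why a missing rate limit matters at this throughput.
# CHECKS_PER_HOUR comes from the reported research figures.

CHECKS_PER_HOUR = 100_000_000  # verification checks per hour, as reported

def hours_to_sweep(candidates: int) -> float:
    """Hours needed to test `candidates` phone numbers at the reported rate."""
    return candidates / CHECKS_PER_HOUR

# A full 10-digit numbering plan has up to 10 billion candidates, but
# realistic mobile prefixes shrink the search space dramatically.
print(hours_to_sweep(750_000_000))    # ~7.5 hours to cover 750M numbers
print(hours_to_sweep(3_500_000_000))  # ~35 hours to re-verify all 3.5B accounts
```

At those rates, sweeping an entire country's plausible mobile number space is an overnight job, which is why the lack of throttling, not any exotic exploit, is the core of the finding.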
Meta’s slow response
Now, this isn’t even the first time this issue has come up. A similar vulnerability was documented back in 2017, and Meta’s response then was basically “users should adjust their privacy settings.” Six months to implement basic rate-limiting after being handed a dataset of 3.5 billion accounts? That’s an eternity in security terms. And their public statement focusing on how “no messages were exposed” feels like missing the point entirely. When you can determine who uses WhatsApp in countries where it’s banned, or build massive targeting databases for scammers, that’s already a serious problem.
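For readers wondering what "basic rate-limiting" actually involves, a token bucket is the textbook mechanism: each client gets a refilling allowance of requests, with a bounded burst. This is a minimal sketch of the general technique, not a claim about how WhatsApp's servers implement it.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: ~`rate` requests/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum stored tokens (burst size)
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refuse the request otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 5 lookups/sec with a burst of 10: a hammering client is cut off fast.
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(20)]
print(results.count(True))  # roughly the burst size; the rest are refused
```

Even a scheme this simple, applied per account or per IP, would have turned a 35-hour enumeration into a multi-year one for a single client.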
The real privacy risk
Look, the scary part isn’t just that someone could see your profile picture. It’s about what happens when you combine this data at scale. Think about it: hostile governments could identify WhatsApp users in regions where the app is banned. Scammers could build highly targeted lists. And the researchers found millions of accounts in China and Myanmar – places where simply having WhatsApp installed has led to government scrutiny. The research paper shows how even “public” data becomes dangerous when you can harvest it systematically across billions of users.
Fundamental design flaw
So what’s the solution? Meta is testing usernames as an alternative to phone-number-based discovery, which would help, but the core issue is that convenience often trumps security in mass-market apps. WhatsApp’s entire growth strategy relied on making it dead simple to find contacts – enter a phone number and boom, you’re connected. That same simplicity creates a massive attack surface. The researchers argue that rate-limiting alone can’t fully solve this, and they’re probably right: when you design systems for billions of users, you have to assume someone will try to abuse them at scale. The fact that it took academic research to prompt real changes suggests we’re still not taking these systemic risks seriously enough.
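The researchers' point that rate-limiting alone can't fully solve this is easy to check with arithmetic: per-client limits just push an attacker toward using more clients. The limit, deadline, and account counts below are illustrative assumptions, not measured values.

```python
# Why per-client rate limits alone fall short: attackers add clients.
# All inputs here are illustrative assumptions for the arithmetic.

PER_CLIENT_LIMIT = 10          # assumed lookups per hour allowed per account
TARGET_SPACE = 3_500_000_000   # accounts the researchers enumerated
DEADLINE_HOURS = 24 * 30       # a patient attacker happy to take a month

clients_needed = TARGET_SPACE / (PER_CLIENT_LIMIT * DEADLINE_HOURS)
print(f"{clients_needed:,.0f} throwaway accounts suffice")
```

A few hundred thousand disposable accounts is well within reach of organized scraping operations, which is why structural fixes like username-based discovery matter alongside throttling.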
