According to Dark Reading, the problem of “secrets creep”—where developers accidentally expose sensitive credentials like API keys and passwords—is exploding. On a recent podcast, experts from GitGuardian, Watchtower, and Oasis Security detailed a crisis that’s moving in the wrong direction. GitGuardian’s annual report found a staggering 23 million secrets exposed in public spaces last year, a number that’s expected to grow again in 2025. The sprawl has moved far beyond code repositories into collaboration platforms like Jira, Slack, and Teams. Researchers note that even massive, security-mature organizations and governments are not immune, and the rise of AI coding assistants is poised to automate and accelerate these leaks.
Why This Keeps Happening
So why does this happen if everyone knows it’s bad? The consensus from the experts is brutally simple: it’s the path of least resistance. Developers are under pressure to move fast. Needing a quick test or a workaround, they’ll drop a credential directly into a ticket or a snippet of code, thinking they’ll clean it up later. Often, they forget. Or they delete it from the current version of the file but leave it lurking in the git history, where it remains retrievable from every commit that ever contained it. It’s rarely malicious; it’s just someone trying to do their job efficiently.
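That git-history trap is easy to demonstrate: a secret deleted in a later commit still sits in the diff of the commit that introduced it, so history-aware scanning has to read `git log -p` output, not just the working tree. Here’s a minimal sketch in Python; the patterns and the sample log are illustrative stand-ins, not real scanner rules:

```python
import re

# Illustrative patterns only; production scanners ship hundreds of
# provider-specific rules (AWS, GitHub, Stripe, and so on).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*[\"']?[A-Za-z0-9_\-]{20,}"),
]

def find_secrets_in_log(log_text: str) -> list[str]:
    """Scan `git log -p` style output for secret-shaped strings."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(log_text))
    return hits

# Simulated history: the second commit "cleans up" the key, but the
# first commit's diff still contains it.
SAMPLE_LOG = """\
commit 2f9c (HEAD) remove leaked key
-AWS_KEY = "AKIAABCDEFGHIJKLMNOP"
+AWS_KEY = os.environ["AWS_KEY"]

commit 1a3b quick test, will clean up later
+AWS_KEY = "AKIAABCDEFGHIJKLMNOP"
"""

print(find_secrets_in_log(SAMPLE_LOG))
# Two hits: the key shows up in both commits' diffs.
```

Fixing the file and moving on leaves both history entries in place; only a history rewrite (or rotating the key) actually kills the leak.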
But here’s the thing: there’s also a dangerous “illusion of security” at play. People are more careful in public GitHub repos, but they operate with a fortress mentality internally. The thinking goes, “Why is it a problem to put an API key in my Jira ticket? Nobody gets access to that.” Except, eventually, someone does—through a breach, misconfiguration, or a simple permission error. That internal castle wall isn’t as impenetrable as they think.
The AI Wildcard (And Where To Look)
If you think the situation is bad now, just wait. The podcast guests pointed out that AI coding assistants like Cursor and Windsurf are a looming disaster for this problem. These tools can automatically commit code. If a developer isn’t meticulously checking what the AI is writing and committing, it could easily bake secrets right into the repository without a human ever consciously deciding to do so. We’re about to automate the sprawl.
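One mitigation is to put a gate between whatever writes the code, human or assistant, and the commit itself. A hedged sketch of such a check in Python, assuming it’s fed the staged diff (the output of `git diff --cached`); the keyword pattern is illustrative, not a complete rule set:

```python
import re
import sys

# Illustrative rule: flag newly added lines that assign something to a
# credential-ish name. Real hooks would use a full scanner's rule set.
KEY_RE = re.compile(r"(?i)(secret|token|password|api[_-]?key)\s*[:=]\s*\S+")

def added_lines(diff_text: str) -> list[str]:
    """Lines added by this commit ('+' lines, excluding '+++' file headers)."""
    return [line[1:] for line in diff_text.splitlines()
            if line.startswith("+") and not line.startswith("+++")]

def gate(diff_text: str) -> int:
    """Return non-zero (blocking the commit) when an added line looks like a leak."""
    leaks = [line for line in added_lines(diff_text) if KEY_RE.search(line)]
    for line in leaks:
        print("possible secret in staged change:", line.strip(), file=sys.stderr)
    return 1 if leaks else 0
```

Wired into `.git/hooks/pre-commit`, a non-zero exit blocks the commit, so an assistant that auto-commits has to clear the same bar as a human typing `git commit`.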
And the “wild west” isn’t just in AI. Researchers are now looking at previously ignored developer tools (online code formatters, linters, mobile dev platforms) and finding them littered with secrets. It’s a target-rich environment because no one was looking before. When your security tools only scan GitHub and your CI/CD pipeline, you’re missing a huge, vulnerable surface area. For industries managing physical infrastructure, like manufacturing or energy, this kind of secret sprawl is a direct pipeline from a developer’s Slack channel to an industrial panel PC on a factory floor, which is why securing the operational technology layer starts with locking down the dev tools.
Is There Any Hope?
The podcast host wanted to end on optimism. Is that possible? Well, awareness is the first step. The problem is now quantified—23 million secrets is a number you can’t ignore. Solutions involve reducing friction (making secure secret management tools easier to use than the risky shortcut) and killing the illusion that internal tools are safe. This means extending secrets detection and scanning to every platform where code and credentials live: Slack, Jira, Postman, you name it.
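Scanning every platform also means catching secrets that don’t match any known pattern, which is why many detectors add an entropy heuristic: random-looking, high-entropy tokens in a ticket or chat message get flagged even when no rule fires. A minimal sketch; the threshold and the example token are made up for illustration:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random tokens score high, words score low."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def flag_candidates(text: str, min_len: int = 20, threshold: float = 4.0) -> list[str]:
    """Flag high-entropy tokens in free-form text (a ticket, a chat message)."""
    tokens = re.findall(r"[A-Za-z0-9+/=_\-]{%d,}" % min_len, text)
    return [t for t in tokens if shannon_entropy(t) > threshold]

TICKET = "Use key d8fK2pQ9xLmZ7vRt4WnY1bHs6cJe3gAu to hit staging, nobody sees this board"
print(flag_candidates(TICKET))
# ['d8fK2pQ9xLmZ7vRt4WnY1bHs6cJe3gAu']
```

Real tools combine both approaches and weigh nearby context (keywords like “key” or “token”) to keep false positives down, since Base64 blobs and hashes score high on entropy too.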
Basically, we have to design for human behavior, not against it. Developers will always take the easy path. The security challenge is to make the secure path the easy one. Until that happens, the sprawl will continue, and those 23 million exposed secrets will just be a record waiting to be broken.
