For years, the cybersecurity conversation around human error centered on stolen credentials, misconfigurations, and social engineering.
But in today’s AI-driven development landscape, human error hasn’t disappeared—it’s just moved upstream.
And developers are now at the center of that risk.
For two years, one statistic has defined the conversation:
74% of breaches involve the human element, according to the Verizon Data Breach Investigations Report [2023, Verizon DBIR].
Historically, that meant phishing clicks, password reuse, or unsecured S3 buckets.
But with generative AI accelerating software creation, that risk has shifted toward the people writing the code.
AI copilots have turned software authorship into a rapid-fire process—but one where context is often missing.
Who wrote this? What tool did they use? Was it reviewed? Was it secure?
These are the new questions defining software risk.
And the answers live with the developer.
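To make those questions concrete, here is a minimal, hypothetical sketch of a per-change authorship record that could answer them. The shape and field names (ai_tool, reviewed_by, security_checks) are illustrative assumptions, not any vendor's schema.

```python
# Hypothetical sketch of a per-change authorship record; field names are
# illustrative assumptions, not a real product schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ChangeProvenance:
    commit_sha: str                     # which change this record describes
    author: str                         # who wrote it
    ai_tool: Optional[str] = None       # which assistant helped, if any
    reviewed_by: list[str] = field(default_factory=list)      # who reviewed it
    security_checks: list[str] = field(default_factory=list)  # which scans ran
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def has_review(self) -> bool:
        """Was this change reviewed by at least one other person?"""
        return bool(self.reviewed_by)

    def is_ai_assisted(self) -> bool:
        """Did an AI tool contribute to this change?"""
        return self.ai_tool is not None
```

Whether such a record lives in commit metadata, a review system, or a separate store is an implementation choice; the point is that the answers to the questions above become queryable rather than tribal knowledge.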
Developers now sit at the convergence of human decision-making and AI automation.
They are no longer just executing logic—they are curating, interpreting, and deploying AI-generated code at scale.
But security teams continue to focus on artifacts—code, configs, commits—while overlooking the authors behind them. That’s where risk begins:
This isn’t an edge case—it’s the new normal of software creation.
Despite the critical role developers play, their security posture is rarely tracked, assessed, or managed.
Why?
Because the system isn’t designed for them.
Developers are incentivized to ship—not secure.
Security teams are trained to analyze artifacts—not authors.
This mismatch leads to:
The outcome?
An insecure SDLC where empowerment exists without alignment.
We’re already seeing the consequences of this visibility gap.
These weren’t failures of tooling.
They were failures of process, culture, and shared accountability.
To address this new wave of risks, organizations must understand the behaviors driving them.
Common—and often invisible—issues now include:
There’s also a serious education gap.
Secure coding is often not part of formal training. Developers are left to figure it out themselves—often relying on forums or code snippets that may introduce even more risk. And when training is provided, it’s frequently outdated, irrelevant, or disconnected from real-world development contexts, as noted in the DevSecOps Maturity Model for Secure Software Development [2024, Gartner, Aaron Lord et al.].
Insecure behaviors don’t just happen post-deployment—they originate at the moment of authorship. That’s why static audits and point-in-time scans fall short.
What’s needed is continuous, contextual visibility into how developers write, review, and secure code.
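As a sketch of what "continuous" could mean in practice, assuming authorship context like the hypothetical record above is captured for every change, a lightweight check run on each push could flag changes whose context is incomplete instead of waiting for a periodic audit. The rules below are illustrative, not any specific product's logic.

```python
# Hypothetical sketch: a check intended to run on every push (e.g. in CI),
# not as a point-in-time audit. Each dict stands in for an authorship
# record like the one sketched earlier; keys and rules are illustrative.
from typing import Any

def flag_incomplete_context(changes: list[dict[str, Any]]) -> list[str]:
    """Return human-readable findings for changes lacking authorship context."""
    findings: list[str] = []
    for change in changes:
        sha = str(change.get("commit_sha", "unknown"))[:8]
        if not change.get("reviewed_by"):
            findings.append(f"{sha}: merged without a recorded review")
        if change.get("ai_tool") and not change.get("security_checks"):
            findings.append(f"{sha}: AI-assisted ({change['ai_tool']}) "
                            "with no security checks recorded")
    return findings

# Example with made-up data: one AI-assisted change, never reviewed or scanned.
print(flag_incomplete_context([{
    "commit_sha": "9f2c1ab7e04d",
    "author": "dev-a",
    "ai_tool": "copilot",
    "reviewed_by": [],
    "security_checks": [],
}]))
```

The specific rules matter less than where they run: at authorship time, with context attached, so findings map to behaviors rather than to artifacts alone.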
Solving this isn’t about adding more tools.
It’s about changing what—and who—we observe.
Developers don’t need more alerts. They need:
It’s no surprise developers are pushing back.
Many feel like they’re being handed yet another prescription—more tools, more scans, more tickets—instead of real solutions.
It’s as if each new control is just another pill, layered on top of a growing regimen they never asked for.
But here’s the hard truth: if developers want fewer prescriptions, they need to recognize themselves as authors of risk—not just endpoints in a security workflow.
That starts with visibility into how they write code, use AI, manage access, and respond to risk.
When security becomes a conversation about performance, not punishment, developers don’t get buried in controls.
They get trusted at the source.
These examples serve as a wake-up call: not to assign blame, but to recognize where organizations have mistaken empowerment for unstructured autonomy.
If developers are to be the front line of defense in modern software, they must be supported not just with tools, but with visibility and a culture that values secure craftsmanship.
This isn’t about bloating the stack.
It’s about providing just enough of the right support, while fostering a culture where developers take ownership of their performance, learn from mistakes, and continuously grow.
But let’s be clear: empowerment is not the same as autonomy without oversight.
In the name of developer experience, many teams have introduced ungoverned flexibility—decentralized tooling, fragmented workflows, skipped controls.
The result? Short-term acceleration that feels like velocity—until it collapses into blind spots, rework, and unmanageable complexity.
True empowerment requires shared visibility.
Not to control developers—but to enable accountability, alignment, and mutual trust.
Developers and security leaders must work from the same source of truth—on posture, performance, and progress.
This isn’t surveillance. It’s transparency with purpose: the same kind elite teams use to improve and win championships, not to punish.
Because without accountability, empowerment becomes fragility—not freedom.
At Archipelo, we believe software security doesn’t start in production—it starts at the keyboard.
That’s why we created Developer Security Posture Management (DevSPM): to close the critical blind spot where software is actually made.
In today’s AI-accelerated development landscape, the creation process itself has become a source of enterprise risk.
DevSPM provides:
We don’t treat developers as liabilities.
We treat them as the fifth pillar of modern software security.
Because in the era of human and AI co-authorship, every developer action is a security signal—and every decision shapes the future of software security.
By Kacper Skawiński, Product Lead, and Matthew Wise, CEO & Cofounder at Archipelo
Ready to bring visibility to the most critical layer of your SDLC?
→ Book a live demo and see how Archipelo helps teams align velocity, accountability, and security at the source.
Archipelo helps organizations ensure developer security, increasing software security and trust across the business.
Try Archipelo Now