By: Josh Bartolomie, Chief Security Officer
The Illusion of Readiness
I have spent a good portion of my career inside and around security awareness programs, watching them evolve from afterthought compliance checkboxes to multimillion-dollar platform investments. Along the way, I have seen a pattern repeat itself more times than I can count: organizations running the same simulation campaigns year after year, cycling through the same lure types, refreshing the same dated templates, and quietly gravitating toward content that generates favorable metrics. I understand why it happens. Programs need to demonstrate results, leadership wants to see numbers improve, and auditors want to see evidence that training occurred. The path of least resistance is to run campaigns you know employees will eventually learn to recognize.
The uncomfortable truth is that when a program optimizes for favorable metrics rather than preparedness, it stops being a security control and starts being a reporting artifact. There is a meaningful difference between an organization that has genuinely reduced its phishing susceptibility and one that has simply gotten very good at passing its own tests.
Here is where I have seen it break down: the simulation content often bears little resemblance to what adversaries are sending to a specific organization or industry. Phishing remains a dominant initial access vector for organizational compromises. Breaches keep happening. Executives keep asking why. Boards keep asking why. In many cases, the answer is sitting right there in the training program.
The Simulation Trap
Generic phishing simulations (recycled fake invoice emails, HR policy update lures, 'your password is expiring' notices) were built to be recognizable. To be clear, they do represent real phishing tactics at the most basic level. But the intent was never to replicate what a sophisticated threat actor actually sends. They were designed to fail predictably, generate a report, and demonstrate to auditors both that training happened and what a baseline phishing attack could look like.
To be fair, the field has matured a great deal over the last several years. Many organizations have moved away from the more generic templates and are actively working to build relevant and realistic simulation content. That progress is real and worth acknowledging. But even with improved templates, the underlying dynamic has not fully resolved itself. Employees still learn the rhythm of the test. Click rates still get treated as the primary success metric. In the background, real campaigns are getting smarter, more targeted, and more convincing faster than most simulation libraries are being updated.
Verizon's 2025 Data Breach Investigations Report, based on over 12,000 confirmed breaches, tells the story across multiple dimensions. Phishing appears directly as the initial access vector in 16% of breaches. Stolen credentials, which are substantially phishing-sourced (the DBIR itself acknowledges that phishing frequently generates the credentials that become the recorded entry point), account for another 22%. Broaden the lens to the human element as a whole and that figure climbs to roughly 60% of all breaches.[1] Cisco Talos' Q1 2025 Incident Response Trends, drawn from real-world IR cases rather than survey data, puts phishing involvement in initial access at 50% of engagements.[2] If simulation programs were working the way we assume, those numbers would be trending differently.
When an employee fails a simulation built on a one- or two-year-old lure template, you have not measured anything meaningful about their ability to handle what is actually hitting their inbox this week. You have measured how well they recognize your testing infrastructure.
The Intelligence Gap
Threat actors are not standing still while defenders run quarterly simulation campaigns. The adversarial toolkit has expanded dramatically, and the most significant accelerant has been AI.
IBM's Cost of a Data Breach Report 2025 ranks phishing as the number one initial attack vector for the first time, with an average breach cost of $4.88 million per incident.[3] The attacks behind those breaches are not the clumsy, typo-laden emails that defined the genre a decade ago. Security researchers have documented AI-generated phishing at scale: hyper-personalized messages that reference real projects, mimic executive communication styles, and leverage context that looks legitimate because it is drawn from public sources like LinkedIn, earnings calls, and press releases.
When the gap between what employees are trained to recognize and what adversaries are actually sending widens this much, the simulation program is not a defense. It is a confidence measure for a threat that no longer exists at scale.
The answer is not running more simulations. It is running the right ones, built on intelligence from actual campaigns targeting your industry, your sector, and your peers.
Rethinking What You Are Actually Measuring
Here is the diagnostic question I would push any security leader to ask: what does your program tell you when an employee does the right thing?
Most simulation programs are designed to detect failure: click rates, credential submission rates, time-to-click. These are useful data points, but they are one-dimensional. They tell you something went wrong. They do not tell you anything about the detection and reporting capability you have actually built.
The metric that matters most, and the one most programs fail to measure well, is the report rate: not just whether an employee avoided a click, but whether they saw something suspicious, recognized it as a potential threat, and reported it through the right channel. I have spent considerable time on this problem from the inside, including work that resulted in patented approaches to automated processing of employee-reported suspicious email.
That distinction is the difference between an employee who passes a test and an employee who is a functional part of your defense.
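To make the distinction concrete, here is a minimal sketch of what measuring report rate alongside click rate could look like. The record structure and field names are hypothetical, not any vendor's schema; the point is that "reported without clicking" is a separate, first-class number, not something inferred from a low click rate.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SimulationResult:
    """Hypothetical per-recipient outcome for one simulation campaign."""
    clicked: bool
    reported: bool
    minutes_to_report: Optional[float]  # None if the recipient never reported

def program_metrics(results: list[SimulationResult]) -> dict[str, float]:
    """Summarize a campaign with report rate alongside the usual click rate."""
    n = len(results)
    clicks = sum(r.clicked for r in results)
    reports = sum(r.reported for r in results)
    # Recipients who reported and never clicked: recognition plus correct escalation
    clean_reports = sum(r.reported and not r.clicked for r in results)
    times = sorted(r.minutes_to_report for r in results if r.minutes_to_report is not None)
    return {
        "click_rate": clicks / n,
        "report_rate": reports / n,
        "clean_report_rate": clean_reports / n,
        "median_minutes_to_report": times[len(times) // 2] if times else float("nan"),
    }
```

A program optimized only on `click_rate` can look healthy while `clean_report_rate` sits near zero, which is exactly the failure mode described above: employees who avoid the test without ever feeding the SOC.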
From Compliance Artifact to Intelligence Layer
Security teams that receive timely, employee-reported phishing have measurably shorter dwell times than those relying on automated detection alone. That outcome does not happen by accident. It happens when employees are trained on content that looks like what adversaries actually send (realistic lures, current techniques, industry-specific context), so that genuine recognition capability is built rather than pattern-matching to a familiar test format.
The SANS 2024 Security Awareness Report found that programs aligned to actual threat intelligence show significantly stronger behavioral outcomes than generic simulation programs.[4] NIST SP 800-50 Rev 1, updated in 2024, explicitly calls for role-based, threat-relevant awareness training rather than one-size-fits-all approaches.[5] The regulatory and research consensus has moved: threat-informed training is no longer a nice-to-have differentiation. It is the baseline expectation for a defensible program.
The goal is to build what I would describe as a distributed human sensor network: a workforce that is trained, contextualized, and connected to the SOC through a reporting mechanism that actually works. When that exists, your people are not a liability to be managed through click rate suppression. They are an active intelligence layer.
Closing the Loop
The shift from simulation-for-compliance to threat-informed training requires connecting three things that often operate in silos: the threat intelligence coming into your organization, the simulation content your employees are trained on, and the reporting pipeline that gets suspicious emails in front of analysts.
When those three are aligned, something changes. Employees who see a real phishing attempt are not guessing. They recognize it, because they have trained on content that reflects how adversaries actually operate today, not how they operated when the simulation library was last updated. When they report it, that report has value. It becomes an input into detection. It closes the loop.
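One small piece of that alignment can be sketched in code. The idea: let the lure categories observed in recent intelligence and employee reports drive which simulation templates run next, instead of rotating a static library. Everything here (the category names, the library structure, the first-template-per-category selection) is a simplifying assumption for illustration, not a production scheduler.

```python
from collections import Counter

def pick_templates(recent_lures: list[str],
                   library: dict[str, list[str]],
                   k: int = 3) -> list[str]:
    """Choose up to k simulation templates, favoring the lure categories
    most often observed in recent intel and employee reports.

    recent_lures: observed categories, e.g. ["invoice_fraud", "mfa_fatigue"]
    library: category -> available template IDs (hypothetical structure)
    """
    # Count only categories we can actually simulate
    freq = Counter(cat for cat in recent_lures if cat in library)
    picks: list[str] = []
    for cat, _count in freq.most_common():
        if library[cat]:
            picks.append(library[cat][0])  # naive: first template per category
        if len(picks) == k:
            break
    return picks
```

The selection policy is deliberately naive; the load-bearing idea is the input. When the simulation queue is derived from what is actually being reported and observed, the library cannot silently drift years behind the threat.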
The central question every security leader should be able to answer honestly: if your employees are being trained on threats that no longer reflect what adversaries are sending, what exactly are you measuring, and does your board understand the difference between a low click rate and an organization that is actually harder to compromise?
Those are not the same thing. The organizations that figure that out first will have a meaningful advantage over those still optimizing for a test no one is trying to beat.
References
[1] Verizon, "2025 Data Breach Investigations Report." Based on 12,195 confirmed breaches; phishing as direct initial access vector in 16% of breaches; stolen credentials in 22%; human element involved in approximately 60% of all breaches.
[2] Cisco Talos, "IR Trends Q1 2025." Phishing involved in initial access in 50% of incident response engagements analyzed.
[3] IBM / Ponemon Institute, "Cost of a Data Breach Report 2025." Phishing ranked #1 initial attack vector for the first time; $4.88M average breach cost per phishing-initiated incident.
[4] SANS Institute, "2024 Security Awareness Report." Threat-intelligence-aligned training programs show significantly stronger behavioral outcomes.
[5] NIST SP 800-50 Rev 1 (2024). Updated federal guidance calls for role-based, threat-relevant awareness training over one-size-fits-all simulation approaches.