Phishing attacks have been a prevalent cybersecurity threat for years, but with the advancement of artificial intelligence (AI), threat actors are now able to create more sophisticated and convincing phishing campaigns. This evolution has made these attacks even more challenging to detect and prevent, and threats are bypassing secure email gateways (SEGs) at an alarming rate.
How Threat Actors Use AI in Phishing Scams
Threat actors use AI to enhance phishing campaigns by creating highly personalized, targeted messages. AI can mimic the writing styles and language of a target's friends and colleagues, and it allows cybercriminals to quickly conduct reconnaissance on potential victims by analyzing vast amounts of data from social media, online profiles, and other sources to generate emails that are relevant to the recipient. This reconnaissance lets them gather information on an individual's online activity, interests, relationships, and more, significantly increasing their chances of a successful attack while minimizing the risk of detection.
In addition to personalized messages, AI algorithms can also analyze human behavior patterns to determine the best time to send phishing emails. For example, threat actors now have access to information that gives them insight into the hours when an individual is most likely to be distracted or tired, increasing the chances the target will fall victim to the scam.
Through automated tools, AI also enables cybercriminals to generate large volumes of phishing emails in a short amount of time. Since AI can be retrained, these tools adapt and evolve based on feedback received from previous attacks, making them even more effective at bypassing email security measures.
Furthermore, just like ChatGPT and other LLMs can turn sloppy writing into elegant prose for term papers, email attackers can leverage AI to create emails that are cosmetically “perfect.” This means that detecting attacks based on typos and other cosmetic errors will become less relevant.
The innovation pace, volume, personalization, and cosmetic perfection that AI can provide to phishing attackers make it a menace that is an order of magnitude greater than any other recent development. AI phishing is a generational email security threat.
The Limits of Defensive AI and the AI Email Security Gap
There has been a justifiable amount of excitement around the use of AI and ML models to aid in filtering out malicious emails. At Cofense, we use AI/ML models extensively to help us process the hundreds of thousands of suspicious email reports we receive in our Phishing Defense Center (PDC) SOC operation. Our trained models increase our efficiency, aiding our experts in producing in-depth phishing intelligence on SEG misses from around the world. So, we're bullish on defensive uses of AI.
However, while it is tempting to believe that defensive AI will "just take care of the threat," that is a mistaken notion. We will treat this topic in greater depth, but there is a very simple and easy-to-understand reason why defensive AI on its own, such as ML model-based SEGs, isn't enough protection: the learning race.
What do we mean by this? Email security ML models must be fed supervised training data (emails marked by humans) to learn about new exploits. However, as we know, attackers always have the initiative, and with AI they can innovate with unprecedented novelty and speed. This simple fact means that defensive AI SEGs will never catch up with offensive AI exploits. The result is a dangerous gap – what we call the AI email security gap.
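The learning-race problem can be illustrated with a toy example. The sketch below is a minimal naive Bayes text filter written for illustration only; it is not Cofense's model or any production SEG, and the tiny training corpus is invented. The point it demonstrates is the one above: a supervised model only recognizes lures that resemble its human-labeled training data, so a novel AI-generated lure with unfamiliar wording produces no signal at all.

```python
import math
from collections import Counter

def train(labeled_emails):
    """Build word counts per class from human-labeled (text, label) pairs.

    Illustrative only: real email security models use far richer features
    (headers, URLs, sender reputation) than bag-of-words.
    """
    counts = {"phish": Counter(), "legit": Counter()}
    totals = {"phish": 0, "legit": 0}
    for text, label in labeled_emails:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score(model, text):
    """Return the log-odds that the text is phishing (Laplace-smoothed)."""
    counts, totals = model
    vocab = len(set(counts["phish"]) | set(counts["legit"])) or 1
    log_odds = 0.0
    for word in text.lower().split():
        p_phish = (counts["phish"][word] + 1) / (totals["phish"] + vocab)
        p_legit = (counts["legit"][word] + 1) / (totals["legit"] + vocab)
        log_odds += math.log(p_phish / p_legit)
    return log_odds

# Invented training data: the model only knows these lures.
training = [
    ("verify your password at this link", "phish"),
    ("urgent invoice payment required", "phish"),
    ("meeting notes attached for review", "legit"),
    ("lunch friday works for me", "legit"),
]
model = train(training)

# A lure reusing known phishing vocabulary scores as phishing...
known_lure = score(model, "urgent password verify")
# ...but a novel lure sharing no words with the training set yields
# zero signal, slipping past until humans label and retrain.
novel_lure = score(model, "quick favor re the board offsite")
```

Retraining closes the gap only after humans label examples of the new lure, which is exactly the lag the learning race describes: the attacker moves first, and the model catches up afterward.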
The Need for AI + Human-Vetted Intelligence at Scale
To combat AI-generated phishing campaigns you need a multifaceted approach that leverages both AI/ML and the power of human intelligence at scale. While model-based SEGs and other advanced technologies can aid in detecting and stopping these sophisticated attacks, the critical role of human-vetted intelligence cannot be overlooked. Humans have something no AI security tool will ever have – institutional and person-to-person contextual knowledge of normal versus anomalous communication.
Cofense understands the power of human intelligence at scale, and over a decade ago, began building what today is the world’s largest (and only) global network of over 35 million Cofense-trained employees who report suspected threats 24/7/365. These reports are, by definition, diverse data sets because they are always and only based on emails that bypass SEGs, including AI SEGs.
Leveraging the diverse human intelligence derived from this network, Cofense offers robust security awareness training programs based on real threat scenarios to enable customers to train their employees on the latest phishing threats.
But training is only the first piece of the puzzle. The Cofense Phishing Detection and Response (PDR) solution rapidly remediates threats with diverse intelligence derived from your employees' reported emails combined with collective intelligence from our global reporting network. Our phishing forensic experts perform in-depth human vetting, combined with automated AI/ML analysis, and feed the PDR platform with unique SEG-miss threat intelligence featuring near-zero false positives.
Beyond Filtering to Risk Management
If we accept that AI-powered attackers will always be some steps ahead of even the best-trained AI SEGs, and that malicious emails will get through, then reducing dwell time with automated remediation isn’t enough. The risk of compromise requires security teams to perform ongoing risk management, which requires in-depth intelligence about those SEG misses. That’s why Cofense phishing intelligence is so valuable—it helps your team effectively manage risk.
Get Human-Vetted Intelligence at Scale
We’re now in the AI phishing era. The synergy between AI and human insight is paramount in the ongoing battle to safeguard sensitive information and mitigate this heightened threat.
Want to learn more about how you can close the AI email security gap? Contact us today.