Gone are the days when phishing was a numbers game with modest returns. Traditional phishing campaigns saw a 12% success rate and required significant manual effort for each attempt. But artificial intelligence (generative AI in particular) has rewritten these rules entirely.

In a controlled study of 101 participants, AI-generated phishing emails matched human experts with a 54% success rate. When humans and AI collaborated, the success rate nudged up to 56%. This wasn't just better writing: the AI system also demonstrated an uncanny ability to gather target information from the web (OSINT), building accurate profiles from public data 88% of the time.

Perhaps the most striking finding is the dramatic reduction in effort required. Traditional targeted attacks required:
➖ 23.5 minutes of research per target
➖ 10.2 minutes crafting each email
➖ Total: roughly 34 minutes per attempt

The AI system collapsed this to about one minute total. Even with human oversight, the process took only 2.7 minutes, a 92% reduction in time invested.

This efficiency creates a troubling economic reality. With a typical conversion rate of 2.35% (the percentage of clicked links that lead to successful exploitation), AI automation reduces costs by up to 50 times. The mathematics become profitable at surprisingly low numbers: just 2,859 targets for high-success scenarios. Even with minimal conversion rates of 0.6%, the economics work at scale.

The same GenAI technologies have potential for defence:
➖ Claude 3.5 Sonnet achieved a 97.25% detection rate
➖ Zero false positives on legitimate email
➖ It caught sophisticated attacks that fooled human reviewers

We're entering an era where AI dominates both attack and defence: it will be cheap and plentiful for attackers, while defenders with AI skillsets become gold. Machine-speed cybersecurity across the cognitive, network and identity layers will become standard.
Welcome to the brave new world.
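The economics described above can be sketched with back-of-the-envelope arithmetic. Only the time figures (roughly 34 minutes manual vs one minute AI per target), the 54% click rate, and the 2.35% conversion rate come from the post; the hourly labour rate is an illustrative placeholder, so this shows the direction of the cost collapse rather than the study's exact 50x figure.

```python
# Back-of-the-envelope sketch of AI vs manual phishing campaign economics.
# The $50/hour labour rate is a hypothetical placeholder.

def campaign_cost_and_victims(n_targets, minutes_per_target, hourly_rate,
                              click_rate, conversion_rate):
    """Labour cost of a campaign and its expected number of exploited victims."""
    labour_cost = n_targets * minutes_per_target / 60 * hourly_rate
    expected_victims = n_targets * click_rate * conversion_rate
    return labour_cost, expected_victims

# 1,000 targets, same click and conversion rates, different labour per target.
manual = campaign_cost_and_victims(1000, 34.0, 50.0, 0.54, 0.0235)
ai = campaign_cost_and_victims(1000, 1.0, 50.0, 0.54, 0.0235)

# Same expected victims, but the manual campaign costs 34x more labour.
print(f"manual: ${manual[0]:,.0f}, ai: ${ai[0]:,.0f}, victims each: {manual[1]:.1f}")
```

Under these assumptions the expected yield is identical, so the attacker's decision reduces entirely to labour cost per target, which is exactly why automation changes the break-even point so dramatically.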
Email attack simulation study results
Summary
Email attack simulation study results refer to research findings from simulated phishing and email attack scenarios designed to test both human and technical defenses against malicious emails. These studies reveal how organizations and individuals respond to targeted attacks, measure the impact of training, and highlight changing tactics from attackers, including the use of artificial intelligence.
- Prioritize diverse testing: Incorporate different types of phishing lures and difficulty levels in simulations to reflect real-world threats and measure true vulnerability.
- Track and analyze outcomes: Monitor click rates, reporting patterns, and behavioral trends to identify weaknesses and areas for improvement across your workforce.
- Build collective resilience: Encourage fast reporting and team-based awareness, as group dynamics can help catch attacks before disaster strikes, even when individual training is limited.
What if I told you that even after training thousands of employees, we can’t reliably stop them from clicking on phishing emails? That’s exactly what a new large-scale study suggests (Source: https://lnkd.in/dhWazB2h).

Researchers worked with a fintech firm, ran phishing simulations on 12,511 participants, used two training modes (lecture vs interactive + exercises), and measured outcomes using a rigorous standard: the NIST Phish Scale. They found:

Phishing difficulty matters - a lot. As lures got harder, click rates jumped from ~7% on “easy” ones to ~15% on “hard” ones.

But training? It made no statistically significant difference in reducing clicks or raising reporting rates.

Interestingly, in some campaigns the workforce collectively showed resilience - reports preceded clicks (“inoculation patterns”) - even though individual training wasn’t effective.

The effect sizes of training were extremely small (< 0.01), meaning even where training had some effect, it likely doesn’t move the needle in real operations.

That said, the NIST Phish Scale proved useful: it reliably predicted user behavior across difficulty tiers.

🧠 What this means over cocktails:
➡️ Don’t overpromise on training - phishing awareness programs are still useful, but we must be honest about their limits. They’re not magical shields.
➡️ Use risk-based simulations - incorporate standardized difficulty frameworks (like the NIST Phish Scale) so your tests reflect real threats, not toy phishing emails.
➡️ Design for collective resilience, not just individuals - the notion that “somebody will raise the flag before disaster” is powerful. Encourage reporting, feedback loops, and fast incident response, because group dynamics matter.
➡️ Defenses must be multi-layered - human factors alone won’t save us. Email filtering, URL rewriting, strong authentication, real-time threat intel: these need to carry the bulk of the load.
➡️ Measure honestly & iteratively - track how training & controls perform over time. Compare investments (training vs technical) by real metrics, not vanity stats.

Awareness is useful, but it’s not a silver bullet. Build collective resilience and measure honestly.

This is your Cyber Aperitivo. Sip smart, stay cyber sharp 🍸

#CyberSecurity #Phishing #SecurityAwareness #HumanFactors #DefenseInDepth
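The core measurement in the study above is simple: click rate broken out by lure difficulty tier. A minimal sketch, using made-up records shaped to reproduce the ~7% easy / ~15% hard split the post reports (the record schema is an assumption, not the study's actual data format):

```python
# Sketch: click rate by phishing-lure difficulty tier, in the spirit of the
# NIST Phish Scale analysis described above. The records are synthetic.
from collections import defaultdict

results = (
    [{"difficulty": "easy", "clicked": c} for c in [1] * 7 + [0] * 93]
    + [{"difficulty": "hard", "clicked": c} for c in [1] * 15 + [0] * 85]
)

def click_rate_by_tier(records):
    """Map each difficulty tier to its fraction of clicked simulations."""
    counts = defaultdict(lambda: [0, 0])  # tier -> [clicks, total]
    for r in records:
        counts[r["difficulty"]][0] += r["clicked"]
        counts[r["difficulty"]][1] += 1
    return {tier: clicks / total for tier, (clicks, total) in counts.items()}

rates = click_rate_by_tier(results)
print(rates)  # easy vs hard click rates
```

Reporting per-tier rates like this, rather than one blended click rate, is what lets you tell whether a "better" number means improved behaviour or just easier lures.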
-
70% of staff at this £18Bn IT giant were clicking on phishing links. 12 months later, we cut it down to 20%. I led the transformation as CISO. Here's exactly how we did it, step by step:

We had a workforce of 300,000 people across 150+ countries in 2021. Different cultures, different languages, different inbox habits, but one common problem: inbox fatigue. Hundreds of emails a day meant people stopped thinking. If it hit their inbox, they opened it. And when it looked remotely legitimate? They clicked. Even the most senior execs, the ones with the most sensitive data, were falling for the bait. As CISO, I wanted to fix this laissez-faire attitude without humiliating anyone. Here's how:

Step 1: Awareness Training
We launched 5-10 minute micro-learning modules that dove into
• why phishing exists
• what criminals get out of it
• the tell-tale signs (bad grammar, lookalike domains, letter swaps in addresses, etc.)
The lessons were practical and relevant to life at work and at home. People finished the module knowing exactly what to check before clicking.

Step 2: Realistic & Layered Phishing Simulations
Now it was time to test everyone. We started with easy simulations and built complexity over time:
→ Simple: obvious scams like “Nigerian prince” emails
→ Intermediate: fake brand offers from Apple or similar
→ Advanced: MS login pages so convincing they fooled seasoned IT staff
Every “fail” on a simulation triggered an instant education page showing exactly what they missed. We sent the emails in waves over 14 hours, with multiple variations so colleagues couldn’t tip each other off. We used the local language in each country and avoided dirty tricks like fake bonus announcements.
Step 3: Tracking the Data
We built features into each email that helped us track:
• link clicks
• email opens
• data entered (and whether it was real or spoofed)
• reports (via a “report phishing” button)
This let us see where someone stopped in the chain and reward them for correct reporting.

Step 4: Analyzing & Reporting Findings
We analysed the data above by country, seniority, and cultural trends. Key findings:
• Colleagues in some cultures were more likely to open emails 'just in case' they were from a boss.
• Some countries will not allow phishing simulations at all.
• Execs were the WORST offenders.

With this info, we moved on to:

Step 5: Education & Implementing Solutions
On top of the built-in education pages, we hosted workshops with repeat offenders to
• dissect the email together
• point out red flags they missed
For the top-level execs who were still clicking after a year, we held direct coaching sessions, explaining that with their access came the highest stakes.

By the end of the programme:
• Click rate: 70% → 20%
• A dramatic increase in phishing reports
• A cultural shift where questioning suspicious emails became the norm
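The per-recipient tracking in Step 3 amounts to classifying how far each person progressed through the chain. A minimal sketch of that classification; the event names and the simple "furthest stage wins, reporting trumps everything" rule are assumptions for illustration, not the actual tooling:

```python
# Sketch of per-recipient chain classification for a phishing simulation.
# Event names ("opened", "clicked", ...) are illustrative assumptions.

FUNNEL = ["opened", "clicked", "entered_data"]  # ordered from least to most severe

def furthest_stage(events):
    """Return how far a recipient progressed, or 'reported' if they flagged it."""
    if "reported" in events:
        return "reported"  # reporting is the desired outcome, so it wins
    for stage in reversed(FUNNEL):
        if stage in events:
            return stage
    return "no_action"

# A recipient who opened and clicked but went no further:
print(furthest_stage({"opened", "clicked"}))
```

Bucketing every recipient this way is what makes the Step 4 breakdowns (by country, seniority, culture) possible: each slice is just a tally of these outcome labels.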
-
Incident Response Case Study using Azure Data Explorer and KQL - Tracing a Multi-Stage Phishing Attack

Recently, I completed a full email forensics & incident response investigation simulating a real-world enterprise phishing attack, from initial access to lateral movement and endpoint compromise. This exercise wasn’t about alerts alone. It was about thinking like an attacker, validating hypotheses with data, and stitching together evidence across multiple telemetry sources.

🧩 What I Investigated
Using Microsoft Defender Advanced Hunting (KQL) and Azure Data Explorer, I analyzed:
✔ Email telemetry (EmailEvents)
✔ User interaction data (UrlClickEvents)
✔ Sender reputation & domain spoofing
✔ User behavior before and after compromise

🚨 Key Findings (High Level)
🔹 Initial Access: a phishing campaign originated from a spoofed IT helpdesk domain mimicking the company’s real domain. ➡️ Classic credential harvesting via lookalike domains.
🔹 Credential Compromise & Lateral Movement: multiple users interacted with the phishing URLs. Compromised accounts were then used to send internal-looking emails to bypass trust barriers.
🔹 Attack Escalation: one compromised user sent a malicious attachment disguised as a security update, which was opened by the final victim, resulting in endpoint compromise.
🔹 ATT&CK Mapping:
T1566 – Phishing
T1078 – Valid Accounts
T1204 – User Execution
Kill Chain: Delivery → Exploitation → Lateral Movement → Impact

🛠️ What I Practiced Technically
✅ Advanced KQL correlation across multiple tables
✅ Timeline reconstruction using joins on NetworkMessageId
✅ IOC extraction & pivoting
✅ Detection of domain spoofing & credential phishing
✅ Mapping findings to MITRE ATT&CK & the Cyber Kill Chain
✅ Writing detection logic suitable for SOC automation

🎯 Takeaway
Threat hunting isn’t about single alerts. It’s about connecting weak signals across time, identity, and behavior.
This exercise reinforced why:
- Strong KQL skills matter
- Context > volume
- Detection engineering is as important as response

If you’re working in SOC, DFIR, Threat Hunting, or Cloud Security, I’d love to exchange notes.

#CyberSecurity #SOC #SecurityEngineering #ThreatHunting #IncidentResponse #KQL #BlueTeam #DetectionEngineering #MITREATTACK #CloudSecurity #DFIR
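The NetworkMessageId correlation described above could look roughly like this in Advanced Hunting. Table and column names follow the public Microsoft Defender schema (`EmailEvents`, `UrlClickEvents` both carry `NetworkMessageId`), but treat this as a sketch of the technique, not the exact queries used in the investigation:

```kql
// Sketch: tie delivered emails to the URL clicks they produced,
// joining the two telemetry tables on NetworkMessageId.
EmailEvents
| where Timestamp > ago(7d)
| project NetworkMessageId, DeliveredTime = Timestamp,
          SenderFromAddress, RecipientEmailAddress, Subject
| join kind=inner (
    UrlClickEvents
    | project NetworkMessageId, ClickTime = Timestamp, Url
  ) on NetworkMessageId
| order by ClickTime asc
```

From this joined view, timeline reconstruction is mostly sorting by `ClickTime` and pivoting on the recipients and URLs that appear, which is what lets weak signals from separate tables line up into one narrative.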
-
I ran a phishing simulation for our clients. 80% of their employees shared their passwords.

Context: we sent an Amazon Pay gift card email to our client’s employees. Message: 'You were one of our best performers in 2025! Here’s your reward!' The link goes to a domain we control, where we track each event.

So far, the numbers look like this:
- We sent about 11,236 emails.
- Around 6,650 people opened them.
- About 5,615 clicked the link.
- Close to 2,589 entered their credentials.
That’s a success rate of 70 to 80%.

The security drills that popular security vendors run are predictable - for example, 'forgot password' emails or IT administrator emails. People recognise the pattern and stay alert. Real attacks play on timing and emotions. A gift card during Christmas gives a small dopamine hit. You don’t overthink it. That’s exactly what attackers count on.

When we run campaigns at ApniSec, we think like attackers and put real time and effort into the story. After the campaign, we share a detailed report with leadership. If this kind of testing feels uncomfortable, it usually means it’s revealing something important.

Think this approach could help your team? Let's have a quick chat.

P.S: I have added the results of our campaign in the comments. Do check it out.
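"Success rate" for a campaign like this depends heavily on which denominator you pick, so it is worth computing each stage of the funnel explicitly. Using the post's own figures:

```python
# Stage-by-stage conversion for the campaign numbers quoted above.
sent, opened, clicked, entered = 11236, 6650, 5615, 2589

open_rate = opened / sent           # share of all recipients who opened
click_of_opens = clicked / opened   # share of openers who clicked
creds_of_clicks = entered / clicked # share of clickers who entered credentials
creds_of_sent = entered / sent      # share of all recipients who gave credentials

print(f"opened {open_rate:.0%}, clicked-of-opened {click_of_opens:.0%}, "
      f"creds-of-clicked {creds_of_clicks:.0%}, creds-of-sent {creds_of_sent:.0%}")
```

The headline figure sits at the click stage (over 80% of openers clicked); measured against everyone who received the email, credential capture is closer to one in four, which is still an alarming result for a single lure.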
-
Abstract: "This paper empirically evaluates the efficacy of two ubiquitous forms of enterprise security training: annual cybersecurity awareness training and embedded anti-phishing training exercises. Specifically, our work analyzes the results of an 8-month randomized controlled experiment involving ten simulated phishing campaigns sent to over 19,500 employees at a large healthcare organization. Our results suggest that these efforts offer limited value..."

Key Findings
- Annual Awareness Training:
-- No significant correlation between recently completed annual training and reduced phishing simulation failures.
-- Phishing failure rates were consistent regardless of the time elapsed since the last training.
- Embedded Phishing Training:
-- While training reduced failure rates slightly, the improvement was modest (an average reduction of 1.7% in failure rates).
-- High variability in phishing lure efficacy (1.8% to 30.8% failure rates) often overshadowed the benefits of training.
-- Users often spent minimal time on training material; more than half spent less than 10 seconds.
- Training Engagement:
-- Only 24% of users completed the training after failing simulations.
-- Interactive training reduced future phishing failure rates by 19%, but static training showed negligible benefits and sometimes increased failure rates for frequent participants.
- Behavioral Insights:
-- Most users will eventually fall for a phishing attack despite initial success in simulations.
-- Current training primarily targets those who fail simulations, leaving many users untrained and susceptible.

https://lnkd.in/eczWVrHr
-
I ran a phishing simulation on a 30-person company last month. The email looked like a SharePoint document sharing notification. Totally normal - the kind of thing their team sees every single day.

Results:
• 11 out of 30 employees clicked the link (37%)
• 7 entered their credentials on the fake login page (23%)
• 3 of those 7 were managers with elevated permissions
• Average time from email delivery to first click: 4 minutes

Four minutes. In a real attack, an attacker with those credentials could:
→ Access SharePoint and OneDrive files
→ Send emails as that person
→ Move laterally through the organization
→ Deploy ransomware within hours

Here’s what changed everything for this company:
1. Microsoft Defender for Office 365 - catches phishing before it hits inboxes
2. Conditional Access - blocks sign-ins from suspicious locations
3. Attack Simulation Training - built into Microsoft 365 E5/Defender

After 90 days of monthly simulations + Defender tuning, their click rate dropped to 4%. From 37% to 4%. That’s not a tool upgrade. That’s a culture shift.

Want to know how your team would score? Comment PHISH and we’ll set up a complimentary simulation.

#PhishingAwareness #CyberSecurity #MicrosoftDefender #SmallBusiness #SecurityAwareness
-
“After sending 10 different types of phishing emails over the course of eight months, the researchers found that embedded phishing training only reduced the likelihood of clicking on a phishing link by 2%. This is particularly striking given the expense in time and effort that these trainings require, the researchers note. Given the results of the study, researchers recommend that organizations refocus their efforts to combat phishing on technical countermeasures. Specifically, two measures would have better return on investment: two-factor authentication for hardware and applications, as well as password managers that only work on correct domains, the researchers write.” https://lnkd.in/eC4aQZzc
-
When Phishing Simulations Backfire: Why Click Rates Don’t Tell the Whole Story

A large-scale study out of San Diego, involving nearly 20,000 employees, put phishing simulations with follow-up training to the test. The takeaway? This approach isn’t just ineffective - it may actually backfire.

In the first month, about 10% of employees clicked on a phishing email. By the eighth month, more than half had fallen for at least one.

The research also showed how much context matters. An Outlook password reset email barely fooled anyone - just 1.82% clicked. But a message about changes to the vacation policy? That tricked 30.8%.

So if you’re measuring the success of your security awareness program purely by click rates, be careful. The numbers might not reflect progress at all; they might just reflect how tempting the template was.

https://lnkd.in/dWvvzewS

Alexandre Nunes Barros Douglas Mauricio Capellato Marcelo Nunes Barros Gustavo Marques Ken Fanger Hacker Rangers Security Awareness Danny Ng Cintia Takeda João Leonidas Silvana Batista Arrais