Cyber Teams
diskordia,
Jul 29
2025
From threat detection to autonomous scripting, artificial intelligence is rewiring the way we defend systems, respond to incidents, and even simulate attacks. But here’s the thing: while threat actors are steamrolling ahead with AI-backed exploits, too many teams are still figuring out how to integrate AI meaningfully (and securely) into their workflows.
This year’s Global Cyber Skills Benchmark saw 795 teams and over 4,500 players work through a whole host of challenges—including brand new AI and Machine Learning (ML) categories.
Read on to get the lowdown on what the data from that CTF tells us, from rising AI skills demand and where your team might be falling, to the potential risks that come with integrating AI into your systems.
AI this, AI that—sometimes, it can feel like a game of buzzword bingo. Rest assured, AI in cybersecurity is much more than that: it’s operational, tactical, and already being employed by both sides of the security arms race. Security teams are already using AI tools to do things like:
Automate noisy triage tasks.
Write and decode scripts faster.
Enhance red team tooling with autonomous behavior.
Detect anomalies that traditional rule-based systems might miss.
The CTF data highlights this shift in mindset. Teams solved AI and ML-themed challenges at average rates of 37% and 30.1% respectively, outpacing more ‘traditional’ areas like Secure Coding (18.7%) and Web Security (21.1%).
But solving challenges around AI as a threat doesn’t necessarily mean that teams know how to use AI as a tool internally. Recognizing the threat is a vital first step; but using AI as a force multiplier to augment workflows, enhance decision-making, and improve offensive simulation—that’s the next big step.
This is where HTB’s AI Red Teamer Path comes into play. It supports the need for a new hybrid skillset, focusing on simulating adversarial AI attacks, probing model vulnerabilities, and uncovering dangerous capabilities. Built in collaboration with Google, and aligned with the Secure AI Framework (SAIF), this training path empowers professionals with the hands-on, scenario-based skills they need to test AI the same way attackers would.
For all the progress, the report also highlights risk zones, AKA places where the AI-skills gap could morph into a real liability.
In Government, teams scored just 17.8% in AI challenges, despite aggressive public sector investment in AI-driven threat operations. Case in point: US Cyber Command has launched a dedicated AI program, complete with a roadmap, task force, and pilots to fast-track adoption. The mismatch between strategy and workforce skills could be a barrier to execution.
Education fared worse, with 0% in Secure Coding and negligible scores in AI domains—raising red flags for an industry meant to feed the future cyber workforce.
Energy & Utilities had decent scores in legacy systems and forensics, but modern AI-readiness hovered at 28%, underscoring the need to bridge old and new.
Even high-performing sectors like Finance and Retail showed inconsistencies: decent AI solve rates on paper, but persistent gaps in foundational skills like Web and Cloud security. Translation? Without a secure base, AI becomes a force multiplier for attackers.
When teams lack AI proficiency, they’re less efficient and more vulnerable to attackers who are already confident AI users. These chinks in the armor could look like:
GenAI phishing and impersonation tactics bypassing detection.
Prompt injection and adversarial ML attacks hitting real-world apps.
Over-reliance on AI without secure coding knowledge, creating subtle but dangerous flaws
These threats are out there right now. But on the flipside of that are those top-performing teams using AI tools to their advantage. They’re the ones training AI agents to support triage, or chaining AI models in red team tooling. They’re using LLMs to decode, generate, and translate more complex syntax.
This is a snapshot of what it means to use AI as a force multiplier rather than just another tool. Mastering this mindset is the key to pulling your team forward.
Download the full 2025 Global Cyber Skills Benchmark Report
So how are leading teams actually using AI today? The report offers real-world signals:
SOC augmentation: AI agents assist with triage, correlation, and noise reduction, freeing analysts to focus on critical incidents.
Red team innovation: Teams build autonomous payloads, generate obfuscated code, and simulate adversarial behavior using AI.
DevSecOps acceleration: Secure boilerplate code, IaC auditing, and vulnerability pattern recognition; AI is changing how we build.
With that in mind, we’ve created the HTB Model Context Protocol (MCP). This allows teams to integrate AI agents directly into CTF challenges, offering secure, self-served AI agent integration in CTFs, accessible directly through the tools your team is already using.
But looking to the future, we can also consider developing integrations like SIEM augmentation, agent-driven scripting, even autonomous red teaming—training that reflects what tomorrow’s SOC or purple team will actually look like.
If your current training plan treats AI like an optional topic, it’s time to rethink. Here’s how to embed AI upskilling into your workforce development strategy:
Start with fundamentals: Teach your team to evaluate AI outputs, not just generate them. Secure Coding remains critical—especially in AI-augmented workflows.
Invest in targeted training: Focus on building AI-first capabilities, blending AI fundamentals with adversarial tooling and hands-on lab work that mimic real-world use cases. HTB’s AI Red Teamer Path does a great job here, and paired with the right CTF, your training goes from theoretical to tactical.
Organize AI-enabled CTFs: Events like HTB’s benchmark CTF offer hands-on experience with AI-powered tasks and adversarial simulations.
Incorporate CTEM: Continuous Threat Exposure Management (CTEM) helps quantify your AI-related risks—bridging the gap between theory and action.
Remember: training should be practical, contextual, and frequent. Static content won’t cut it in an evolving AI landscape.
Find out how our AI Red Teamer Path can help
Based on the report, here’s what’s likely to define the next 12–24 months in AI-infused security:
Adversarial ML becomes mainstream; offensive and defensive roles will both need fluency
AI-native workflows emerge across SOC, IR, and AppSec teams
Hiring priorities shift toward hybrid skills: AI + cloud, AI + red team, AI + dev
The leaders won’t just be the ones who buy the best tools. They’ll be the teams that know how to build with, train alongside, and defend against intelligent systems.
AI skills are no longer a nice-to-have. They’re the next must-have in cybersecurity. Whether you’re managing risk at the org level or running day-to-day operations, your team’s ability to understand, use, and defend against AI could determine your next breach, or your next big win.
Don’t just react. Get ahead. Check out 2025 Global Cyber Skills Benchmark Report to benchmark your team’s skills, explore AI training strategies, and chart your course to future-proof security.