
AI won’t replace cyber teams. It will redefine what great teams look like.

Reflections from Gerasimos Marketos, Chief Product Officer at Hack The Box (HTB), following his RSAC 2026 interview on the Security by Default podcast.

May 14, 2026

Listen to the full RSAC 2026 interview with Gerasimos Marketos on the Security by Default podcast.

At RSAC 2026, it was impossible to miss the theme dominating the show floor: AI is everywhere.

For cybersecurity leaders, that brings both promise and pressure. AI can accelerate investigations, streamline repetitive work, support threat analysis, and help teams move faster in environments where speed increasingly determines outcomes. But it also introduces new risks, new attacker behaviours, and new questions about what cyber teams need to know in order to stay relevant.

In his live interview at RSAC 2026 on the Security by Default podcast, Gerasimos Marketos, Chief Product Officer at HTB, discussed exactly this shift: how AI is reshaping cybersecurity skills, why hands-on learning matters more than ever, and why the future is not humans versus AI, but skilled humans working with AI.

The biggest takeaway is clear: AI is an accelerator, not a replacement. And in cybersecurity, acceleration without skill is not enough.

AI changes the pace, but skills still determine the outcome

During the interview, Gerasimos reflected on HTB’s recent research comparing human teams and AI-enabled teams in cybersecurity benchmark environments. The latest benchmark brought together around 1,500 human teams and 150 AI teams, with the AI side operating in a human-assisted model.

The findings point to a more nuanced reality than the familiar “AI will replace humans” narrative.

AI can dramatically increase productivity, especially for skilled practitioners. In competitive environments where time matters, AI can help teams move faster, analyze outputs more efficiently, and explore possible paths at greater speed. But it does not remove the need for expertise. As Gerasimos noted in the discussion, practitioners still need to know what questions to ask, what prompts to use, and how to reason through what comes back. That distinction is critical.

AI can help a team move faster, but it does not automatically make the team better. Just as a speed boost in a racing game does not make someone a better driver, AI does not replace the underlying skill required to understand the environment, make decisions, and adapt when the path is unclear. In fact, the more capable AI becomes, the more important that underlying judgment becomes.

From human-in-the-loop to human-on-the-loop

The next stage of AI in cybersecurity is not only about automation. It is about autonomy. Security teams are no longer just experimenting with tools that summarize alerts or recommend next steps. They are beginning to work with systems that can chain actions together, support workflows, reason across inputs, and operate with increasing independence. That changes the human role.

In the human-in-the-loop model, people approve or check each step. That made sense when AI systems were mainly assistants. But as AI agents become more autonomous, humans cannot manually validate every action without creating new bottlenecks. The target state is increasingly human-on-the-loop: skilled practitioners supervising autonomous systems, challenging their outputs, providing context, and intervening when needed. This is not a reduction in the importance of people. It is a shift in what people must be able to do.

Cyber professionals will need to understand when to trust an AI recommendation, when to ask better questions, when to constrain the system, and when to override it entirely. They will need the architectural knowledge and business context to judge whether a technically correct answer is operationally safe.

AI may identify the vulnerability. The skilled human understands the system.
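One way to picture the difference between the two supervision models is a minimal sketch. This is an illustrative example only, not an HTB product or API: the `Action` type, the confidence scores, and the threshold are hypothetical, but the control flow captures the distinction: in-the-loop gates every step on a human, while on-the-loop lets high-confidence actions run and pulls the human in only when needed.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    confidence: float  # agent's self-reported confidence, 0..1 (illustrative)

def human_in_the_loop(actions, approve):
    """Every action waits for explicit human approval before it runs.
    Safe, but the human check on each step becomes a bottleneck at scale."""
    executed = []
    for a in actions:
        if approve(a):
            executed.append(a.name)
    return executed

def human_on_the_loop(actions, review, threshold=0.8):
    """High-confidence actions execute autonomously; the human supervises
    and is only pulled in to review low-confidence or flagged steps."""
    executed, escalated = [], []
    for a in actions:
        if a.confidence >= threshold:
            executed.append(a.name)      # runs without waiting for a human
        elif review(a):
            executed.append(a.name)      # human examined it and let it through
        else:
            escalated.append(a.name)     # human blocked or overrode it
    return executed, escalated

# A routine containment step proceeds on its own; a destructive,
# low-confidence step is escalated to the supervising practitioner.
actions = [Action("isolate_host", 0.95), Action("delete_logs", 0.40)]
done, blocked = human_on_the_loop(actions, review=lambda a: False)
```

In this sketch the skilled human's judgment lives in `review` and in choosing `threshold`: deciding which actions an agent may take unsupervised is exactly the architectural and business-context call the paragraph above describes.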

AI for security and security for AI

One of the key points Gerasimos raised is that organisations need to think about AI from two directions.

The first is AI for security: how cyber professionals use AI in their daily work as penetration testers, defenders, analysts, and incident responders. AI can help them become more effective and efficient. It can support learning, speed up analysis, and help practitioners explore complex environments.

The second is security for AI: how organisations test, attack, defend, and assess AI-powered systems themselves. As AI becomes embedded in products, workflows, and enterprise infrastructure, security teams need to understand new risks around AI systems, agentic behaviours, governance, orchestration, and misuse.

Both sides matter. A team that only learns how to use AI tools may miss the risks those tools introduce. A team that only studies AI threats without learning how to work effectively with AI may fall behind attackers and competitors who are already using it to move faster. The future requires both: the ability to secure AI, and the ability to use AI securely.

The skills gap is changing shape

Traditional cybersecurity education has always struggled to keep up with the pace of technology. By the time a curriculum is built, published, and taught, the tools and tactics may already have changed. AI intensifies that problem.

As Gerasimos discussed, AI is especially good at automating and streamlining lower-level tasks. These are often the same tasks where new cybersecurity professionals build foundational knowledge: basic investigation, enumeration, triage, pattern recognition, and hands-on repetition.

That creates a real challenge for the industry. If AI automates the entry-level work where practitioners traditionally learn, organisations need new ways to build the same depth of understanding. Otherwise, they risk creating teams that can operate tools, but cannot reason independently when the tools are unavailable, wrong, or insufficient.

This is why hands-on training becomes more important in the AI era, not less.

Cybersecurity has never been a purely theoretical discipline. Practitioners need to practise. They need realistic environments, ambiguous problems, adversarial pressure, and space to fail safely. They need to learn not only what an answer looks like, but how to get there, how to test assumptions, and how to recover when the first path does not work. AI can support that learning, but it cannot replace the need for it.

Readiness cannot be point-in-time anymore

Another theme from RSAC was the gap between AI claims and AI readiness.

Everywhere, vendors are talking about AI. But for security leaders, the question is not whether a tool is “AI-powered.” The question is whether teams, tools, and agents can perform when the pressure is real. That requires continuous readiness.

The threat landscape is changing too quickly for annual assessments, static learning, or one-off exercises to be enough. Models change. Agents update. Attack techniques evolve. Defensive workflows shift. A team that was prepared last year may not be ready for the way AI is being used this year.

Organisations need to benchmark and validate readiness continuously: the readiness of their people, the readiness of their AI-enabled workflows, and the readiness of the agents and tools they increasingly depend on.

This is where hands-on platforms play a critical role. They give teams a way to move beyond theory and test capability in realistic scenarios. They also give individuals a way to prove what they can do, not just list what they have studied. That has been part of the HTB story from the beginning.

From Academy and guided learning paths to Labs, Pro Labs, CTFs, defensive experiences, and talent discovery, HTB has always been built around practical proof of skill. As AI changes cybersecurity, that practical proof becomes even more important.

The future belongs to superhuman teams

The most effective organisations will not be the ones that simply automate the most work. They will be the ones that build teams capable of using AI well. That means developing what we might call agentic dexterity: the ability to work confidently alongside AI agents, delegate the right tasks, interpret outputs, validate recommendations, and maintain control when systems operate at speed.

It also means preserving the fundamentals. Networking, Linux, Active Directory, detection engineering, incident response, exploitation, threat hunting, and systems thinking do not disappear because AI enters the workflow. They become the foundation that allows practitioners to supervise AI effectively.

AI can make a skilled defender faster. It can make a capable team more productive. It can help practitioners go further than they could alone. But AI without skill creates a productivity illusion. It may look fast, but it lacks the judgment to know when the answer is wrong, incomplete, or dangerous. That is why the future of cybersecurity is not human-only or AI-only. It is skilled humans augmented by AI.

Train for the era that is arriving

Cybersecurity is entering a new phase. AI-powered attacks, AI-enabled defence, agentic workflows, and AI governance are becoming part of the same operating reality. For HTB, the mission is to help individuals and organisations prepare for that reality through hands-on, practical, continuously evolving skill development.

That means helping teams understand new AI-driven threats. It means helping practitioners use AI as part of their day-to-day work. It means giving organisations ways to benchmark human and AI readiness before a real incident exposes the gap.

AI is not the end of cyber skills. It is the reason cyber skills must evolve faster. The teams that will lead in this era will be those that can answer three questions:

  1. Can we defend against AI-enabled threats?

  2. Can our people work effectively with AI agents?

  3. Can we prove our readiness before attackers test it for us?

At RSAC 2026, the message was clear: AI is already changing cybersecurity. The next question is whether the workforce is ready. The answer will not come from automation alone. It will come from skilled people, hands-on practice, continuous validation, and human judgment on the loop. That is how cyber teams become superhuman.