AI in Cybersecurity Toolkit

Resources for cybersecurity educators

Introduction

Artificial Intelligence (AI) is rapidly reshaping the cybersecurity landscape, both as a tool for defence and as a weapon for offence. For educators, this dual role creates an urgent need to prepare students not only to harness AI for protection but also to understand how adversaries may exploit it in attacks.

On the defensive side, AI is already embedded in professional security environments, spanning every phase of the cyber incident lifecycle: prevention, preparedness, response, and recovery. It powers log analysis, anomaly detection, malware investigation, and even awareness training, giving defenders greater speed, accuracy, and scalability. In the classroom, AI also opens new teaching opportunities by automating assessments, generating case studies, simulating real-world incidents, and designing interactive exercises that help students grasp complex cybersecurity concepts more effectively.

At the same time, AI is driving a new generation of offensive cyber operations. Malicious actors are weaponising generative AI to automate reconnaissance, personalise phishing campaigns, accelerate vulnerability discovery, and deploy adaptive malware. This transformation of the cyber kill chain has intensified the scale and sophistication of attacks worldwide, from ransomware to deepfakes and swarm malware. AI is thus both an enabler of cyberattacks and itself a target of adversarial exploitation, with vulnerabilities such as data poisoning and adversarial examples posing new risks.

This toolkit was developed by Virtual Routes as part of the Cybersecurity Seminars Programme supported by Google.org, to provide teachers and students with resources in a constantly evolving field. It is based on a survey of participating universities and provides materials to help understand the impact of AI on cybersecurity, presenting its dual role as both a defensive and offensive tool.

Impacts of AI on Cybersecurity Skills

The European Cybersecurity Skills Framework (ECSF) defines twelve key professional cybersecurity roles, together with the tasks, skills, knowledge, and competencies needed across the sector. These roles range from technical functions such as threat intelligence and penetration testing to broader tasks such as risk management and education. We identified five main ways in which AI impacts the skills and competencies required for these roles:

Data analysis and threat intelligence

ECSF: Cyber Threat Intelligence Specialist; Digital Forensics Investigator

LLMs can accelerate the collection, correlation and summarisation of large volumes of threat reports, logs and indicators of compromise. Analysts are still required to validate findings, but their focus shifts from repetitive parsing to critical interpretation.
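
To make this concrete for the classroom, the short Python sketch below performs the kind of repetitive parsing that AI pipelines increasingly automate: extracting and tallying common indicators of compromise (IP addresses, SHA-256 hashes, domains) from raw log text. The log excerpt and patterns are illustrative assumptions only; a real pipeline would feed such output to an LLM or analyst for interpretation.

```python
import re
from collections import Counter

# Hypothetical log excerpt; in practice this would come from a SIEM export.
raw_logs = """
2024-03-01 10:02:11 DENY src=203.0.113.45 dst=10.0.0.7 sha256=9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
2024-03-01 10:02:14 DENY src=203.0.113.45 dst=10.0.0.9 domain=malicious-example.test
2024-03-01 10:05:40 ALLOW src=198.51.100.2 dst=10.0.0.7 domain=update.example.com
"""

# Regex patterns for three common indicator-of-compromise types.
IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "domain": re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.IGNORECASE),
}

def extract_iocs(text: str) -> dict[str, Counter]:
    """Collect and count each IoC type found in the text."""
    return {name: Counter(pat.findall(text)) for name, pat in IOC_PATTERNS.items()}

for ioc_type, counts in extract_iocs(raw_logs).items():
    for value, n in counts.most_common():
        print(f"{ioc_type:7s} {value} (seen {n}x)")
```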

Incident detection and response

ECSF: Cyber Incident Responder

AI can assist in anomaly detection, triage and initial reporting. Skills in validating alerts, contextualising incidents and deciding on proportional responses become increasingly important.
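
As a classroom illustration of AI-assisted anomaly detection, the sketch below trains scikit-learn's IsolationForest on invented login features and flags outliers for analyst triage. The features, values, and contamination setting are assumptions for demonstration, not a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Invented features per login event: [hour of day, data transferred (MB)].
normal_logins = np.column_stack([
    rng.normal(10, 2, 200),    # business-hours logins
    rng.normal(5, 1.5, 200),   # modest transfer volumes
])
suspicious = np.array([[3.0, 150.0], [2.5, 90.0]])  # night-time bulk transfers

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# predict() returns -1 for anomalies, 1 for inliers; an analyst still
# triages each flagged event before any response action is taken.
for event, label in zip(suspicious, model.predict(suspicious)):
    status = "ANOMALY - route to analyst" if label == -1 else "normal"
    print(f"hour={event[0]:.1f} mb={event[1]:.1f} -> {status}")
```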

Risk assessment and compliance

ECSF: Cyber Legal, Policy & Compliance Officer; Cybersecurity Risk Manager; Cybersecurity Auditor

AI can support automatic classification of sensitive data and preliminary risk scoring. Practitioners must apply judgement to assess whether AI-driven outputs align with regulatory and organisational requirements.
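
A minimal sketch of what automatic classification of sensitive data and preliminary risk scoring can look like in its simplest form, using regular expressions and invented weights; real systems rely on trained classifiers and far broader detector sets, and the score here is only a first pass for human review.

```python
import re

# Hypothetical detectors for two categories of sensitive data, each with
# an invented risk weight; real classifiers cover many more types.
DETECTORS = {
    "email":       (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), 2),
    "credit_card": (re.compile(r"\b(?:\d[ -]?){13,16}\b"), 5),
}

def preliminary_risk_score(document: str) -> int:
    """Sum weighted detector hits as a first-pass score for human review."""
    score = 0
    for name, (pattern, weight) in DETECTORS.items():
        hits = pattern.findall(document)
        if hits:
            print(f"found {len(hits)} {name} value(s), weight {weight}")
            score += weight * len(hits)
    return score

sample = "Contact jane.doe@example.com; card on file 4111 1111 1111 1111."
print("preliminary risk score:", preliminary_risk_score(sample))
# A practitioner must still judge whether this output aligns with
# regulatory and organisational requirements before acting on it.
```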

Secure development and code review

ECSF: Cybersecurity Implementer; Cybersecurity Architect; Penetration Tester

AI-enabled code scanning highlights insecure patterns and proposes fixes. Professionals remain responsible for enforcing secure coding practices and for mitigating AI hallucinations and false positives, since AI assistants may themselves generate insecure code.
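
The toy scanner below illustrates the pattern-matching end of AI-enabled code review: it flags a few well-known insecure Python constructs. The pattern list and messages are illustrative assumptions; real AI reviewers reason far more deeply, and every finding still needs human validation.

```python
import re

INSECURE_PATTERNS = [
    (re.compile(r"\beval\("), "eval() on untrusted input enables code injection"),
    (re.compile(r"shell\s*=\s*True"), "subprocess with shell=True risks command injection"),
    (re.compile(r"\bmd5\("), "MD5 is broken for security purposes; prefer SHA-256"),
    (re.compile(r"verify\s*=\s*False"), "disabling TLS verification exposes traffic"),
]

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line number, message) pairs for each insecure pattern found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in INSECURE_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

snippet = 'resp = requests.get(url, verify=False)\nresult = eval(user_input)'
for lineno, message in scan(snippet):
    print(f"line {lineno}: {message}")
```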

Education and awareness

ECSF: Cybersecurity Educator; Cybersecurity Researcher

LLMs enable the generation of adaptive training scenarios, synthetic datasets and automated feedback. The educator’s skillset evolves towards curating, validating and embedding AI resources responsibly into curricula.
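
As one example of generating synthetic training material, the sketch below assembles phishing-awareness scenarios from invented templates. An LLM would produce far richer variations; either way, the educator must curate and validate the output before classroom use.

```python
import random

random.seed(7)

# Template-based generator for phishing-awareness exercises. All names
# and domains are invented (including the deliberate look-alike domains).
SENDERS = ["it-support@examp1e-corp.test", "payroll@exampie-corp.test"]
PRETEXTS = [
    "Your mailbox is almost full. Verify your account within 24 hours",
    "A new payslip is available. Confirm your details to view it",
]
RED_FLAGS = ["look-alike domain", "artificial urgency", "credential request"]

def make_scenario() -> dict:
    """Assemble one training scenario with two teaching points to discuss."""
    return {
        "from": random.choice(SENDERS),
        "subject": random.choice(PRETEXTS),
        "teaching_points": random.sample(RED_FLAGS, k=2),
    }

for i in range(2):
    print(f"Scenario {i + 1}: {make_scenario()}")
```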

Core AI competencies for cybersecurity

The ubiquitous adoption of AI requires all cybersecurity professionals, whatever their role, to develop new skills that support the responsible, adaptive and effective use of AI tools. These core AI competencies extend beyond familiarity with specific tools, focusing instead on the underlying capabilities required to work effectively in this rapidly evolving environment:

1. Foundational understanding
  • AI literacy to understand the capabilities and limitations of AI, and to integrate it safely into workflows without over-reliance or misplaced trust.
  • Ethical awareness to identify risks relating to bias, privacy, accountability and security, ensuring that AI systems are deployed in ways that uphold professional and societal standards.
2. Evaluation and oversight
  • Critical evaluation to assess AI-generated outputs against trusted sources and contextual expertise, recognising when further validation or human judgement is required.
  • Explainability and transparency to interpret AI outputs, interrogate “black box” models, and communicate results clearly to both technical and non-technical stakeholders, thereby strengthening trust in AI-assisted decisions.
  • Resilience and human oversight to design safeguards against over-reliance on automation, preserving human responsibility for critical decisions.
3. Risk and data management
  • Data governance to ensure the quality, diversity and security of data used in AI systems, understanding that poor data management can introduce systemic vulnerabilities.
  • AI risk management to anticipate and mitigate AI-specific risks such as hallucinations, adversarial manipulation, insecure code generation and data poisoning, embedding these considerations within broader cyber risk frameworks.
4. Future-facing adaptability
  • Continuous learning to update skills, monitor emerging threats, and engage with new developments in AI applications for cybersecurity.
  • Scenario thinking and foresight to anticipate how advances in AI may reshape technical, organisational and strategic levels of cybersecurity, and prepare professionals to respond to future challenges proactively.
5. Collaboration and communication
  • Interdisciplinary collaboration to work effectively with experts in law, policy, psychology and ethics, recognising that the responsible use of AI requires perspectives that extend beyond purely technical domains.
  • Communication and trust-building to explain AI-enabled decisions with clarity and nuance, sustaining trust among all stakeholders involved.

AI-driven automation of repetitive or lower-value tasks has raised urgent questions about workforce transformation and potential job displacement. However, while some analyst tasks may diminish, new demands arise around supervising AI outputs, validating findings, and addressing AI-specific risks such as hallucinations, insecure code generation, or adversarial manipulation. Rather than eliminating cybersecurity roles, AI shifts the skill profile towards oversight, governance, and human-AI collaboration.

Ethical and responsible use of AI in cyber defence

By optimising time, efficiency and resources, AI enables defenders to do more with less, lowering barriers to entry and strengthening the ability to detect and respond to increasingly complex cyber threats. As cyber incidents grow in scale and sophistication, AI’s ability to process large amounts of data makes it indispensable. However, over-reliance on AI outputs introduces new vulnerabilities, particularly when those outputs are inaccurate or lack contextual understanding, raising several questions about ethics and responsible use:

Key Ethical Concerns

Bias and discrimination
AI models trained on biased or unbalanced datasets may unfairly flag certain user groups or regions as malicious. For example, cybersecurity researchers trained an intrusion detection system on historical attack data and found that it produced 30% more false positives for users in underrepresented regions, while balanced training led to fairer results. Similarly, AI may over-prioritise familiar attack types while underestimating emerging threats, creating defence gaps.
Ensure non-discriminatory outcomes by addressing algorithmic and data bias.
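One practical way to surface this kind of bias is to compare false positive rates across user groups. The sketch below does so on a handful of invented alert records; a real audit would use full evaluation datasets and established fairness metrics.

```python
from collections import defaultdict

# Invented alert records: (user_group, flagged_by_model, actually_malicious).
alerts = [
    ("region_a", True, False), ("region_a", False, False), ("region_a", True, True),
    ("region_b", True, False), ("region_b", True, False), ("region_b", True, True),
]

# False positive rate per group = flagged-but-benign / all benign events.
flagged_benign = defaultdict(int)
benign = defaultdict(int)
for group, flagged, malicious in alerts:
    if not malicious:
        benign[group] += 1
        if flagged:
            flagged_benign[group] += 1

for group in sorted(benign):
    fpr = flagged_benign[group] / benign[group]
    print(f"{group}: false positive rate = {fpr:.0%}")
# A large gap between groups is a signal to rebalance the training data.
```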
Monitoring and surveillance
AI-driven security requires large-scale monitoring of network traffic, login attempts and user behaviour, creating detailed digital footprints. This constant surveillance risks undermining user trust and raising consent issues. Moreover, long-term data retention increases the chances of breaches, and cloud-based processing raises questions over cross-border data governance.
Safeguard personal and organisational data, respect consent, and minimise unnecessary collection.
Autonomous decision-making and unintended consequences
Automated measures such as account lockouts, IP blocking or network shutdowns may have unacceptable false positive or false negative rates, especially when automated decisions are not sufficiently informed by relevant context. In an experiment conducted by cybersecurity researchers, AI-based systems successfully blocked 92% of threats but wrongly flagged 8% of legitimate activity as malicious. Such errors risk disrupting critical services, for instance in finance or healthcare, and complicate accountability for harm caused.
Maintain human-in-the-loop mechanisms and clearly assign responsibility for AI-driven outcomes.
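A minimal human-in-the-loop gate might look like the sketch below, which assumes the detection model exposes a per-alert confidence score (the scores and threshold here are invented): only high-confidence detections are acted on automatically, and everything else is routed to an analyst, preserving human responsibility for ambiguous decisions.

```python
# Invented threshold; in practice it would be tuned against the cost of
# wrongly blocking legitimate activity in the specific environment.
AUTO_BLOCK_THRESHOLD = 0.95

def handle_alert(alert_id: str, confidence: float) -> str:
    """Auto-block only highly confident detections; queue the rest for review."""
    if confidence >= AUTO_BLOCK_THRESHOLD:
        return f"{alert_id}: auto-blocked (confidence {confidence:.2f})"
    return f"{alert_id}: queued for analyst review (confidence {confidence:.2f})"

for alert, score in [("alert-001", 0.99), ("alert-002", 0.72)]:
    print(handle_alert(alert, score))
```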
Opacity of AI models
Many AI systems function as “black boxes”, providing little insight into how they reach conclusions. In cybersecurity, this lack of explainability can make it difficult for analysts to understand why legitimate traffic is flagged or why certain threats are prioritised, which can undermine trust and delay effective responses.
Make AI decision-making processes clear and interpretable for stakeholders.
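As a small step beyond the black box, the sketch below trains a random forest on invented traffic features and prints its global feature importances, giving analysts a first answer to "why is this traffic being flagged?". Fuller explainability approaches (such as SHAP or counterfactual explanations) go further; the data and features here are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented traffic features: [packets/sec, failed logins, payload entropy].
FEATURES = ["packets_per_sec", "failed_logins", "payload_entropy"]
X = rng.normal(size=(300, 3))
# Synthetic labels: "malicious" when failed logins and entropy are high.
y = ((X[:, 1] + X[:, 2]) > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global importances show which features drive the model's decisions,
# one step towards interpretable, explainable detection.
for name, importance in sorted(
    zip(FEATURES, model.feature_importances_), key=lambda p: -p[1]
):
    print(f"{name:16s} {importance:.2f}")
```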
To put these principles into practice, organisations can draw on a combination of regulatory and technical tools to improve the trustworthiness of AI systems:
Regulatory measures include compliance frameworks such as the EU Artificial Intelligence Act (AI Act), which introduces risk-based obligations, fundamental rights impact assessments and accountability mechanisms for high-risk AI systems. Other regulatory measures include algorithmic impact assessments to evaluate risks before deployment, compliance with data protection laws such as the GDPR and CCPA, and accountability frameworks that allocate liability for AI-related errors. The development and adoption of internationally recognised standards and certifications provide additional compliance tools that help operationalise legal obligations, promote trust, and, to a certain extent, drive innovation by giving organisations the opportunity to experiment with product development within pre-determined guardrails.
Technical solutions include fairness-aware machine learning methods, bias detection and mitigation techniques, privacy-enhancing technologies such as encryption and anonymisation, as well as explainable AI approaches that make decision-making processes more transparent. Human-in-the-loop oversight and continuous monitoring of models further ensure that automated systems remain accurate, ethical and aligned with organisational and societal values.
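
As a concrete example of one such privacy-enhancing technique, the sketch below pseudonymises IP addresses in log lines with keyed hashing, so events can still be correlated per source without exposing raw addresses. The key and log line are placeholders; in production the key would come from a secrets manager.

```python
import hashlib
import hmac
import re

# Placeholder key: replace with a value from a secrets manager in practice.
SECRET_KEY = b"demo-key-replace-in-production"
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def pseudonymise(line: str) -> str:
    """Replace each IP with a stable keyed hash, keeping events correlatable."""
    def _replace(match: re.Match) -> str:
        digest = hmac.new(SECRET_KEY, match.group().encode(), hashlib.sha256)
        return "ip_" + digest.hexdigest()[:12]
    return IP_RE.sub(_replace, line)

print(pseudonymise("2024-03-01 10:02:11 login failed from 203.0.113.45"))
```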

Survey methods and data

Virtual Routes conducted an online survey of 27 participating educators from universities across Europe. The questionnaire aimed to determine whether they currently use AI in their cybersecurity teaching, how they use it, their reasons for doing so, the specific tools and tasks involved, and whether they apply AI in the context of cybersecurity support provided to local community organisations (LCOs). Although the sample is too small to be statistically representative, the responses provide insight into current practices and expectations, highlighting both the opportunities and challenges associated with integrating AI into cybersecurity education. The survey was supplemented by follow-up interviews to reach a better understanding of practical use cases.

A few key takeaways can be highlighted:

Early but growing use of AI

Of the 27 respondents, most (22) said they were already experimenting with AI tools in their teaching, particularly in the cybersecurity seminars supported by Google.org. However, adoption is still in its early stages and is often limited to specific tasks rather than systematic integration. Five respondents indicated that they were not yet using AI.

Varied use cases

The most common applications involve general writing and data collection/analysis tasks (using common LLMs), data synthesis, and specialised cybersecurity tasks such as anomaly detection, attack surface mapping, malware analysis, and hands-on labs.

Motivations for adoption

Cybersecurity educators primarily use AI to help students prepare for the workplace, to support self-directed learning and self-assessment, and to save time on pedagogical tasks such as exercise creation, grading, and content generation. Many also see the value of using AI to illustrate key cybersecurity concepts and scenarios.

Application in community services

Approximately half of respondents (15 out of 27) stated that they were already using or planned to use AI to provide cybersecurity support to local community organisations (LCOs), a key aspect of the Google.org Cybersecurity Seminars. This demonstrates a growing link between the exploration of AI tools in the classroom and their application in real-world community contexts.

Educators’ needs and expectations

Several respondents expressed interest in receiving guidance and examples on how to effectively integrate AI into cybersecurity education. They highlighted the need for best practices, shared resources, and case studies to move from experimentation to more structured and effective use of AI in teaching and service delivery.

How to Navigate the Toolkit

This toolkit is organised into two parts, reflecting the dual role of AI in cybersecurity.

AI in cyber defence

How AI changes cyber defence across the cyber incident lifecycle:

The first part examines the impact of AI on cyber defence, outlining how AI tools support defenders across the incident lifecycle of prevention, preparedness, response, and recovery. It highlights concrete applications such as attack surface mapping, anomaly detection, and secure code development, and illustrates these with case studies and references for further study.

AI in cyber offence

How AI changes the Cyber Kill Chain:

The second part addresses the impact of AI on cyber offence, focusing on how AI reshapes the Cyber Kill Chain. It considers how AI enables attackers to automate and enhance stages such as reconnaissance, weaponisation, and delivery, while also introducing novel forms of attack. Case examples and further readings provide context for understanding these developments.

Glossary of terms

How You Can Contribute

Are you using an open-source AI solution to train students on cybersecurity, or do you have other publicly available resources to share for teaching about AI and cybersecurity (whether AI-based or not)?

We’d love to hear from you. Please email us at co*****@vi************.org; we’ll share your contributions with the wider community and ensure this toolkit is kept up to date.
