AI in Cybersecurity Toolkit
Resources for cybersecurity educators
Introduction
Artificial Intelligence (AI) is rapidly reshaping the cybersecurity landscape, both as a tool for defense and as a weapon for offense. For educators, this dual role creates an urgent need to prepare students not only to harness AI for protection but also to understand how adversaries may exploit it in attacks.
On the defensive side, AI is already embedded in professional security environments, impacting all the different phases of the cyber incident lifecycle (e.g., prevention, preparedness, response and recovery). It powers log analysis, anomaly detection, malware investigation, and even awareness training, giving defenders greater speed, accuracy, and scalability. In the classroom, AI also opens new teaching opportunities by automating assessments, generating case studies, simulating real-world incidents, and designing interactive exercises that help students grasp complex cybersecurity concepts more effectively.
At the same time, AI is driving a new generation of offensive cyber operations. Malicious actors are weaponising generative AI to automate reconnaissance, personalise phishing campaigns, accelerate vulnerability discovery, or deploy adaptive malware. This transformation of the cyber kill chain has intensified the scale and sophistication of attacks worldwide, from ransomware to deepfakes and swarm malware. AI is thus both an enabler of cyberattacks and itself a target of adversarial exploitation, with vulnerabilities such as data poisoning and adversarial examples posing new risks.
This toolkit was developed by Virtual Routes as part of the Cybersecurity Seminars Programme supported by Google.org, to provide teachers and students with resources in a constantly evolving field. It is based on a survey of participating universities and provides materials to help understand the impact of AI on cybersecurity, presenting its dual role as both a defensive and offensive tool.
Impacts of AI on Cybersecurity Skills
The European Cybersecurity Skills Framework (ECSF) defines twelve key professional cybersecurity roles, together with the tasks, skills, knowledge, and competences needed across the sector. These roles range from technical functions such as threat intelligence and penetration testing to broader tasks such as risk management and education. We identified five main ways in which AI impacts the skills and competencies required for these roles:
Data analysis and threat intelligence
LLMs can accelerate the collection, correlation and summarisation of large volumes of threat reports, logs and indicators of compromise. Analysts are still required to validate findings, but their focus shifts from repetitive parsing to critical interpretation.
Incident detection and response
AI can assist in anomaly detection, triage and initial reporting. Skills in validating alerts, contextualising incidents and deciding on proportional responses become increasingly important.
Risk assessment and compliance
AI can support automatic classification of sensitive data and preliminary risk scoring. Practitioners must apply judgement to assess whether AI-driven outputs align with regulatory and organisational requirements.
Secure development and code review
AI-enabled code scanning highlights insecure patterns and proposes fixes. Professionals remain responsible for ensuring secure coding practices and for mitigating the risk of AI hallucinations or false positives as AI may also create insecure code.
Education and awareness
LLMs enable the generation of adaptive training scenarios, synthetic datasets and automated feedback. The educator’s skillset evolves towards curating, validating and embedding AI resources responsibly into curricula.
Core AI competencies for cybersecurity
The ubiquitous adoption of AI requires all cybersecurity professionals, whatever their role, to develop new skills that contribute to the responsible, adaptive and effective use of AI tools. These core AI competences extend beyond familiarity with specific tools, and instead focus on the underlying capabilities required to work effectively in this rapidly evolving environment:
- AI literacy to understand the capabilities and limitations of AI, and to integrate it safely into workflows without over-reliance or misplaced trust.
- Ethical awareness to identify risks relating to bias, privacy, accountability and security, ensuring that AI systems are deployed in ways that uphold professional and societal standards.
- Critical evaluation to assess AI-generated outputs against trusted sources and contextual expertise, recognising when further validation or human judgement is required.
- Explainability and transparency to interpret AI outputs, interrogate “black box” models, and communicate results clearly to both technical and non-technical stakeholders, thereby strengthening trust in AI-assisted decisions.
- Resilience and human oversight to design safeguards that prevent over-reliance on automation, ensuring robust safeguards and preserving human responsibility for critical decisions.
- Data governance to ensure the quality, diversity and security of data used in AI systems, understanding that poor data management can introduce systemic vulnerabilities.
- AI risk management to anticipate and mitigate AI-specific risks such as hallucinations, adversarial manipulation, insecure code generation and data poisoning, embedding these considerations within broader cyber risk frameworks.
- Continuous learning to update skills, monitor emerging threats, and engage with new developments in AI applications for cybersecurity.
- Scenario thinking and foresight to anticipate how advances in AI may reshape technical, organisational and strategic levels of cybersecurity, and prepare professionals to respond to future challenges proactively.
- Interdisciplinary collaboration to work effectively with experts in law, policy, psychology and ethics, recognising that the responsible use of AI requires perspectives that extend beyond purely technical domains.
- Communication and trust-building to explain AI-enabled decisions with clarity and nuance, sustaining trust among all stakeholders involved.
AI-driven automation of repetitive or lower-value tasks has raised urgent questions about workforce transformation and potential job displacement. However, while some analyst tasks may diminish, new demands arise around supervising AI outputs, validating findings, and addressing AI-specific risks such as hallucinations, insecure code generation, or adversarial manipulation. Rather than eliminating cybersecurity roles, AI shifts the skill profile towards oversight, governance, and human-AI collaboration.
Ethical and responsible use of AI in cyber defence
By optimising time, efficiency and resources, AI enables defenders to do more with less, lowering barriers to entry and strengthening the ability to detect and respond to increasingly complex cyber threats. As cyber incidents grow in scale and sophistication, AI’s ability to process large amounts of data makes it indispensable. However, over-reliance on AI outputs introduces new vulnerabilities, particularly when those outputs are inaccurate or lack contextual understanding, raising several questions about ethics and responsible use:
Key Ethical Concerns
- Key Principles: Fairness
- Key Principles: Privacy and Data Protection
- Key Principles: Transparency and Explainability
- Key Principles: Transparency and Explainability
Regulatory measures
Technical solutions
Survey methods and data
Virtual Routes conducted an online survey of 27 participating educators from universities across Europe. The questionnaire aimed to determine whether they currently use AI in their cybersecurity teaching, how they use it, their reasons for doing so, the specific tools and tasks involved, and whether they apply AI in the context of cybersecurity support provided to local community organisations (LCOs). Although not statistically significant, the responses provide insight into current practices and expectations, highlighting both the opportunities and challenges associated with integrating AI into cybersecurity education. The survey was supplemented by follow-up interviews to reach a better understanding of practical use cases.
A few key takeaways can be highlighted:
Of the 27 respondents, most (22) said they were already experimenting with AI tools in their teaching, particularly in cybersecurity seminars organised by Google.org. However, adoption is still in its early stages and is often limited to specific tasks rather than systematic integration. Five respondents indicated that they were not yet using AI.
The most common applications involve general writing and data collection/analysis tasks (using common LLMs), data synthesis, and specialised cybersecurity tasks such as anomaly detection, attack surface mapping, malware analysis, and hands-on labs.
Cybersecurity educators primarily use AI to help students prepare for the workplace, to support self-directed learning and self-assessment, and to save time on pedagogical tasks such as exercise creation, grading, and content generation. Many also see the value of using AI to illustrate key cybersecurity concepts and scenarios.
Approximately half of respondents (15 out of 27) stated that they were already using or planned to use AI to provide cybersecurity support to local community organisations (LCOs), a key aspect of the Google.org Cybersecurity Seminars. This demonstrates a growing link between the exploration of AI tools in the classroom and their application in real-world community contexts.
Several respondents expressed interest in receiving guidance and examples on how to effectively integrate AI into cybersecurity education. They highlighted the need for best practices, shared resources, and case studies to move from experimentation to more structured and effective use of AI in teaching and service delivery.
How to Navigate the Toolkit
This toolkit is organised into two parts, reflecting the dual role of AI in cybersecurity.
AI in cyber defence
How AI changes cyber defence across the cyber incident lifecycle:
The first part examines the impact of AI on cyber defence, outlining how AI tools support cyber defense across the incident lifecycle of prevention, preparedness, response, and recovery. It highlights concrete applications such as attack surface mapping, anomaly detection, and secure code development, and illustrates these with case studies and references for further study.
AI in cyber offence
How AI changes the Cyber Kill Chain:
The second part addresses the impact of AI on cyber offence, focusing on how AI reshapes the Cyber Kill Chain. It considers how AI enables attackers to automate and enhance stages such as reconnaissance, weaponisation, and delivery, while also introducing novel forms of attack. Case examples and further readings provide context for understanding these developments.
Glossary of terms
- Adversarial AI: a set of techniques where attackers manipulate AI models (e.g., bypassing detection systems, poisoning training data).
- Adversarial attacks: data modified to deceive AI systems (e.g., slightly altered malware to evade AI-based antivirus software).
- Adversarial examples: malicious data designed to mislead AI models (e.g., distorted images or text).
- Alignment: ensuring that the objectives of AI systems are consistent with human intentions.
- Artificial intelligence (AI): a field of computer science that aims to create systems capable of performing tasks that require human intelligence, such as threat detection, abnormal behaviour analysis, or automated response to cybersecurity incidents.
- AI explainability (XAI): methods that make AI decisions understandable to humans.
- AI safety: ensuring that AI systems behave as expected and do not introduce new vulnerabilities.
- Bias in AI: systemic errors in AI results caused by unbalanced or biased training data.
- Data poisoning: a type of adversarial attack where malicious or corrupted data is inserted into training datasets to degrade model performance or introduce vulnerabilities.
- Deep learning (DL):a type of machine learning that uses multi-layer neural networks to analyse complex data (e.g., images, network logs), often used for malware classification or intrusion detection. Neural networks help process raw data at the heart of DL algorithms, helping to identify, classify and improve hidden correlations and patterns in raw data (neural networks include artificial neural networks, convolutional neural networks and recurrent neural networks depending on applications).
- Distribution shift: risk that AI models become ineffective if real-world data differs from training data.
- Ethical AI: principles ensuring that AI is used fairly, responsibly and transparently in cybersecurity.
- Fine-tuning: the process of adapting a pre-trained model to a specific task or dataset, often requiring less data and computational resources than training from scratch.
- Machine learning (ML): a subset of AI in which systems learn from data to identify patterns (e.g., malware signatures, suspicious network traffic) and make decisions or predictions.
- Model: a mathematical representation of a system trained on data to perform tasks such as classification, prediction, or generation in AI and machine learning.
- Model weights: numerical parameters learned during training that determine how an AI model processes inputs to produce outputs. Adjusting weights allows the model to recognise patterns and make accurate predictions.
- Prompt injection: a technique used to manipulate large language models by inserting crafted instructions into inputs, causing the model to ignore or override its original task and produce unintended outputs.
- Reinforcement learning: a method in which AI learns to make optimal decisions through trial and error in simulated environments.
- Robustness: the ability of AI to function reliably under changing conditions (for example when faced with new adversaries).
- Supervised learning: a method that involves training models using labelled data for classification.
- Synthetic data: artificially generated data used to train AI models when real-world data is scarce or sensitive.
- Unsupervised learning: a method for identifying hidden patterns in unlabelled data.
How You Can Contribute
Are you using an open-source AI solution to train students on cybersecurity, or do you have other publicly available resources to share for teaching about AI and cybersecurity (whether AI-based or not)?
We’d love to hear from you. Please email us at
co*****@vi************.org
, we’ll share your contributions with the wider community and ensure this toolkit is kept up to date.