AI in Cyber Offence

How AI changes the Cyber Kill Chain

Offensive cyber operations are deliberate actions conducted in cyberspace to infiltrate, disrupt, or destroy adversary systems in pursuit of strategic objectives. They are commonly framed through the Cyber Kill Chain, a framework originally developed by Lockheed Martin. The framework breaks down an attack into a structured sequence of phases, tracing an adversary’s progression from initial reconnaissance to the final actions taken to reach the objectives (e.g., data exfiltration or data destruction).

The seven stages of the Cyber Kill Chain are:

  1. Reconnaissance: gathering information about the target, such as employee details, emails, or system data
  2. Weaponisation: coupling an exploit with a backdoor into a deliverable payload
  3. Delivery: delivering the weaponised bundle to the victim via email, web, USB, etc.
  4. Exploitation: exploiting a vulnerability to execute code on the victim’s system
  5. Installation: installing malware on the asset
  6. Command & Control (C2): establishing a command channel for remote manipulation of the victim
  7. Actions on Objectives: with ‘hands on keyboard’ access, intruders accomplish their original goals

In recent years, offensive cyber operations have intensified in both volume and complexity. Global cyberattacks are not only increasing sharply but also diversifying in type: in 2022, 27% of global cyberattacks were extortion-based, 21% involved backdoors, and 17% ransomware. Artificial intelligence (AI) is playing a major role in this escalation and diversification, enabling new forms of attack such as deepfakes and swarm malware while strengthening traditional vectors like phishing and vulnerability exploitation. According to the CFO Global Survey, a striking 85% of cybersecurity professionals attribute the rise in attacks to the weaponisation of generative AI. A state report from Bengaluru, India, echoed this trend: by early 2025, 80% of phishing emails there were AI-generated.

AI is transforming the Cyber Kill Chain itself, and it has the potential to supercharge every stage of offensive cyber campaigns. The speed and scale at which AI reshapes this chain has become a pressing national security concern.

This toolkit focuses specifically on AI as an attack enabler, exploring how it transforms the different stages of the Cyber Kill Chain.

Reconnaissance

The attacker gathers information about the target, such as employee details, emails, or system data, to plan their attack.

How AI changes reconnaissance:

AI automates and speeds up open-source intelligence gathering by processing large volumes of public data (social media, corporate sites, leaked records) and by extracting structured artefacts such as subdomains, likely IP ranges and employee profiles. It also lowers the skill barrier for targeted social engineering by producing concise victim profiles suitable for spear-phishing.

Extent of impact: High. Automated OSINT substantially reduces time and expertise required.
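The artefact-extraction step described above can be sketched without any AI at all; what generative models change is the scale and the natural-language interface. A minimal illustration, using an invented text blob and the placeholder domain examplecompany.com:

```python
import re

# Toy OSINT step: pull structured artefacts (subdomains, e-mail addresses)
# out of a blob of public text. The text and domain are fabricated examples.
PUBLIC_TEXT = """
Contact sales@examplecompany.com or visit portal.examplecompany.com.
Careers: hr@examplecompany.com | vpn.examplecompany.com (staff only)
"""

def extract_artefacts(text: str, domain: str):
    dom = re.escape(domain)
    # Hostnames one label deeper than the target domain.
    subdomains = sorted(set(re.findall(rf"\b([\w-]+\.{dom})\b", text)))
    # Addresses at the target domain, candidates for spear-phishing profiles.
    emails = sorted(set(re.findall(rf"\b[\w.+-]+@{dom}\b", text)))
    return subdomains, emails

subs, mails = extract_artefacts(PUBLIC_TEXT, "examplecompany.com")
print(subs)   # ['portal.examplecompany.com', 'vpn.examplecompany.com']
print(mails)  # ['hr@examplecompany.com', 'sales@examplecompany.com']
```

An LLM-assisted pipeline performs the same kind of extraction across thousands of pages and then summarises the result conversationally, which is where the time saving comes from.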

Case highlighted: ChatGPT as a reconnaissance assistant

In 2023, cybersecurity researcher Sheetal Temara published a paper demonstrating how large language models such as ChatGPT can greatly accelerate the reconnaissance phase of an attack. Rather than spending hours writing scripts and manually collecting open-source intelligence, the researcher used a short series of conversational prompts, for example: “List all subdomains you can find for examplecompany.com,” “Summarise the company’s network topology based on publicly available information,” and “Identify what operating systems and services are most likely running on these servers.”

Within minutes, the model produced useful reconnaissance material, including:

  • a list of domains and subdomains associated with the target company
  • likely IP address ranges
  • notes on SSL/TLS configurations, potential open ports and common services
  • public employee information (from LinkedIn and press releases) that could be used for spear-phishing.

Where OSINT collection would normally require hours or days of manual work, the experiment reduced the task to a conversational workflow that demanded far less technical expertise. The study therefore underscores how generative models can lower the barrier to automated reconnaissance, with clear implications for defensive practice and threat modelling.

Further readings

Weaponisation

The attacker uses the information uncovered during reconnaissance to build or customise a malicious payload (e.g., malware or exploits) and exploit the target’s weaknesses.

How AI changes weaponisation:

AI streamlines the creation and tuning of malicious payloads by generating or modifying code and by testing variants against detection models. This can produce more discreet, adaptive and targeted payloads, including polymorphic variants that alter their appearance with each execution. Adversarial testing can be used to refine payloads prior to deployment.

Extent of impact: High. Automation accelerates and scales payload development.
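A harmless toy can illustrate why the polymorphic variants mentioned above undermine signature-based detection: two functionally identical snippets stop matching byte-for-byte once identifiers and junk comments are randomised. All names here are invented and the “payload” is a benign string:

```python
import hashlib
import random

# Template for a trivially benign function; {fn} and {junk} vary per build.
BENIGN_TEMPLATE = "def {fn}():\n    # {junk}\n    return 'hello'\n"

def make_variant(seed: int) -> str:
    """Emit a functionally identical variant with randomised surface form."""
    rng = random.Random(seed)
    return BENIGN_TEMPLATE.format(
        fn="f_" + "".join(rng.choices("abcdefgh", k=8)),  # renamed identifier
        junk=hex(rng.getrandbits(64)),                    # junk comment
    )

v1, v2 = make_variant(1), make_variant(2)
h1 = hashlib.sha256(v1.encode()).hexdigest()
h2 = hashlib.sha256(v2.encode()).hexdigest()
print(h1 != h2)  # same behaviour, different hash, so a hash signature misses one
```

Classic antivirus signatures match exact byte patterns or hashes, so each regenerated variant starts from a clean slate; AI tooling automates exactly this regenerate-and-test loop.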

Case highlighted: AI-generated malware dropper in the wild

In 2024, cybersecurity analysts identified a phishing campaign that initially appeared routine: a series of emails distributing a conventional malware payload. However, closer inspection of the dropper (i.e. the small programme responsible for installing and activating the primary malware) revealed an unusual feature.

The structure and syntax of the dropper indicated that it had been generated by a large language model rather than authored by a human programmer. Although it functioned as a simple wrapper, the AI-produced dropper was both polished and effective, demonstrating an ability to evade traditional detection methods. It successfully bypassed basic antivirus signatures and delivered the malware as intended.

This finding was notable as one of the first confirmed instances of AI-generated malicious code being deployed in the wild. While the underlying malware was not novel, the outsourcing of part of the weaponisation process to AI marked a significant development. It demonstrated how attackers could scale operations, reduce development costs, and adapt more quickly, while simultaneously complicating detection and response efforts.

Further readings

Delivery

The attacker launches the attack by transmitting the malicious payload to the target, often via phishing emails, fake websites, or insecure networks.

How AI changes delivery:

AI tailors and times delivery mechanisms to maximise success. It automates the generation of convincing phishing content, real-time deepfakes, adaptive chat interactions and realistic fraudulent web pages, and it uses reconnaissance data to choose the optimal moment and channel for delivery. This reduces the need for human skill in executing campaigns.

Extent of impact: High. AI markedly increases the persuasiveness and automation of delivery.

Case highlighted: Deepfake CEO scam at Arup

In 2024, staff at the UK engineering firm Arup received what appeared to be a legitimate video call from their regional Chief Executive Officer. The executive urgently requested the transfer of funds in connection with a confidential transaction. The individual on screen replicated the CEO’s appearance, voice, and mannerisms with remarkable accuracy.

In reality, the caller was not the executive but a deepfake generated through AI, designed to imitate him in real time. Convinced of the authenticity of the interaction, staff authorised a sequence of transfers amounting to nearly 25 million US dollars.

This incident stands as one of the largest reported cases of AI-enabled social engineering during the delivery phase of a cyberattack. It illustrates that phishing need no longer depend on poorly crafted emails or dubious links. Instead, AI now enables the deployment of highly realistic audio and video impersonations that circumvent not only technical controls but also human judgement and trust.

Further readings

Exploitation

The attacker triggers the payload to exploit a vulnerability and gain unauthorised access to the target system. After infiltrating the organisation, the attacker uses this access to move laterally between systems to find relevant information (e.g., sensitive data, additional vulnerabilities, email servers, etc.) and harm the organisation.

How AI changes exploitation:

AI assists attackers in identifying, understanding and exploiting system weaknesses by automating vulnerability discovery (for example, intelligent fuzzing and guided scanning), constructing attack trees and proposing exploitation paths. It can also generate adversarial inputs that bypass security tools or exploit defences.

Extent of impact: Medium. AI improves discovery speed and effectiveness, especially against complex systems.
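Automated vulnerability discovery builds on loops like the random fuzzer sketched below; AI-guided fuzzers replace the blind input generator with a learned one, but the structure is the same. The “parser” and its bug are fabricated for illustration:

```python
import random

def toy_parser(data: bytes) -> int:
    # Deliberately buggy stand-in: one header pattern is unhandled.
    if len(data) > 3 and data[0] == 0xFF and data[1] >= 0x80:
        raise ValueError("unhandled record type")
    return len(data)

def fuzz(trials: int = 20000, seed: int = 0):
    """Throw random inputs at the parser until one crashes it."""
    rng = random.Random(seed)
    for i in range(trials):
        data = bytes(rng.randrange(256) for _ in range(4))
        try:
            toy_parser(data)
        except ValueError:
            return i, data  # first crashing input found
    return None

result = fuzz()
print(result)  # (trial_index, crashing_bytes) once the unhandled pattern is hit
```

Intelligent fuzzers add feedback (coverage, crash triage, learned input grammars) so that each trial is informed by the previous ones rather than drawn blindly, which is what makes them effective against complex targets.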

Case highlighted: The Morris II AI worm

In 2024, researchers demonstrated a novel form of self-propagating worm that did not rely on exploiting conventional software vulnerabilities. Instead, it targeted generative AI systems themselves.

Named Morris II in reference to the notorious 1988 Morris Worm, this proof-of-concept attack employed adversarial prompts to manipulate AI models into reproducing and distributing malicious instructions. Once a system was “infected”, the worm could autonomously generate further prompts that induced the AI to replicate the attack and transmit it to other models.

Unlike traditional worms, which typically exploit unpatched code, Morris II spread by exploiting the openness and unpredictability of generative AI behaviour. The demonstration underscored that as organisations increasingly embed generative AI into operational workflows, they may expose novel attack surfaces where the vulnerability lies not in source code but in training data and model responses.

Further readings

Installation

The attacker installs malware or backdoors to maintain (hidden) persistent access and control inside the target system.

How AI changes installation:

AI can produce adaptive persistence techniques and suggest the most effective installation vectors by analysing prior stages’ data, but full automation of the nuanced, decision-heavy installation phase remains limited. Where applied, AI enables malware to modify behaviour to avoid detection and to select optimal timing and entry points.

Extent of impact: Medium. AI improves persistence and stealth, but full automation remains limited because installation demands contextual decisions.

Case highlighted: Ransomware that learns to hide

In 2023, researchers introduced a system known as EGAN, an AI model developed to explore how ransomware might employ learning strategies to evade detection. Unlike traditional static malware, which is either identified or overlooked, EGAN operated through iterative experimentation.

The system repeatedly modified the ransomware code, testing successive variants until it produced one that could bypass antivirus defences while retaining full functionality. In effect, the malware “learned” how to circumvent anomaly-based detection mechanisms that are normally effective at identifying suspicious behaviour.

Although created within a research environment, EGAN demonstrated how AI-driven persistence mechanisms could render ransomware significantly more difficult to detect and eradicate once deployed. Rather than depending on predefined evasion techniques, the malware adapted dynamically, raising the prospect of near-“unkillable” malicious software.

Further readings

Command and control

After gaining control of multiple systems, the attacker creates a control centre to exploit them remotely, establishing communication with the compromised systems via different channels (e.g., web, DNS, or email) to direct operations and evade detection. Techniques such as obfuscation help cover the attacker’s tracks, while denial-of-service (DoS) attacks may be used to distract security professionals from the true objectives.

How AI changes command and control (C2):

AI enables more covert C2 communications by generating traffic that mimics legitimate activity, designing evasive domain-generation algorithms and orchestrating decentralised, adaptive botnets. It can also tune C2 behaviour to evade anomaly detectors.

Extent of impact: Medium. AI increases C2 sophistication and resilience, but operational constraints limit widespread adoption.
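A domain-generation algorithm (DGA), one of the evasion techniques mentioned above, can be sketched in a few lines. The same determinism that lets malware rendezvous with its operator also lets defenders who recover the seed predict and sinkhole the domains; the seed and TLD below are placeholders:

```python
import hashlib
from datetime import date

def dga(seed: str, day: date, count: int = 3) -> list[str]:
    """Derive the day's rendezvous domains from a shared seed and the date."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}:{day.isoformat()}:{i}".encode()).hexdigest()
        # Hash prefix becomes a pseudo-random label; ".example" is a reserved TLD.
        domains.append(digest[:12] + ".example")
    return domains

today_domains = dga("toy-seed", date(2024, 1, 1))
print(today_domains)  # same seed + date always yields the same list
```

AI-designed DGAs aim to make the generated labels statistically resemble legitimate domains, which defeats detectors trained on the high-entropy gibberish that classic DGAs (like this one) produce.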

Case highlighted: AI-coordinated botnets, swarms with a mind of their own

In 2023, researchers demonstrated a novel form of botnet powered by AI. Conventional botnets typically rely on a central command-and-control (C2) server through which a single hub issues instructions that compromised machines, or “bots”, then execute. This architecture, however, can often be disrupted once defenders identify and disable the central server.

The AI-enabled botnet adopted a different model. Each node in the network employed reinforcement learning to autonomously determine when to initiate attacks, which targets to pursue, and how to adapt tactics in response to defensive measures. Rather than awaiting centralised instructions, the bots collaborated in a decentralised manner, functioning as a form of self-organising hive.

This design rendered the botnet more resilient and more difficult to detect. Even if some nodes were neutralised, the remainder could adapt and continue operating. For defenders, the task was no longer limited to disrupting a single server but instead required countering a distributed, adaptive swarm of compromised machines.

Further readings

Action on objectives

The attacker executes their ultimate goal, such as data exfiltration, data encryption or data destruction.

How AI changes action on objectives:

AI accelerates and refines the final tasks of an attack: automated data exfiltration, prioritisation of high-value assets, tailored extortion messaging and large-scale content generation for disinformation or disruption. Final strategic decisions often still require human judgement, but AI shortens the path to those decisions.

Extent of impact: Medium. AI expedites and scales objective-oriented activity but does not wholly replace human intent.

Case highlighted: PromptLocker, an AI-driven ransomware orchestration

In 2025, researchers at New York University introduced PromptLocker, a proof-of-concept ransomware system controlled by a large language model. Unlike conventional ransomware, which follows predefined behaviours, PromptLocker made decisions in real time and automated multiple stages of the attack lifecycle. In the demonstration the model autonomously:

  • selected the most valuable targets within a compromised system,
  • exfiltrated sensitive data prior to encryption, increasing leverage over victims,
  • encrypted volumes and files to deny access, and
  • generated tailored ransom notes, adjusting tone and demands to the victim’s profile (for example, financial capacity and sector).

Although the work was carried out in a controlled research environment, PromptLocker illustrated how generative AI can automate and scale tasks that previously required human planning, thereby accelerating attackers’ ability to achieve their objectives and adapt to changing circumstances.

Further readings

Discussion Questions

Bibliography

‘A Pro-Russia Disinformation Campaign Is Using Free AI Tools to Fuel a “Content Explosion” | WIRED’. Accessed 19 September 2025. https://www.wired.com/story/pro-russia-disinformation-campaign-free-ai-tools/. 

‘AI-Powered PromptLocker Ransomware Is Just an NYU Research Project — the Code Worked as a Typical Ransomware, Selecting Targets, Exfiltrating Selected Data and Encrypting Volumes | Tom’s Hardware’. Accessed 19 September 2025. https://www.tomshardware.com/tech-industry/cyber-security/ai-powered-promptlocker-ransomware-is-just-an-nyu-research-project-the-code-worked-as-a-typical-ransomware-selecting-targets-exfiltrating-selected-data-and-encrypting-volumes. 

Al-Karaki, Jamal, Muhammad Al-Zafar Khan, and Marwan Omar. ‘Exploring LLMs for Malware Detection: Review, Framework Design, and Countermeasure Approaches’. arXiv:2409.07587. Version 1. Preprint, arXiv, 11 September 2024. https://doi.org/10.48550/arXiv.2409.07587. 

Anderson, Hyrum S., Anant Kharkar, Bobby Filar, David Evans, and Phil Roth. ‘Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning’. arXiv:1801.08917. Preprint, arXiv, 30 January 2018. https://doi.org/10.48550/arXiv.1801.08917. 

Anis, Fatima, and Mohammad Hammoudeh. ‘Weaponizing AI in Cyberattacks A Comparative Study of AI Powered Tools for Offensive Security’. Proceedings of the 8th International Conference on Future Networks & Distributed Systems, ACM, 11 December 2024, 283–90. https://doi.org/10.1145/3726122.3726164. 

Buchanan, Ben. ‘A National Security Research Agenda for Cybersecurity and Artificial Intelligence’. Center for Security and Emerging Technology, 2020. https://cset.georgetown.edu/publication/a-national-security-research-agenda-for-cybersecurity-and-artificial-intelligence/. 

Cohen, Stav, Ron Bitton, and Ben Nassi. ‘Here Comes The AI Worm: Unleashing Zero-Click Worms That Target GenAI-Powered Applications’. arXiv:2403.02817. Version 1. Preprint, arXiv, 5 March 2024. https://doi.org/10.48550/arXiv.2403.02817. 

Commey, Daniel, Benjamin Appiah, Bill K. Frimpong, Isaac Osei, Ebenezer N. A. Hammond, and Garth V. Crosby. ‘EGAN: Evolutional GAN for Ransomware Evasion’. 2023 IEEE 48th Conference on Local Computer Networks (LCN), 2 October 2023, 1–9. https://doi.org/10.1109/LCN58197.2023.10223320. 

‘Cyber Kill Chain® | Lockheed Martin’. Accessed 19 September 2025. https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html. 

‘Cyber Threats in the EU: Facts and Figures – Consilium’. Accessed 22 September 2025. https://www.consilium.europa.eu/en/policies/top-cyber-threats/. 

‘Deepfake Fraudsters Impersonate FTSE Chief Executives’. Accessed 19 September 2025. https://www.thetimes.com/business-money/technology/article/deepfake-fraudsters-impersonate-ftse-chief-executives-z9vvnz93l. 

Hack The Box. ‘5 Anti-Forensics Techniques to Trick Investigators (+ Examples & Detection Tips)’. Accessed 19 September 2025. https://www.hackthebox.com/blog/anti-forensics-techniques. 

‘Hackers Are Using AI to Dissect Threat Intelligence Reports and “Vibe Code” Malware | IT Pro’. Accessed 19 September 2025. https://www.itpro.com/security/hackers-are-using-ai-to-dissect-threat-intelligence-reports-and-vibe-code-malware. 

Hoover, Amanda. ‘The Clever New Scam Your Bank Can’t Stop’. Business Insider. Accessed 19 September 2025. https://www.businessinsider.com/bank-account-scam-deepfakes-ai-voice-generator-crime-fraud-2025-5. 

Huynh, Nam, and Beiyu Lin. ‘Large Language Models for Code Generation: A Comprehensive Survey of Challenges, Techniques, Evaluation, and Applications’. arXiv:2503.01245. Preprint, arXiv, 2 April 2025. https://doi.org/10.48550/arXiv.2503.01245. 

IBM Security X-Force Threat Intelligence Index 2023. n.d. 

Kolosnjaji, Bojan, Ambra Demontis, Battista Biggio, et al. ‘Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables’. arXiv:1803.04173. Preprint, arXiv, 12 March 2018. https://doi.org/10.48550/arXiv.1803.04173. 

Li, Haoyuan, Hao Jiang, Tao Jin, et al. ‘DATE: Domain Adaptive Product Seeker for E-Commerce’. arXiv:2304.03669. Preprint, arXiv, 7 April 2023. https://doi.org/10.48550/arXiv.2304.03669. 

Mirsky, Yisroel, Ambra Demontis, Jaidip Kotak, et al. ‘The Threat of Offensive AI to Organizations’. Computers & Security 124 (January 2023): 103006. https://doi.org/10.1016/j.cose.2022.103006. 

Piplai, Aritran, Sai Sree Laya Chukkapalli, and Anupam Joshi. ‘NAttack! Adversarial Attacks to Bypass a GAN Based Classifier Trained to Detect Network Intrusion’. 2020 IEEE 6th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS), May 2020, 49–54. https://doi.org/10.1109/BigDataSecurity-HPSC-IDS49724.2020.00020. 

‘Polymorphic AI Malware: A Real-World POC and Detection Walkthrough – CardinalOps’. Accessed 19 September 2025. https://cardinalops.com/blog/polymorphic-ai-malware-detection/. 

Rid, Thomas. ‘The Lies Russia Tells Itself’. Foreign Affairs, 30 September 2024. https://www.foreignaffairs.com/united-states/lies-russia-tells-itself. 

‘Rs 938 Crore Lost to Cybercrooks since Jan | Bengaluru News – Times of India’. Accessed 22 September 2025. https://timesofindia.indiatimes.com/city/bengaluru/rs-938-crore-lost-to-cybercrooks-since-jan/articleshow/122075324.cms. 

Schröer, Saskia Laura, Luca Pajola, Alberto Castagnaro, Giovanni Apruzzese, and Mauro Conti. ‘Exploiting AI for Attacks: On the Interplay between Adversarial AI and Offensive AI’. arXiv:2506.12519. Version 1. Preprint, arXiv, 14 June 2025. https://doi.org/10.48550/arXiv.2506.12519. 

Sewak, Mohit, Sanjay K. Sahay, and Hemant Rathore. ‘ADVERSARIALuscator: An Adversarial-DRL Based Obfuscator and Metamorphic Malware SwarmGenerator’. 2021 International Joint Conference on Neural Networks (IJCNN), 18 July 2021, 1–9. https://doi.org/10.1109/IJCNN52387.2021.9534016. 

Temara, Sheetal. ‘Maximizing Penetration Testing Success with Effective Reconnaissance Techniques Using ChatGPT’. arXiv:2307.06391. Preprint, arXiv, 20 March 2023. https://doi.org/10.48550/arXiv.2307.06391. 

Townsend, Kevin. ‘AI-Generated Malware Found in the Wild’. SecurityWeek, 24 September 2024. https://www.securityweek.com/ai-generated-malware-found-in-the-wild/. 

Yamin, Muhammad Mudassar, Mohib Ullah, Habib Ullah, and Basel Katt. ‘Weaponized AI for Cyber Attacks’. Journal of Information Security and Applications 57 (March 2021): 102722. https://doi.org/10.1016/j.jisa.2020.102722. 

Yang, Kai-Cheng, Danishjeet Singh, and Filippo Menczer. ‘Characteristics and Prevalence of Fake Social Media Profiles with AI-Generated Faces’. Journal of Online Trust and Safety 2, no. 4 (2024). https://doi.org/10.54501/jots.v2i4.197. 

Yu, Jingru, Yi Yu, Xuhong Wang, et al. ‘The Shadow of Fraud: The Emerging Danger of AI-Powered Social Engineering and Its Possible Cure’. arXiv:2407.15912. Preprint, arXiv, 22 July 2024. https://doi.org/10.48550/arXiv.2407.15912. 
