On 13 June, ECCRI CIC Co-Director James Shires spoke at the AI Summit in London. His remarks, on a panel about improving enterprise cybersecurity, focused on the role of AI in transforming the cybersecurity workforce.
AI is increasingly integrated into threat intelligence and cyber defense. From large language models (LLMs) that summarize and assess malicious code, to classification and pattern-recognition algorithms for anomaly detection and associating tactics, techniques, and procedures (TTPs), machine learning and AI technologies have been a steadily growing part of the cybersecurity toolkit for decades. But greater integration presents challenges for the relationship between cybersecurity professionals and their tools. Analysts must (re)learn best practices for handling AI-generated alerts, while also coping with increased stress and potentially misguided management pressure to adopt the latest security technologies at the expense of doing the simple things right.
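To make the anomaly-detection point concrete, here is a minimal sketch assuming scikit-learn and synthetic login-event features; the feature choices, values, and contamination rate are illustrative, not drawn from any particular detection pipeline.

```python
# Minimal sketch of ML-based anomaly detection on login telemetry.
# Assumes scikit-learn is installed; feature names and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" login events: [hour_of_day, data_transferred_MB, failed_attempts]
normal = np.column_stack([
    rng.normal(13, 3, 1000),   # mostly business-hours activity
    rng.normal(5, 2, 1000),    # modest data transfers
    rng.poisson(0.2, 1000),    # rare failed logins
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A suspicious event: 3 a.m. login, large transfer, repeated failures.
suspicious = np.array([[3, 250, 8]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means inlier
```

In practice such a model would feed alerts into a triage queue, which is exactly where analysts must relearn how much trust to place in machine-generated flags.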
From an attacker’s perspective, the current crop of GPTs offers the most utility for social engineering, especially voice, video, and text spoofing. This in turn enables a broader expansion of richly backfilled pseudonymous identities for phishing and other manipulation. In contrast, the brief trend in 2023 for malicious GPTs - WolfGPT, WormGPT, and FraudGPT, to name a few - seems to have died away in favour of a more gradual integration of LLMs into adversary activity. The primary use case, at the moment, lies with less-skilled or unskilled malicious actors looking to generate exploits quickly, although LLMs are just as unreliable for those without domain-specific knowledge in this field as they are in others.
Longer term, the automated development of malicious machine code - just like the automated discovery of viruses or other biological code - poses significant threats, but these have not yet materialized in the public domain. Recent research indicates that "teams" of AI agents may perform reasonably well at vulnerability detection and exploitation, with clear malicious applications as well as efficiency benefits for penetration testing.
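A schematic sketch of that "team of agents" pattern, framed for authorized penetration testing, might look like the following. It assumes the OpenAI Python client (openai>=1.0); the model name, prompts, and target file are placeholders, and any real use would require explicit permission from the system owner.

```python
# Schematic "team of agents" sketch for authorized security review:
# one agent proposes candidate weaknesses in a code snippet, a second triages them.
# Assumes openai>=1.0 and an OPENAI_API_KEY in the environment; all names are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(role_prompt: str, content: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": content},
        ],
    )
    return resp.choices[0].message.content

snippet = open("target_module.py").read()  # code within an agreed testing scope

findings = ask(
    "You are a code reviewer. List potential input-validation weaknesses "
    "in this code, one per line, without writing exploit code.",
    snippet,
)
triage = ask(
    "You are a triage analyst. Rank these findings by likely severity and "
    "flag probable false positives.",
    findings,
)
print(triage)
```

The same division of labour - propose, then verify - is what gives agent teams their efficiency benefit for defenders and, by symmetry, their appeal to attackers.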
More widely, the integration of AI technologies into everyday organizational activities poses a broader range of security risks. Nearly all datasets used to train AI algorithms are vulnerable to poisoning, while public-facing models face privacy and security risks from malicious prompt injection and reconstruction of training data via inference attacks. The US and UK governments have released excellent guidance to help all organizations prepare for and defend against these risks.
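As a toy illustration of data poisoning, the sketch below flips a small fraction of training labels before fitting a classifier and compares test accuracy against a clean baseline. It assumes scikit-learn; the synthetic dataset and 5% flip rate are illustrative, and real-world poisoning is typically targeted and far more damaging than random label noise.

```python
# Minimal label-flipping poisoning sketch on a toy classifier (scikit-learn assumed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison the training set: flip the labels of 5% of the samples.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.05 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean test accuracy:   ", clean.score(X_te, y_te))
print("poisoned test accuracy:", poisoned.score(X_te, y_te))
```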
In the cat-and-mouse world of cybersecurity, AI offers an edge for both attackers and defenders. To ensure defenders keep pace with new threats, AI should be incorporated into cybersecurity education and professional training from the very start. Practical AI-focused cybersecurity education, such as that provided by the Google.org European Cybersecurity Seminars program, will transform the future cybersecurity workforce not just to be AI-ready, but to use AI technologies seamlessly and effectively alongside the current cybersecurity toolbox.