On December 10th, Virtual Routes Co-Director James Shires joined a panel of experts at the AI Summit at Black Hat Europe on "Navigating Standards, Regulations, and Risk Management in AI for Cybersecurity." His remarks focused on the evolving relationship between artificial intelligence and cybersecurity, and implications for governance, regulation, and education.
Dr Shires began by noting that AI is a broad and rapidly evolving topic, and as it develops, its interactions with cybersecurity become increasingly complex at both the defensive and offensive levels. AI-enabled cyberattacks are among the highest-risk use cases cited by those concerned about the negative impact of AI on society. At the same time, AI itself is vulnerable to cybersecurity threats, underlining the need to address weaknesses and vulnerabilities in AI systems and in cybersecurity as a combined effort.
Dr Shires went on to note that local communities, NGOs and students face a significant power imbalance when integrating AI into their activities. These communities often lack the resources needed to seamlessly integrate AI into organizational tasks - including cybersecurity - and so rely heavily on external providers for their security needs. Companies developing and using AI for cybersecurity should explore ways to bolster popular cybersecurity initiatives, such as Capture the Flag (CTF) challenges, with accessible AI tools, as part of a broader pivot towards AI-focused cybersecurity education.
While fostering accessible AI-enhanced cybersecurity is critical for civil society, it takes place within a rapidly shifting global regulatory context. AI regulation is competitive at multiple levels: between regions, particularly the United States, the European Union, and China; between countries (for example, national AI regulatory agencies); and between international standardization bodies. It is not clear to smaller or under-resourced organizations which of the hundreds of emerging AI standards are most important, or even where there is overlap or compatibility between different standards. This challenge is especially acute for those outside existing centres of global regulation - for example, in the Middle East and Africa - where new standards and regulatory regimes are emerging.
Dr Shires concluded by noting that, to adopt AI successfully, organizations and communities at every level must ask themselves key questions centred less on technical details and more on understanding the impact, value, and risk involved in adoption. For example, organization leaders (not just IT staff) should ask themselves: do you know what the technologies you use are designed to do, how they relate to other technologies and people in your organization, and what you would do if they go wrong?
Ultimately, successful integration of AI into cyber defence requires investing in students and empowering them to work with AI through a combination of principle-based and practical education. Principle-based, because the foundational principles guiding AI regulation and integration are less likely to change as rapidly as individual technologies, and practical, because hands-on experience is the only way to truly learn and understand technologies in action. Through the Google.org Cybersecurity Seminar program, Virtual Routes contributes to the development of the skills and competencies needed for the cybersecurity industry of the future, including a focus on AI.