AI over Lunch: Colin Shea-Blymyer

This week’s AI over Lunch comes from Washington DC, where I meet Colin Shea-Blymyer, researcher at the Center for Security and Emerging Technology (CSET), to talk about AI beyond the European bubble. We meet at the Dubliner, an Irish pub and Capitol Hill institution. Having just left Europe, I couldn’t stray too far from the continent. Our orders go in with an American efficiency that would make the Belgian administration blush: fish and chips for me; Irish beef stew for him.

As we start eating, Shea-Blymyer tells me about his background. He has what feels like a rare double fluency: trained as a computer scientist, he crossed into policy research, determined to turn technical insight into practical, no-nonsense recommendations.

At Georgetown, where he teaches AI governance and national policy, that mix of worlds comes through clearly. “Most of my students come from policy rather than technical backgrounds,” he explains. “I want them to have historical context. For decades, AI was not learning-based, it relied on expert systems, databases of facts and rules arranged into useful patterns. Neural networks actually predate modern computers, but they fell out of favour for a mix of technical limits, funding priorities and academic politics.” He adds, “Understanding that history helps us avoid repeating the so-called ‘AI winter’.” He tells his students that AI governance cannot exist in a vacuum: technologists need to understand the implications of their work, while policymakers must engage with technical concepts to form sound judgments. 

That instinct to connect ideas with practice shapes how he approaches research, especially when theory meets the messy realities of implementation. Before joining CSET, Shea-Blymyer spent time at MITRE, supporting the National Institute of Standards and Technology (NIST) and its National Cybersecurity Center of Excellence (NCCoE) on adversarial machine-learning research, as the agency was tasked with developing standards for AI. “I spent months reading documents from around the world about what trustworthy AI should look like,” he recalls. “And as a machine-learning student, I remember thinking, ‘what does that even mean?’” It was one of his first close looks at how ambitious ideas about AI ethics and trustworthiness can buckle under technical constraints.

He noticed how easily good intentions collapse when they meet technical reality. “I began thinking about two linked problems,” he continues. “First, the technical question: what does it mean to make AI systems more trustworthy? And second, the policy question: how do you write policy that is actually actionable by technical teams?”

“A lot of people still think standards and regulation stifle innovation but history shows the opposite. Standards are the rules of the road; without them, technologies cannot interoperate or even exist at scale.”

I ask how the American approach to tech policy differs from the European one. “A lot of people still think standards and regulation stifle innovation,” he says, “but history shows the opposite. Standards are the rules of the road; without them, technologies cannot interoperate or even exist at scale.” He contrasts DC’s instinct for flexibility with Brussels’ preference for detailed, often pre-emptive legislation. “In the U.S., the focus is more on encouraging innovation first, then using standards and certifications to fill the gaps.” The approach, he notes, allows experimentation but also leaves much to voluntary, non-binding initiatives from industry. Between bites of stew, he concedes that this flexibility is both a blessing and a curse: it reduces bureaucracy, but makes progress uneven. “Regulation is not the enemy of innovation,” he insists. “Often, it is what makes innovation possible.” He points to the internet’s own history. “We forget that so much of what we take for granted today, the protocols that let computers talk to each other, were the result of standard-setting, not market chaos.”

As our plates begin to empty, the conversation turns to another side of Shea-Blymyer’s work: cybersecurity. In his view, it is still too often kept apart from mainstream AI policy debates. “People talk about AI safety and AI security as if they were the same thing,” he notes. “They are not. AI safety is about what the system does; AI security is about what can be done to it.” He draws a simple distinction: AI for cyber versus cyber for AI. The first uses AI to strengthen defences, automate detection or patch vulnerabilities. The second poses a newer, trickier question: how secure are the models themselves? “It is surprisingly difficult to patch an AI system,” he explains. “You can fix a line of code; you cannot fix a set of model weights without changing everything else it has learned.” Does AI create new threats? “Not exactly,” he concedes, “AI does not invent new threats so much as magnify existing ones, giving attackers greater scale and accessibility.” Still, he cautions, AI systems bring their own technical vulnerabilities. “The systems themselves become targets,” he adds, “and defending them will require a very different kind of security thinking.”

“AI models are now critical infrastructure in their own right. That means we have to think about protecting them the same way we think about protecting networks, supply chains, or power grids.”

Tackling that challenge is central to his work in the CyberAI Project at CSET, which looks at how AI and cybersecurity intersect. “AI models are now critical infrastructure in their own right,” Shea-Blymyer says. “That means we have to think about protecting them the same way we think about protecting networks, supply chains, or power grids.” One recent line of research within the project examines the security of open-source models and the difficulty of protecting model weights, “the real intellectual property” of AI developers, as he puts it. Open access, he notes, accelerates innovation but raises questions about intellectual property and competitive advantage: “If anyone can retrain or fine-tune a model’s weights, they can also exfiltrate data or replicate capabilities they did not build.”

He argues that resilience is a blind spot in current debates. “AI policy often talks about fairness and bias, which are both important,” he says, “but very few people talk about resilience: what happens when a model fails, or is attacked, or simply behaves unpredictably.” In his view, AI governance depends as much on technical robustness as on ethics. “Security has to be part of safety,” he adds. “They are two sides of the same coin.” 

The next frontier of AI governance, he argues, will be ‘usable security’, giving operators and policymakers tools to understand when a model might be untrustworthy or fail under stress. “We need ways for users to know what their systems can and cannot handle without needing a PhD to interpret it.” 

“Security has to be part of safety. They are two sides of the same coin.” 

As the conversation winds down, we talk about the pace of technological change and how governments and researchers alike are struggling to keep up. “Policy will always lag behind technology,” he admits, “but that does not mean it cannot shape it. The goal is not to predict the next breakthrough, but to make sure the systems we build are trustworthy, secure and aligned with human values, whatever shape they take.”

As we finish our meal, I look down at the last few fries: not quite the twice-fried Belgian kind, but, like most things in Washington, they get the job done. Outside the Dubliner, the heavy DC September air feels far from Brussels. The contrast fits: Europe writes rules; America tests limits. Somewhere between the two lies the balance AI still needs: structure without stagnation, speed without recklessness.

The AI over Lunch interview series is part of Virtual Routes’ AI-Cyber Research and Policy Hub. If you would like to sponsor this series, please reach out to hu*@vi************.org.

Have someone in mind we should interview? We’re happy to hear your suggestions!

Author

Apolline Rolland

Policy Researcher in Cyber and Emerging Technologies

