Welcome to AI over Lunch, our interview series exploring how leaders across sectors are grappling with the opportunities and risks of artificial intelligence. For this conversation, we sat down with Raegan MacDonald, a leading digital rights activist, to discuss hype vs. reality in today’s AI debates.
I arrive early at Lune Siamoise, a Thai restaurant in Ixelles, Brussels. The room glows with soft light and the sharp scent of lime leaves and curry. As I look for a quiet corner for this interview, a woman at a nearby table is holding forth at length about her gym membership, circling around a point that never quite arrives. “Peine perdue,” I think — wasted effort — retreating to a table at the back. Ironically, this feels much like today’s debates around AI: plenty of noise, little clarity.
Raegan MacDonald joins and takes the seat opposite me. In the European Union (EU) tech policy bubble, she stands out by asking the essential question: what, and most importantly who, does technology actually serve? I first met Raegan through the Women at Privacy programme, where I had asked for her as my mentor. Originally from Canada but long based in Brussels, she has become a leading digital rights activist in the EU, with stints at various non-profits: European Digital Rights (EDRi), Access Now and Mozilla. She brings realism to debates that often swing from utopian promise to dystopian panic. Now at Aspiration, she focuses on amplifying voices too often excluded from digital policymaking, connecting them to the tools to weigh in on crucial debates through her Policy Leadership Initiative.

We start with matching orders: a sparkling iced tea and Lune Siamoise’s signature massaman kai, a flavourful curry with Muslim roots and Persian influences, made with potatoes, chicken, cashew nuts and tamarind. When the plates arrive, MacDonald grins: “I hate to say it like that, but it really does look authentic.” She is right: it is a comforting dish for a bold conversation. As we begin to eat, MacDonald makes clear she has little patience for Brussels’ AI debates. “We need to stop confusing technological progress with societal progress,” she says. “AI is certainly new. But does it bring something better? I’ve yet to see the evidence.” Having worked on digital policy for more than a decade, she has seen earlier promises collapse, from the utopian rhetoric of the early internet to the unfulfilled pledges of social media. “Yes, the world got smaller,” she notes. “But we’re still living with disinformation, manipulation, people being harmed, and none of it has been properly addressed.”
What worries her most is not the novelty of AI, but the belief systems it carries with it. “People want to believe AI will save us or kill us, like in the sci-fi films we grew up with. But technology isn’t neutral: it replicates existing inequalities, and often entrenches them,” she says. That, she argues, has been the pattern of every wave of technological innovation: promises of liberation, followed by harms that are left unresolved. Today’s AI industry carries the same saviour complex. “There’s almost a cult-like belief that technology will rescue us, but more often than not, it ends up masking the pursuit of power, controlling markets, information, and ultimately people.”
“We’re at the brink of climate collapse, and yet AI is being sold as if it will save us, while accelerating the very extractive systems driving the crisis.”
That faith in AI-as-solution, MacDonald adds, isn’t just cultural: it drives the extraordinary money flows into the sector. Policymakers and corporate leaders alike are gripped by the fear of being left behind, pouring resources into AI projects with little scrutiny of what they deliver. I suggest it sometimes feels like a never-ending Tupperware convention: everyone buying in, afraid to miss out. She bursts out laughing, but the humour fades when MacDonald reminds me that AI, like Tupperware, comes with extensive hidden costs: the plastic, the waste, the environmental footprint. Training large AI models, she points out, requires staggering amounts of energy and water, not to mention the rare earth minerals and hidden labour. “We’re at the brink of climate collapse, and yet AI is being sold as if it will save us, while accelerating the very extractive systems driving the crisis.”
In Brussels, the rush to embrace AI is reshaping policymaking itself. MacDonald points to the EU’s recent drive for “simplification,” which often actually means deregulation. “We spent years building protections for consumers, workers, and digital rights,” she says. “Now, in the name of competitiveness, many of those hard-won safeguards are being weakened or under-enforced. But competitiveness isn’t a strategy. Digital sovereignty isn’t an end. What matters is the impact on rights, and clarity is needed on the vision guiding this wave of digital expansion.”
The EU’s flagship AI Act, she argues, illustrates the problem. Unlike the GDPR, which was rights-based, the AI Act relies on a risk-based framework that leaves much to companies’ self-assessment. That distinction matters. “Regulating on the basis of rights creates a more future-proof formula, whereas dominant tech companies will always have an incentive to downplay the risks of their technologies, especially as they like to ‘move fast and break things’,” she says. “The AI Act misses the mark on setting a strong standard for human-centric AI. And in practice, even with more robust regulation, the people most vulnerable end up least protected.”
Her programme takes on another thorny question: ‘what does openness really mean, and who does it truly benefit?’ For some, it’s about open models and datasets; for others, it’s about open processes, transparent governance, or public scrutiny. Each definition carries different risks and benefits. “Openness is only valuable if it redistributes power,” MacDonald insists. “Otherwise it’s just rhetoric.” She pushes back on the idea that closed systems are safer. “Closed systems don’t mean fewer vulnerabilities; they just mean we don’t see them,” she says. Drawing on her time at Mozilla, she adds: “Openness takes more work, but it makes us more secure.” For her, openness is not only a technical property, but a social and political choice, and one that determines whether AI strengthens accountability or deepens opacity. “If openness is framed only as technical, we miss the social, equity and security dimensions,” she says. “But if we get it right, openness can strengthen accountability without undermining safety.”

Part of MacDonald’s motivation at Aspiration is to rebalance who gets a seat in these debates. Too often, she says, policymaking on AI is shaped by industry lobbyists and government officials, while the communities most affected are absent from the table. “If we don’t change who is in the room, we’ll keep reproducing the same harms,” she argues. That conviction led her to launch a dedicated leadership programme on AI Openness & Equity Policy. Why AI? Because, she says, the stakes are unusually high. “It concentrates power in new ways, and without a more resilient, united civil society to intervene, that power goes unchecked. Our programme is one small contribution among many parallel efforts to build bridges across digital rights, social and economic justice fields, and foster collective strategising.”
We finish with a shared mango sticky rice, the portion so modest we laugh at the sight of it. For me, that’s a relief: sweet rice usually recalls the gloopy riz au lait of my primary school canteen. Here, though, the flavour more than makes up for the size.
As the plate is cleared, I ask where she thinks we’ll be in five years. She has two hopes: a stronger, more diverse field linking digital rights with climate, labour and other movements; and a reality check on AI, with investment guided by evidence, public interest, and rights.
The AI over Lunch interview series is part of Virtual Routes’ AI-Cyber Research and Policy Hub. If you would like to sponsor this series, please reach out to hu*@vi************.org.
Have someone in mind we should interview? We’re happy to hear your suggestions!