AI over Lunch: Daniel Drewer

It is just after noon at Europol headquarters, and the canteen is already full. Little has changed since I was an intern here five years ago: the same mix of everyday office life and the seriousness of the work being done. I pick up a plat du jour, a chicken curry that looks better than one might expect from institutional food. Today’s guest arrives with a sandwich and a small salad. Very efficient; it suits him.

Daniel Drewer is Europol’s Data Protection Officer. He was also my supervisor back then, which turns this lunch into a neat loop in time. He summarises his role simply: “The Data Protection Officer is the assurance provider for compliance in all matters of personal-data processing.” It is a role, he adds, “that has quietly become far more complex with the arrival of AI.”

As we sit down, he traces the path that led him here, long before AI was a central policy topic. “I started as a lawyer in the area of confidentiality,” he says. Drewer served as secretary to the Security Committee, handling security clearances and setting up what became the Confidentiality Desk. “It was not about personal data at that time,” he recalls. “It was about technical and organisational measures for classified law-enforcement information.” With intelligence arriving from twenty-seven Member States, each with its own rules around secrecy, building common standards required as much coordination as technical expertise. “We had to build a joint understanding of how classified police information should be protected. The culture change was significant.”

Confidentiality and data protection were later merged in the Information Integrity Unit because, as he puts it, “actionable advice and assurance belonged together.” Legal changes later required an independent Data Protection Officer, but cooperation with the Security Committee continued, since “both rely on technical and organisational measures for the handling of police data, and both protect sensitive information of citizens.”

It was during this period that Drewer formed the view he still emphasises today: “You cannot do data protection without technical expertise.” His team reflects this reality. Of the nine people working with him, two focus almost entirely on technical matters. With the growth of AI, that balance has become essential. “The responsibilities of the DPO have expanded significantly with AI,” he notes; some issues take weeks just to understand. “Everyone expects the DPO to handle it all, but AI requires multidisciplinary work.”

Years of harmonising standards created a stable framework for data protection. Yet AI arrives differently. “What we achieved over years, you do not achieve within six months,” he points out. Inside institutions, the pace of development is challenging long-established processes. That pace is also exposing gaps. “There is an ‘assurance gap’ emerging in many places,” he explains. “AI Officers often focus on development and strategy, not on safeguards or assurance. If AI systems process personal data, the DPO must provide that assurance, but that requires expertise, time and resources that many institutions do not have.” He glances at his sandwich. “AI implementation is often seen as ‘just another technical tool’, but the resources needed to implement it safely are rarely considered.”

“AI has expanded what the DPO must look at, but it has not removed any of the other tasks. AI is important, but it cannot eclipse everything else.”

The challenge, he stresses, is far broader than Europol. AI is advancing faster than organisations can train and recruit. “Data Protection Officers already operated with limited resources and difficulty obtaining the required technical knowledge,” he says. Demand for expertise is growing faster than institutions can absorb it. A survey from the European Union Agency for Law Enforcement Training (CEPOL) found that a vast majority of interviewed police officers across the EU felt they needed training on data protection and AI, yet such training remains scarce. “The technical development is fast,” he warns, “and the knowledge about the design of effective safeguards does not spread automatically.”

Meanwhile, none of the existing tasks have disappeared. “AI has expanded what the DPO must look at, but it has not removed any of the other tasks,” he adds. Within his team, the pressure is clear: “AI is important, but it cannot eclipse everything else.” 

Institutional roles add further uncertainty. As AI systems become embedded in day-to-day work, responsibility for oversight remains unclear. “The AI Act does not specify anything about the role of the DPO,” he notes, “and clear guidance on how DPOs should be involved is still pending.” Agencies are responding in different ways. “Some ask their DPOs to take the lead on AI governance; others rely on newly appointed AI Officers or technical teams,” he explains. “There is far less clarity here than in other governance functions.”

The result is a patchwork of approaches, shaped more by internal culture than by any shared framework. What concerns him most is not this ambiguity of roles, but the risk that essential tasks will fall through the cracks. “AI processing involving personal data still needs safeguards and assurance,” he argues. “That does not change because the technology is new.” For him, the principle is simple: if an AI tool touches personal data, the DPO must be involved. The difficulty is ensuring that organisations have the structure and expertise to make that involvement meaningful.

When the conversation turns to why AI matters so profoundly in law enforcement, Drewer answers without hesitation. The technology is not abstract here; its consequences are immediate. AI brings clear opportunities: faster document handling, better sense-checking and a way of navigating vast quantities of information without sinking staff time into routine tasks. “Used well, AI can make investigations more efficient,” he says. “It can support analytical work that would otherwise take days.”

But the risks, he notes, are equally significant. Bias, explainability and data provenance take on a different weight in policing contexts. “You need high-quality, representative data to avoid discriminatory outcomes,” he cautions, “but data-protection law still requires you to process only what is necessary. Balancing the two is not straightforward.” Opaque or probabilistic systems also raise the risk of function creep and over-reliance, where automated suggestions slowly harden into de facto decisions. “Human oversight is essential,” he says. “AI can support judgment, but it cannot replace it.”

“Our role as DPOs is to make sure that when these tools are used, they comply with the law and respect fundamental rights.”

Criminal activity is shifting too. AI is already used to produce, at scale, fake websites, deepfakes and documents that are almost indistinguishable from the real thing. “It becomes a race,” he observes, “to make better use of AI than the criminals do.”

For Drewer, this is where the DPO’s role is central. “Our role as DPOs is to make sure that when these tools are used, they comply with the law and respect fundamental rights.” To him, this is precisely where data protection becomes more than a compliance exercise: it is the guardrail that ensures technological progress does not erode the rights at the heart of law enforcement. A careful assurance process, he says, is what allows agencies to use powerful tools without crossing legal or ethical boundaries. “The role of the DPO is to provide that assurance.”

As lunch ends, Drewer looks ahead with characteristic calm. He hopes that in five years AI will simply be part of ordinary institutional life, treated like every other technology. “We should not have hype for every new development,” he concludes. “If the focus stays on protecting citizens, you can manage whatever comes next.”

The AI over Lunch interview series is part of Virtual Routes’ AI-Cyber Research and Policy Hub. If you would like to sponsor this series, please reach out to hu*@vi************.org.

Have someone in mind we should interview? We’re happy to hear your suggestions!

Author

Apolline Rolland

Policy Researcher in Cyber and Emerging Technologies

