AI over Lunch: Tazin Khan

This week, the AI lunch lands in New York, where I meet Tazin Khan, founder of Cyber Collective, a non-profit organisation that uses education and community engagement to make cybersecurity, data protection and online privacy more accessible. We meet for a late lunch at Ayat, a Palestinian restaurant in Bushwick, Brooklyn. Manhattan always feels impossibly large to me – all steel, glass and vertical ambition – but here the buildings shrink back to human scale. Bushwick wears change on its walls, its gentrification much like AI: full of promise, but never without loss.

We order a plate to share; a mix of grilled meats and rice with a side of sauce that turns out far milder than promised. Khan pulls a face as we flag down the waiter for more. “It’s never spicy enough,” she laughs. I have known her for nearly six years, and she has never been one to hold back. We first crossed paths online, when I came across a viral tweet of hers calling out a cyber media outlet for sexist coverage. At the time, I was an intern at Europol and, half intimidated, half inspired, invited her to speak at the Europol Data Protection Experts Network (EDEN) conference I was helping to organise, bringing a civil society perspective on privacy to a law enforcement audience.

“Cybersecurity should begin at the dinner table.”

What stands out about Khan is her mix of care and realism. Her approach is not to lecture people into caring about cybersecurity, but to equip them to see what is at stake and decide for themselves. Khan’s work is built on that idea of agency: security is not something done to people, but with them. Through Cyber Collective, she has developed a community-based approach to digital safety that treats security as something lived rather than imposed. “Cybersecurity,” she believes, “should begin at the dinner table,” in ordinary conversations where people actually talk, share and learn.

“AI is a tool that can actually make sharing knowledge much easier,” she says. Used well, it can become a learning companion. Khan describes how her team uses it to help people unpack what often feels inaccessible or opaque. For instance, “most of us just scroll and click ‘accept the terms and conditions’,” she says, “but if you copy and paste those same terms and conditions into a chatbot, and simply ask what you are agreeing to, it changes everything.” In her workshops, this exercise always sparks discussion. People begin to see how AI can act as a translator between legal jargon and everyday understanding, revealing not just what they consent to but what that consent means. “It’s the first time many people realise they can question technology instead of just using it,” she adds. For Khan, that moment of realisation – when a participant looks up and says “I didn’t know I could ask that” – is what digital empowerment truly looks like.

“AI doesn’t fix what’s broken. It magnifies it.”

“AI doesn’t fix what’s broken,” she says, picking at the pile of meat in front of us. “It magnifies it.” For Khan, the problem was never just access to technology, but how people are taught to use and trust it. If individuals already struggle to navigate privacy settings or data-sharing agreements, the rise of generative tools has only made things more complicated. For her, digital literacy means more than learning to use a new tool: it is about understanding why we use it, what information we hand over, and what trade-offs we make along the way. “You can use AI to make learning easier,” she tells me, “but you also have to teach people what it means to trust a tool that learns from them.” Her goal is not to push people to either adopt or avoid AI, but rather to help them make informed choices about how and when to use it. “What matters is that you understand what you’re saying ‘yes’ to when using such a tool,” she says while sipping on her hibiscus margarita, a sweet-sharp homemade drink.

Khan sees that balance between empowerment and dependence as the heart of the challenge ahead. She tells me that AI has transformed her work, especially for a small non-profit with limited resources. At Cyber Collective, it now supports almost every part of their operations: drafting learning materials, supporting the training of community leaders, and translating resources into multiple languages. “Without it, there’s so much I wouldn’t be able to do at the speed we need,” she says. “Budgets are tight, expectations are high, and AI-powered tools have become the bridge that keeps everything moving.” AI gives her the reach and speed of organisations with far greater resources, yet it also sharpens her awareness of new dependencies. “It finally puts me at the same level as people with teams and assistants,” she says, “but it also makes me think about what happens if it all goes dark, if the tools disappear.” The advantage comes with its own fragility.

“People expect the ones most harmed by systems to also be the ones to dismantle them. I can’t afford that purity. If I can use the tool to benefit my community, I will, even if it’s imperfect.”

That duality of dependence and resistance defines her stance. She rejects both the techno-utopian optimism and the moral panic that shape most conversations about AI. “Many things can exist at once,” she tells me. “AI can be extractive and empowering. What matters is who gets to decide how it’s used.” When I ask about criticism from those who see AI as irredeemably harmful, especially to the vulnerable communities she works with, she shrugs. “People expect the ones most harmed by systems to also be the ones to dismantle them. I can’t afford that purity. If I can use the tool to benefit my community, I will, even if it’s imperfect.” It is not defiance but persistence: survival turned into a strategy.

Our plate sits mostly empty, the last pieces of meat gone cold. It’s already time to go; who knows where we will next cross paths. Through the window, Bushwick shifts under the late afternoon light, a neighbourhood still finding its balance. Like AI, it holds the possibility of positive change, if we choose to shape it well.

The AI over Lunch interview series is part of Virtual Routes’ AI-Cyber Research and Policy Hub. If you would like to sponsor this series, please reach out to hu*@vi************.org.

Have someone in mind we should interview? We’re happy to hear your suggestions!

Author

Apolline Rolland

Policy Researcher in Cyber and Emerging Technologies

