Venue: Hall D (Level -2)
Friday, June 26
 

9:00am CEST

Opening Remarks
Friday June 26, 2026 9:00am - 9:15am CEST
Welcome to the OWASP Global AppSec EU 2026 conference! We are excited you are with us, not only to attend this amazing event, but also to celebrate our 25th anniversary!

Don't miss the opening remarks, where we welcome you and share a few key details to set you up for a successful time with us!
Hall D (Level -2)
  Keynote

9:15am CEST

Keynote: We Live in the Future: The Death and Rebirth of Application Security
Friday June 26, 2026 9:15am - 10:00am CEST

Speakers

Gadi Evron

Founder and CEO, Knostic
Gadi Evron is Founder and CEO at Knostic, an AI agent security company, CISO-in-Residence for AI at CSA, and chairs the [un]prompted conference. Previously, he founded Cymmetria (acquired), was the Israeli National Digital Authority CISO, founded the Israeli CERT, and headed PwC's...
Hall D (Level -2)
  Keynote

10:30am CEST

From ASVS to APVS: What Changes When You Treat Privacy as a System Property?
Friday June 26, 2026 10:30am - 11:15am CEST
Privacy is increasingly expected to be “built in by design”, yet most privacy guidance remains legal, abstract, or disconnected from how systems are actually designed and reviewed. As a result, privacy is still treated as a compliance exercise rather than an engineering discipline.

In this talk, we share early lessons from the OWASP Privacy Project and our work on the Application Privacy Verification Standard (APVS). Drawing on familiar AppSec concepts such as ASVS, threat modeling, and weakness classification, we explore what changes when privacy is treated as a system property rather than a checkbox.

We discuss where traditional security controls fall short, how privacy risks can exist without attackers or breaches, and how we are translating high-level privacy principles into actionable guidance for architects and developers. This is not a finished standard, but a candid look at what works, what doesn’t, and where practitioner feedback is essential as the project evolves.
Speakers

Matthew Coles

Product Security Architect/Technologist

Matthew Coles is a Product Security Architect and Technologist with 20+ years of experience working with business leaders and developers to secure hardware and software systems and processes. He is a technical contributor to community standard initiatives such as OpenSSF and OWASP, a...

Kim Wuyts

Manager Cyber & Privacy, PwC Belgium

Dr. Kim Wuyts is a leading privacy engineer with over 15 years of experience in security and privacy. Before joining PwC Belgium as Manager Cyber & Privacy, Kim was a senior researcher at KU Leuven where she led the development and extension of LINDDUN, a popular privacy threat modeling...

Avi Douglen

Software Security Consultant, Bounce Security
Avi Douglen is the founder and CEO at Bounce Security, a boutique consultancy specializing in software security, where he spends a lot of time with development teams of all sizes. He helps them integrate security methodologies and products into their development processes, and often...
Hall D (Level -2)

11:30am CEST

Phishing for Passkeys - An Analysis of WebAuthn and CTAP
Friday June 26, 2026 11:30am - 12:15pm CEST
WebAuthn was supposed to replace passwords on the web: uniform, secure, manageable authentication for everyone! One of its unique selling points was supposed to be the impossibility of phishing attacks. When Passkeys were introduced, however, some of WebAuthn's security principles were relaxed in order to gain usability improvements and thus reach more widespread adoption.

This presentation discusses the security of Passkeys against phishing attacks. It explains the possibilities for an attacker to gain access to accounts secured with Passkeys using spear phishing, and what conditions must be met for this to happen. It also practically demonstrates such an attack and discusses countermeasures.

Participants will learn which WebAuthn security principles still apply to Passkeys and which do not. They will learn why Passkeys are no longer completely phishing-proof and how they can evaluate this trade-off for their own use of Passkeys.
Speakers

Michael Kuckuk

Fullstack Developer, inovex

As a fullstack software developer, Michael's main expertise lies in simple software development. But since he is well aware that the happy path is the easy part, he has always had an interest in security and has always been very security- and privacy-aware in his work. He enjoys developing...
Hall D (Level -2)

1:15pm CEST

AI-Generated Code vs Human Code: Who Really Writes More Vulnerabilities?
Friday June 26, 2026 1:15pm - 2:00pm CEST
When AI coding tools entered mainstream development, the application security community reacted fast and loudly. Many warned that AI would dramatically increase vulnerabilities. The most common argument was simple and intuitive. AI models were trained on vast amounts of real-world code, including insecure and vulnerable code. Garbage in, garbage out. If AI learned from vulnerable code, it would inevitably reproduce those vulnerabilities at scale.

This claim quickly became accepted wisdom, despite the fact that almost no one could actually prove it.

This session presents a data-driven examination of that assumption. By correlating reported security vulnerabilities with automated line-level code attribution, we were able to determine whether a vulnerability originated in AI-generated code or human-written code. This allowed us to move the discussion from fear and intuition to measurable evidence.

The results are more nuanced and more interesting than the prevailing narrative suggests. In some scenarios, AI-generated code showed higher vulnerability density. In others, it performed comparably to, or even better than, human-written code. The differences are not accidental. They correlate strongly with the model used, the tooling, and how developers interact with AI, rather than AI usage alone.

This talk challenges the notion that AI coding is inherently insecure. It replaces the garbage-in, garbage-out argument with concrete data, identifies where the real risks actually emerge, and explains what this means for modern AppSec strategy. Attendees will leave with evidence they can use to recalibrate policies, controls, and conversations around AI-assisted development, without slowing teams down or relying on assumptions.
Speakers

Eitan Worcel

CEO & Co-Founder, Mobb

Eitan Worcel is the co-founder and CEO of Mobb. He has close to 20 years of experience in application security, spanning hands-on software development, product leadership, and executive roles. Throughout his career, Eitan has worked closely with engineering and security teams to understand...
Hall D (Level -2)

2:15pm CEST

Teaching AI Agents Like Guide Dogs: A Progressive Trust Framework
Friday June 26, 2026 2:15pm - 3:00pm CEST
Your AI agent has access to your database, your APIs, and your users' data. But would you give a new hire admin credentials on day one? We do this with AI agents constantly - deploying them with full system access before they've proven they won't hallucinate a DROP TABLE or leak sensitive data to a prompt injection attack.

Guide dog training programs solved this problem decades ago. They take untested puppies and transform them into autonomous agents trusted to make life-or-death decisions - through a systematic process of graduated trust. A guide dog doesn't get to navigate traffic until it's mastered basic commands. It doesn't work unsupervised until it's proven reliable across thousands of scenarios. And critically, it's trained in "intelligent disobedience" - knowing when to refuse a direct command because following it would cause harm.

In this talk, I'll introduce the Progressive Trust Framework - a practical approach to AI agent deployment inspired by 90+ years of service animal training. You'll learn how to implement graduated permission systems where agents earn expanded access through demonstrated reliability. We'll explore the "3 D's" testing methodology (Distance, Duration, Distraction) for validating agent behaviour before promotion. And we'll tackle the hardest problem: training agents that refuse harmful requests without becoming unhelpfully paranoid.

Whether you're building autonomous coding assistants, customer service bots, or internal automation tools, you'll leave with concrete patterns for deploying AI agents that earn trust instead of demanding it. Because the question isn't whether your AI agent will make mistakes - it's whether you've built the guardrails to catch them before they hit production.
Speakers

Bodhisattva Das

Security Engineer, RUDRA Cybersecurity

Bodhisattva Das is a Security Engineer at Rudra Cybersecurity, focused on securing non-human identities, AI agents, and automated workloads across cloud environments. He specialises in open-source threat detection using Wazuh, and builds practical solutions for identity governance...
Hall D (Level -2)

3:30pm CEST

Why IAM Remains a Challenge and What We Can Do About It
Friday June 26, 2026 3:30pm - 4:15pm CEST
Everyone expects Identity & Access Management to be a "set it and forget it" problem. But the reality looks quite different: the same challenges keep resurfacing, they are technically demanding, time-consuming, and frequently create friction between teams, ultimately resulting in significant costs. And the rise of AI agents makes it even worse.

Over the years, I explored these recurring issues, which led to a multi-part blog series (https://www.innoq.com/en/blog/2025/07/whats-wrong-with-the-current-owasp-microservice-security-cheat-sheet/) published in 2025, initially aimed at updating the OWASP Microservice Security Cheat Sheet. My goal was to show how well-known IAM building blocks can be combined into pragmatic, coherent, and operationally realistic solutions. That work eventually grew beyond its original scope and is becoming multiple new OWASP Cheat Sheets plus an entirely new architectural-level cheat sheet format.

In this talk, I'll share the essence of the patterns and strategies I identified and documented, show how to avoid the usual traps, and explain how to reduce IAM complexity in distributed systems to create the space to focus on what we're actually building - the product.
Speakers

Dimitrij Drus

Senior Consultant, INNOQ

I work as a Senior Consultant at INNOQ Germany GmbH, focusing on security architecture and the design of secure distributed systems. With a strong passion for security, I regularly lead training sessions to help others address modern (web) security challenges. de.linkedin.c...
Hall D (Level -2)
 