Venue: Hall D (Level -2)
Thursday, June 25
 

9:00am CEST

Opening Remarks
Thursday June 25, 2026 9:00am - 9:15am CEST
Welcome to the OWASP Global AppSec EU 2026 conference! We are excited you are with us, not only to attend this amazing event, but also to celebrate our 25th anniversary!

Don't miss the opening remarks, where we welcome you and share a few key details to give you a roadmap to a successful time with us!
Hall D (Level -2)
  Keynote

9:15am CEST

Keynote: The Reinvention of Software Engineering
Thursday June 25, 2026 9:15am - 10:00am CEST
I don’t need to tell you that AI has changed software development forever. You know this. Whether you’re positive, negative or indifferent to this change, you can’t deny that the past 2 years have radically changed the role of the software developer.

As an industry we have been obsessed with velocity.

We wanted every second of “developer productivity” squeezed from every dev team and designed tools, processes and practices to create the right environment for that to happen.

The velocity is here. And we don’t know what to do with it.

Many of our tools, processes and practices simply don’t make sense any more. The “unblocking” of one bottleneck in the software development lifecycle has created new bottlenecks and pressure points. This is especially true for application security teams.

The seismic shift in software is only just getting started. I don’t offer a proven strategy to navigate this change; we are sailing these turbulent waters together. What I propose is that we go back to fundamentals, refocus on outcomes, and evaluate our options for evolving our software development lifecycle safely and securely.

Fundamentals like security and reliability are as important as ever, but how do we deliver on these commitments at the pace of AI coding? What practices are gone forever, and which do we need to keep?

In this talk I bring you these options: what I’ve seen work, what I’m ready to throw out, and, most importantly, the things I will keep no matter what.
Speakers
Hannah Foxwell

Product Director, Snyk
With over a decade of experience in DevOps, DevSecOps and Platform Engineering, Hannah Foxwell has always advocated for the human aspects of technology transformation and evolution. Hannah is relentlessly curious about the tools, technologies, processes and practices that make life…
Hall D (Level -2)

10:30am CEST

AI Explainability Scorecard
Thursday June 25, 2026 10:30am - 11:15am CEST
AI is tightening its grip on security operations, but when no one can explain what a system is doing, accountability breaks down and attackers gain the edge. Regulations like the EU AI Act now require AI systems to be transparent, yet most organizations lack a concrete way to measure what “transparent” actually means. The AI Explainability Scorecard fills that gap by providing a fast, practical way to assess whether an AI system is traceable and defensible, scoring it on faithfulness, comprehensibility, consistency, accessibility, and operational clarity, including for LLM-based systems. The takeaway is clear: if you cannot explain the results of your AI, it is running your business, not your people.
Speakers
Michael Novack

Solution Architect, Aiceberg

Michael is a product-minded security architect who loves turning tangled AI risks into clear, practical solutions. As Solution Architect at Aiceberg, he helps enterprises bake AI explainability and real-time monitoring straight into their systems, transforming real customer insights…
Hall D (Level -2)

11:30am CEST

Authorization Is Where Your App Goes to Lie
Thursday June 25, 2026 11:30am - 12:15pm CEST
Your authorization logic probably lives in code, while the rationale behind it lives only in people’s heads.

That’s why authorization breaks in familiar ways: a missing check, an incorrect assumption, a copied snippet that made sense in one endpoint but was entirely wrong for another.

This talk is about making authorization logic visible earlier, during design, so engineers have something concrete to implement and reviewers have something concrete to critique. We’ll walk through a lightweight, design-time template that turns “who should be able to do what” into a structured artifact that can later be translated into policy-as-code, tested, and enforced consistently.

No new tools required; the focus is on a design-time step that fits cleanly into architecture reviews and threat modeling, and makes authorization easier to get right.
Speakers
Eden Yardeni

Senior AppSec Engineer

Eden Yardeni works in application security, and contributes to OWASP projects including ASVS. She previously worked as a full-stack developer, but moved into application security when she heard there would be cookies.    linkedin.com/in/eden-yardeni/
Hall D (Level -2)

1:15pm CEST

The Map of Artificial Treasures: What to Automate in Security - and Why?
Thursday June 25, 2026 1:15pm - 2:00pm CEST
With the rise of AI, especially large language models, it seems every security workflow will soon be automated or heavily supported by automation - from LLM-powered threat-intelligence enrichment and compliance mappings to AI-written threat models, code fixes and complete CISO roadmaps. But which processes will truly benefit, and in which cases will AI just add cost, complexity, and risk? As security managers or leaders, how can we determine where to focus our efforts and investments upfront?

This talk presents a practical framework for evaluating the effectiveness of AI-driven automation in application security and related fields. First, we explore how to identify processes that are strong candidates for automation based on criteria such as repeatability, return on investment, and risk tolerance. Then, we map typical security processes to AI approaches, including large language models (LLMs), traditional machine learning, retrieval-augmented generation (RAG), and hybrid systems.

We will learn how these solutions are applied to critical security areas, such as vulnerability management, secure software development, threat detection, and compliance. We will explore an AI Capability Map, industry benchmarks, and real-world examples, such as the use of RAG-powered chatbots for security guidance and LLMs for compliance analysis. Our goal is to help you determine where AI would be a good fit for your organization and where you would likely see measurable value when applying it, so that you can make informed decisions. We will also examine the available data: in which areas of the industry is value already being recognized? Finally, we explore potential pitfalls, from fragile LLM implementations to poor risk modeling, and discuss how to avoid wasting resources.

Using industry data, real-world experience, and structured criteria, this talk provides security leaders and practitioners with more guidance in this rapidly evolving field.
Speakers
Michael Helwig

Senior Security Consultant, secureIO GmbH

I am a security consultant and founder of secureIO GmbH, a consulting company that focuses on building application security programs and advising clients from different industries on secure software development and compliance. I focus on DevSecOps, security testing, AI automation…
Hall D (Level -2)

2:15pm CEST

Human Rights Threat Modeling
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Security and privacy threat models are fundamental tools in AppSec, but in modern systems, such as Identity and Access Management (IAM) and AI, they fail to capture a growing class of threats: those that do not compromise the system but cause harm to people.

In this talk, we show why traditional threat models fail to capture these problems and how the limitation is not technical but cognitive. Human rights concepts are too abstract for many technicians, just as security was for developers before Threat Modeling became a facilitated and shared practice.

Through a concrete use case on IAM - extendable directly to AI systems - we present an approach that integrates Threat Modeling and harm modeling through a structured facilitation process, supported by cards and serious games.

The goal is not to turn developers into human rights experts but to make these threats visible, debatable, and mitigable using familiar AppSec tools.
Speakers
Giovanni Corti

Cybersecurity Researcher, FBK

Cybersecurity professional specializing in cyber threat intelligence and in threat modeling for security, privacy, and user safety in high-risk systems.
  linkedin.com/in/g-corti
Simone Onofri

Security Lead, W3C

Simone is the W3C Security Lead. He has 20+ years of expertise in red/blue teaming and web security. He has spoken at OWASP, TEDx, and other events and authored Attacking and Exploiting Modern Web Applications.    linkedin.com/in/simoneonofri
Luca Lumini

Executive Security Advisor

Executive Security Advisor with more than 20 years of consulting experience focusing on corporate cyber strategy and security risk advisory. As Chief Security Officer, Luca has been leading the Security Strategy and AI Innovation team for the AXA International Markets region. He is…
Hall D (Level -2)

3:30pm CEST

AI and the Threat Modeling Manifesto: Conflicts, Failure Modes, and Better Patterns
Thursday June 25, 2026 3:30pm - 4:15pm CEST
AI is becoming increasingly embedded in threat modeling processes. Some organizations now claim that threat modeling can be performed entirely by AI. This appears to be a natural progression, given the growing use of AI in software development itself.

Before the current wave of AI adoption, the Threat Modeling Manifesto (TMM) was developed, drawing inspiration from the Agile Manifesto. It distilled years of practitioner experience in application security into a short, actionable document. The TMM emphasizes values such as a culture of finding and fixing design issues, people and collaboration over tools, and a journey of understanding rather than a static security snapshot.

This talk examines how AI-assisted threat modeling can diverge from these values through five recurring anti-patterns. These include treating AI as the hero threat modeler, de-emphasizing human collaboration and input, prioritizing snapshots over the journey of understanding, delegating creativity to AI, and favoring exhaustive enumeration over deliberate discussion.

The session then explores three silent failure modes that frequently emerge in the presence of these anti-patterns: hallucination, automation bias, and the illusion of completeness. Together, they produce threat models that appear finished and authoritative, while concealing subtle errors, weakening shared understanding and ownership, and failing to create the motivation needed for people to act.

Finally, the talk synthesizes emerging best practices observed across real-world AppSec teams. These include using AI as a facilitator rather than an authority, designing explicitly for disagreement and multiple viewpoints, and structuring processes that increase meaningful human participation and understanding.

Attendees will leave with a practical framework for adopting AI-assisted threat modeling that helps teams avoid silent failures, preserve human judgment and collaboration, and use AI to generate output that gets understood and acted upon.

Speakers
avatar for Vikramaditya Narayan

Vikramaditya Narayan

Creator of The Precogly Open Source Threat Modeling Platform
Vikramaditya Narayan is the creator of Precogly, an open-source, enterprise-grade threat modeling platform built for compliance-aware security teams. Previously, he designed the prototype for a YC-funded AI governance platform. Vikramaditya leads the Bangalore chapter of Threat Modeling…
Hall D (Level -2)
 
Friday, June 26
 

9:00am CEST

Opening Remarks
Friday June 26, 2026 9:00am - 9:15am CEST
Welcome to the OWASP Global AppSec EU 2026 conference! We are excited you are with us, not only to attend this amazing event, but also to celebrate our 25th anniversary!

Don't miss the opening remarks, where we welcome you and share a few key details to give you a roadmap to a successful time with us!
Hall D (Level -2)
  Keynote

9:15am CEST

Keynote: We Live in the Future: The Death and Rebirth of Application Security
Friday June 26, 2026 9:15am - 10:00am CEST

Speakers
Gadi Evron

Founder and CEO, Knostic
Gadi Evron is Founder and CEO at Knostic, an AI agent security company, CISO-in-Residence for AI at CSA, and chairs the [un]prompted conference. Previously, he founded Cymmetria (acquired), was the Israeli National Digital Authority CISO, founded the Israeli CERT, and headed PwC's…
Hall D (Level -2)
  Keynote

10:30am CEST

From ASVS to APVS: What Changes When You Treat Privacy as a System Property?
Friday June 26, 2026 10:30am - 11:15am CEST
Privacy is increasingly expected to be “built in by design”, yet most privacy guidance remains legal, abstract, or disconnected from how systems are actually designed and reviewed. As a result, privacy is still treated as a compliance exercise rather than an engineering discipline.

In this talk, we share early lessons from the OWASP Privacy Project and our work on the Application Privacy Verification Standard (APVS). Drawing on familiar AppSec concepts such as ASVS, threat modeling, and weakness classification, we explore what changes when privacy is treated as a system property rather than a checkbox.

We discuss where traditional security controls fall short, how privacy risks can exist without attackers or breaches, and how we are translating high-level privacy principles into actionable guidance for architects and developers. This is not a finished standard, but a candid look at what works, what doesn’t, and where practitioner feedback is essential as the project evolves.
Speakers
Matthew Coles

Product Security Architect/Technologist

Matthew Coles is a Product Security Architect and Technologist with 20+ years of experience working with business leaders and developers to secure hardware and software systems and processes. He is a technical contributor to community standard initiatives such as OpenSSF and OWASP, a…
Kim Wuyts

Manager Cyber & Privacy, PwC Belgium

Dr. Kim Wuyts is a leading privacy engineer with over 15 years of experience in security and privacy. Before joining PwC Belgium as Manager Cyber & Privacy, Kim was a senior researcher at KU Leuven where she led the development and extension of LINDDUN, a popular privacy threat modeling…
Avi Douglen

Software Security Consultant, Bounce Security
Avi Douglen is the founder and CEO at Bounce Security, a boutique consultancy specializing in software security, where he spends a lot of time with development teams of all sizes. He helps them integrate security methodologies and products into their development processes, and often…
Hall D (Level -2)

11:30am CEST

Phishing for Passkeys - An Analysis of WebAuthn and CTAP
Friday June 26, 2026 11:30am - 12:15pm CEST
WebAuthn was supposed to replace passwords on the web: uniform, secure, manageable authentication for everyone! One of its key selling points was that it would make phishing attacks impossible. When Passkeys were introduced, some of WebAuthn's security principles were watered down to achieve usability improvements and thus reach more widespread adoption.

This presentation discusses the security of Passkeys against phishing attacks. It explains the possibilities for an attacker to gain access to accounts secured with Passkeys using spear phishing, and what conditions must be met for this to happen. It also practically demonstrates such an attack and discusses countermeasures.

Participants will learn which WebAuthn security principles still apply to Passkeys and which do not. They will learn why Passkeys are no longer completely phishing-proof and how to evaluate this trade-off for their own use of Passkeys.
Speakers
Michael Kuckuk

Fullstack Developer, inovex

As a fullstack software developer, Michael's main expertise lies in simple software development. But since he is well aware that the happy path is the easy part, he has always had an interest in security and has always been very security- and privacy-aware in his work. He enjoys developing…
Hall D (Level -2)

1:15pm CEST

AI-Generated Code vs Human Code: Who Really Writes More Vulnerabilities?
Friday June 26, 2026 1:15pm - 2:00pm CEST
When AI coding tools entered mainstream development, the application security community reacted quickly and loudly. Many warned that AI would dramatically increase vulnerabilities. The most common argument was simple and intuitive: AI models were trained on vast amounts of real-world code, including insecure and vulnerable code. Garbage in, garbage out. If AI learned from vulnerable code, it would inevitably reproduce those vulnerabilities at scale.

This claim quickly became accepted wisdom, despite the fact that almost no one could actually prove it.

This session presents a data-driven examination of that assumption. By correlating reported security vulnerabilities with automated line-level code attribution, we were able to determine whether a vulnerability originated in AI-generated code or human-written code. This allowed us to move the discussion from fear and intuition to measurable evidence.

The results are more nuanced and more interesting than the prevailing narrative suggests. In some scenarios, AI-generated code showed higher vulnerability density. In others, it performed comparably to, or even better than, human-written code. The differences are not accidental. They correlate strongly with the model used, the tooling, and how developers interact with AI, rather than AI usage alone.

This talk challenges the notion that AI coding is inherently insecure. It replaces the garbage-in, garbage-out argument with concrete data, identifies where the real risks actually emerge, and explains what this means for modern AppSec strategy. Attendees will leave with evidence they can use to recalibrate policies, controls, and conversations around AI-assisted development, without slowing teams down or relying on assumptions.
Speakers
Eitan Worcel

CEO & Co-Founder, Mobb

Eitan Worcel is the co-founder and CEO of Mobb. He has close to 20 years of experience in application security, spanning hands-on software development, product leadership, and executive roles. Throughout his career, Eitan has worked closely with engineering and security teams to understand…
Hall D (Level -2)

2:15pm CEST

Teaching AI Agents Like Guide Dogs: A Progressive Trust Framework
Friday June 26, 2026 2:15pm - 3:00pm CEST
Your AI agent has access to your database, your APIs, and your users' data. But would you give a new hire admin credentials on day one? We do this with AI agents constantly - deploying them with full system access before they've proven they won't hallucinate a DROP TABLE or leak sensitive data to a prompt injection attack.

Guide dog training programs solved this problem decades ago. They take untested puppies and transform them into autonomous agents trusted to make life-or-death decisions - through a systematic process of graduated trust. A guide dog doesn't get to navigate traffic until it's mastered basic commands. It doesn't work unsupervised until it's proven reliable across thousands of scenarios. And critically, it's trained in "intelligent disobedience" - knowing when to refuse a direct command because following it would cause harm.

In this talk, I'll introduce the Progressive Trust Framework - a practical approach to AI agent deployment inspired by 90+ years of service animal training. You'll learn how to implement graduated permission systems where agents earn expanded access through demonstrated reliability. We'll explore the "3 D's" testing methodology (Distance, Duration, Distraction) for validating agent behaviour before promotion. And we'll tackle the hardest problem: training agents that refuse harmful requests without becoming unhelpfully paranoid.

Whether you're building autonomous coding assistants, customer service bots, or internal automation tools, you'll leave with concrete patterns for deploying AI agents that earn trust instead of demanding it. Because the question isn't whether your AI agent will make mistakes - it's whether you've built the guardrails to catch them before they hit production.
Speakers
Bodhisattva Das

Security Engineer, RUDRA Cybersecurity

Bodhisattva Das is a Security Engineer at Rudra Cybersecurity, focused on securing non-human identities, AI agents, and automated workloads across cloud environments. He specialises in open-source threat detection using Wazuh, and builds practical solutions for identity governance…
Hall D (Level -2)

3:30pm CEST

Why IAM Remains a Challenge and What We Can Do About It
Friday June 26, 2026 3:30pm - 4:15pm CEST
Everyone expects Identity & Access Management to be a "set it and forget it" problem. But the reality looks quite different: the same challenges keep resurfacing; they are technically demanding, time-consuming, and frequently create friction between teams, ultimately resulting in significant costs. And the rise of AI agents makes it even worse.

Over the years, I explored these recurring issues, which led to a multi-part blog series (https://www.innoq.com/en/blog/2025/07/whats-wrong-with-the-current-owasp-microservice-security-cheat-sheet/) published in 2025, initially aimed at updating the OWASP Microservice Security Cheat Sheet. My goal was to show how well-known IAM building blocks can be combined into pragmatic, coherent, and operationally realistic solutions. That work eventually grew beyond the original scope and is becoming multiple new OWASP Cheat Sheets plus an entirely new architectural-level cheat sheet format.

In this talk I'll share the essence of the patterns and strategies I identified and documented, show how to avoid the usual traps, and explain how to reduce IAM complexity in distributed systems to create the space to focus on what we're actually building - the product.
Speakers
Dimitrij Drus

Senior Consultant, INNOQ

I work as a Senior Consultant at INNOQ Germany GmbH, focusing on security architecture and the design of secure distributed systems. With a strong passion for security, I regularly lead training sessions to help others address modern (web) security challenges.    de.linkedin.c…
Hall D (Level -2)
 