Venue: Hall D (Level -2)
Thursday, June 25
 

9:00am CEST

Opening Remarks
Thursday June 25, 2026 9:00am - 9:15am CEST
Welcome to the OWASP Global AppSec EU 2026 conference! We are excited you are with us, not only to attend this amazing event, but also to celebrate our 25th anniversary!

Don't miss the opening remarks, where we welcome you and share a few key details to set you up for a successful time with us!
Hall D (Level -2)
  Keynote

9:15am CEST

Keynote: The Reinvention of Software Engineering
Thursday June 25, 2026 9:15am - 10:00am CEST
I don’t need to tell you that AI has changed software development forever. You know this. Whether you’re positive, negative or indifferent to this change, you can’t deny that the past 2 years have radically changed the role of the software developer.

As an industry we have been obsessed with velocity.

We wanted every second of “developer productivity” squeezed from every dev team and designed tools, processes and practices to create the right environment for that to happen.

The velocity is here. And we don’t know what to do with it.

Many of our tools, processes and practices simply don’t make sense any more. The “unblocking” of one bottleneck in the software development lifecycle has created new bottlenecks and pressure points. This is especially true for application security teams.

The seismic shift in software is only just getting started. I don’t offer a proven strategy to navigate this change, we are sailing these turbulent waters together. What I propose is that we go back to fundamentals, refocus on outcomes and evaluate our options for evolving our software development lifecycle, safely and securely.

Fundamentals like security and reliability are as important as ever, but how do we deliver on these commitments at the pace of AI coding? What practices are gone forever, and which do we need to keep?

In this talk I bring you these options, what I’ve seen work, what I’m ready to throw out, and most importantly, the things I will keep no matter what.
Speakers

Hannah Foxwell

Product Director, Snyk
With over a decade of experience in DevOps, DevSecOps and Platform Engineering, Hannah Foxwell has always advocated for the human aspects of technology transformation and evolution. Hannah is relentlessly curious about the tools, technologies, processes and practices that make life…
Hall D (Level -2)

10:30am CEST

AI Explainability Score Card
Thursday June 25, 2026 10:30am - 11:15am CEST
AI is tightening its grip on security operations, but when no one can explain what a system is doing, accountability breaks down and attackers gain the edge. Regulations like the EU AI Act now require AI systems to be transparent, yet most organizations lack a concrete way to measure what “transparent” actually means. The AI Explainability Scorecard fills that gap by providing a fast, practical way to assess whether an AI system is traceable and defensible, scoring it on faithfulness, comprehensibility, consistency, accessibility, and operational clarity, including for LLM-based systems. The takeaway is clear: if you cannot explain the results of your AI, it is running your business, not your people.
Speakers

Michael Novack

Solution Architect, Aiceberg

Michael is a product-minded security architect who loves turning tangled AI risks into clear, practical solutions. As Solution Architect at Aiceberg, he helps enterprises bake AI explainability and real-time monitoring straight into their systems, transforming real customer insights…
Hall D (Level -2)

11:30am CEST

Authorization Is Where Your App Goes to Lie
Thursday June 25, 2026 11:30am - 12:15pm CEST
Your authorization logic probably lives in code, while the rationale behind it lives only in people’s heads.

That’s why authorization breaks in familiar ways: a missing check, an incorrect assumption, a copied snippet that made sense in one endpoint but was entirely wrong for another.

This talk is about making authorization logic visible earlier, during design, so engineers have something concrete to implement and reviewers have something concrete to critique. We’ll walk through a lightweight, design-time template that turns “who should be able to do what” into a structured artifact that can later be translated into policy-as-code, tested, and enforced consistently.

No new tools required; the focus is on a design-time step that fits cleanly into architecture reviews and threat modeling, and makes authorization easier to get right.
Speakers

Eden Yardeni

Senior AppSec Engineer

Eden Yardeni works in application security, and contributes to OWASP projects including ASVS. She previously worked as a full-stack developer, but moved into application security when she heard there would be cookies. linkedin.com/in/eden-yardeni/
Hall D (Level -2)

1:15pm CEST

The Map of Artificial Treasures: What to Automate in Security - and Why?
Thursday June 25, 2026 1:15pm - 2:00pm CEST
With the rise of AI, especially large language models, it seems every security workflow will soon be automated or heavily supported by automation - from LLM-powered threat-intelligence enrichment or compliance mappings to AI-written threat models, code fixes and complete CISO roadmaps. But which processes will truly benefit, and in which cases will AI merely add cost, complexity and risk? As security managers or leaders, how can we determine where to focus our efforts and investments upfront?

This talk presents a practical framework for evaluating the effectiveness of AI-driven automation in application security and related fields. First, we explore how to identify processes that are strong candidates for automation based on criteria such as repeatability, return on investment, and risk tolerance. Then, we map typical security processes to AI approaches, including large language models (LLMs), traditional machine learning, retrieval-augmented generation (RAG), and hybrid systems.

We will learn how these solutions are applied to critical security areas, such as vulnerability management, secure software development, threat detection, and compliance. We will explore an AI Capability Map, industry benchmarks, and real-world examples, such as the use of RAG-powered chatbots for security guidance and LLMs for compliance analysis. Our goal is to help you determine where AI would be a good fit for your organization and where you would likely see measurable value when applying it, so that you can make informed decisions. Also, we will examine the available data: In which areas of the industry is value already being recognized? We explore potential pitfalls, from fragile LLM implementations to poor risk modeling, and discuss how to avoid wasting resources.

Using industry data, real-world experience, and structured criteria, this talk provides security leaders and practitioners with more guidance in this rapidly evolving field.
Speakers

Michael Helwig

Senior Security Consultant, secureIO GmbH

I am a security consultant and founder of secureIO GmbH, a consulting company that focuses on building application security programs and advising clients from different industries on secure software development and compliance. I am focusing on DevSecOps, security testing, AI automation…
Hall D (Level -2)

2:15pm CEST

Human Rights Threat Modeling
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Security and privacy threat models are fundamental tools in AppSec, but in modern systems, such as Identity and Access Management (IAM) and AI, they miss a growing class of threats: those that do not compromise the system itself but cause harm to people.

In this talk, we show why traditional threat models fail to capture these problems and how the limitation is not technical but cognitive. Human rights concepts are too abstract for many technicians, just as security was for developers before Threat Modeling became a facilitated and shared practice.

Through a concrete use case on IAM - extendable directly to AI systems - we present an approach that integrates Threat Modeling and harm modeling through a structured facilitation process, supported by cards and serious games.

The goal is not to turn developers into human rights experts but to make these threats visible, debatable, and mitigable using familiar AppSec tools.
Speakers

Giovanni Corti

Cybersecurity Researcher, FBK

Cybersecurity professional specializing in cyber threat intelligence and in threat modeling for security, privacy, and user safety in high-risk systems.
linkedin.com/in/g-corti

Simone Onofri

Security Lead, W3C

Simone is the W3C Security Lead. He has 20+ years of expertise in red/blue Teaming and Web security. He has spoken at OWASP, TEDx, and other events and authored Attacking and Exploiting Modern Web Applications.    linkedin.com/in/simoneonofri

Luca Lumini

Executive Security Advisor

Executive Security Advisor with more than 20 years of consulting experience focusing on corporate cyber strategy and security risk advisory. As Chief Security Officer, Luca has been leading the Security Strategy and AI Innovation team for the AXA International Markets region. He is…
Hall D (Level -2)

3:30pm CEST

AI and the Threat Modeling Manifesto: Conflicts, Failure Modes, and Better Patterns
Thursday June 25, 2026 3:30pm - 4:15pm CEST
AI is becoming increasingly embedded in threat modeling processes. Some organizations now claim that threat modeling can be performed entirely by AI. This appears to be a natural progression, given the growing use of AI in software development itself.

Before the current wave of AI adoption, the Threat Modeling Manifesto (TMM) was developed, drawing inspiration from the Agile Manifesto. It distilled years of practitioner experience in application security into a short, actionable document. The TMM emphasizes values such as a culture of finding and fixing design issues, people and collaboration over tools, and a journey of understanding rather than a static security snapshot.

This talk examines how AI-assisted threat modeling can diverge from these values through five recurring anti-patterns. These include treating AI as the hero threat modeler, de-emphasizing human collaboration and input, prioritizing snapshots over the journey of understanding, delegating creativity to AI, and favoring exhaustive enumeration over deliberate discussion.

The session then explores three silent failure modes that frequently emerge in the presence of these anti-patterns: hallucination, automation bias, and the illusion of completeness. Together, they produce threat models that appear finished and authoritative, while concealing subtle errors, weakening shared understanding and ownership, and failing to create the motivation needed for people to act.

Finally, the talk synthesizes emerging best practices observed across real-world AppSec teams. These include using AI as a facilitator rather than an authority, designing explicitly for disagreement and multiple viewpoints, and structuring processes that increase meaningful human participation and understanding.

Attendees will leave with a practical framework for adopting AI-assisted threat modeling that helps teams avoid silent failures, preserve human judgment and collaboration, and use AI to generate output that gets understood and acted upon.

Speakers

Vikramaditya Narayan

Creator of The Precogly Open Source Threat Modeling Platform
Vikramaditya Narayan is the creator of Precogly, an open-source, enterprise-grade threat modeling platform built for compliance-aware security teams. Previously, he designed the prototype for a YC-funded AI governance platform. Vikramaditya leads the Bangalore chapter of Threat Modeling…
Hall D (Level -2)