Thursday, June 25
 

7:45am CEST

Women in AppSec Breakfast (Sign up Required)
Thursday June 25, 2026 7:45am - 8:45am CEST
You must already be registered for the conference, and sign-up for the breakfast is required.

Come enjoy a breakfast committed to making conference friends and friends for life (a.k.a. professional networking) at the Women in AppSec Breakfast, co-hosted by Tanya Janca, Juliane Reimann, Kim Wuyts, and Marisa Fagan.

RSVP now to enjoy great food, pick up your challenge coin early, and, if you choose, walk through the expo hall to start tackling the expo passport program and win prizes.
Thursday June 25, 2026 7:45am - 8:45am CEST
Terrace G of Austria Center

8:30am CEST

Coffee/tea
Thursday June 25, 2026 8:30am - 9:00am CEST
Thursday June 25, 2026 8:30am - 9:00am CEST
Expo Hall X1

8:30am CEST

OWASP Official Store: Come explore CyberSec games, OWASP books, and official merch
Thursday June 25, 2026 8:30am - 4:00pm CEST
Come visit our table in the Expo Hall for books, games, and merch
Thursday June 25, 2026 8:30am - 4:00pm CEST
  Bonus Track

9:00am CEST

Opening Remarks
Thursday June 25, 2026 9:00am - 9:15am CEST
Welcome to the OWASP Global AppSec EU 2026 conference! We are excited you are with us, not only to attend this amazing event, but also to celebrate our 25th anniversary!

Don't miss the opening remarks, where we welcome you and share a few key details that will give you a roadmap to a successful time with us!
Thursday June 25, 2026 9:00am - 9:15am CEST
Hall D (Level -2)
  Keynote

9:15am CEST

Keynote: The Reinvention of Software Engineering
Thursday June 25, 2026 9:15am - 10:00am CEST
I don’t need to tell you that AI has changed software development forever. You know this. Whether you’re positive, negative or indifferent to this change, you can’t deny that the past 2 years have radically changed the role of the software developer.

As an industry we have been obsessed with velocity.

We wanted every second of “developer productivity” squeezed from every dev team and designed tools, processes and practices to create the right environment for that to happen.

The velocity is here. And we don’t know what to do with it.

Many of our tools, processes and practices simply don’t make sense any more. The “unblocking” of one bottleneck in the software development lifecycle has created new bottlenecks and pressure points. This is especially true for application security teams.

The seismic shift in software is only just getting started. I don’t offer a proven strategy to navigate this change, we are sailing these turbulent waters together. What I propose is that we go back to fundamentals, refocus on outcomes and evaluate our options for evolving our software development lifecycle, safely and securely.

Fundamentals like security and reliability are as important as ever, but how do we deliver on these commitments at the pace of AI coding? What practices are gone forever, and which do we need to keep?

In this talk I bring you these options, what I’ve seen work, what I’m ready to throw out, and most importantly, the things I will keep no matter what.
Speakers
avatar for Hannah Foxwell

Hannah Foxwell

Product Director, Snyk
With over a decade of experience in DevOps, DevSecOps and Platform Engineering, Hannah Foxwell has always advocated for the human aspects of technology transformation and evolution. Hannah is relentlessly curious about the tools, technologies, processes and practices that make life... Read More →
Thursday June 25, 2026 9:15am - 10:00am CEST
Hall D (Level -2)

10:00am CEST

AM Break in Expo Hall
Thursday June 25, 2026 10:00am - 10:30am CEST

Thursday June 25, 2026 10:00am - 10:30am CEST

10:05am CEST

Hands-On: Building Security Guardrails for AI-Generated Code
Thursday June 25, 2026 10:05am - 12:05pm CEST
AI-assisted development is now responsible for a significant and growing portion of production code. However, most AppSec programs still treat AI as an external input to be scanned after code is written, rather than as a system that can be guided to produce safer code up front.

In this Practical On-Demand session, participants will explore a secure-by-construction approach to AI coding using Cursor-style rules and hooks. The POD is structured around short, repeatable activities rather than a linear workshop.
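As an illustration of the kind of guardrail the session explores (a sketch of our own, not the session's actual Cursor rules or any real tool's API), a check like this could run as a pre-merge hook over AI-generated code; the pattern list is entirely hypothetical:

```python
import re

# Hypothetical deny-list; a real guardrail program would maintain and
# version these rules centrally, per language and per repository.
RISKY_PATTERNS = {
    "hardcoded secret": re.compile(r"(api_key|password)\s*=\s*['\"][^'\"]+['\"]", re.I),
    "subprocess with shell": re.compile(r"shell\s*=\s*True"),
    "broad exception swallow": re.compile(r"except\s+Exception\s*:\s*pass"),
}

def review_snippet(code: str) -> list[str]:
    """Return the names of guardrail rules the snippet violates."""
    return [name for name, pattern in RISKY_PATTERNS.items() if pattern.search(code)]

# An AI assistant proposes a diff; the hook flags it before merge.
generated = 'api_key = "sk-123"\nsubprocess.run(cmd, shell=True)'
findings = review_snippet(generated)
```

The point of the secure-by-construction approach is that such rules can also be fed back to the assistant as instructions, so flagged patterns are avoided at generation time rather than caught afterwards.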
Speakers
avatar for David Archer

David Archer

Solution Architect, Endor Labs

David is a long-time software practitioner who has spent the last two decades building, breaking, and fixing software across development, product, and consulting roles. After repeatedly seeing security treated as an afterthought in fast-moving teams, he shifted full-time into application... Read More →
Thursday June 25, 2026 10:05am - 12:05pm CEST
Room -2.92 (Level -2)

10:05am CEST

Teaching Security Concepts Using Physical Analogies
Thursday June 25, 2026 10:05am - 12:05pm CEST
Understanding security fundamentals doesn’t have to be dry or abstract. In this interactive CF‑Pod, you’ll explore the core principles of confidentiality, integrity, and availability through surprising physical demonstrations and simple “magic‑like” activities that make each concept intuitive and memorable.

Each station focuses on one security principle and offers a short, hands‑on challenge that transforms an abstract idea into something you can see, touch, and explain to others. You can drop in for 10–15 minutes, try an activity, and walk away with a clear, practical analogy you can use in real‑world conversations with teammates and stakeholders.

Whether you're new to security or looking for better ways to teach it, this session will give you fun, effective tools for communicating the foundations of secure systems.
Speakers
MD

Mariia Denysenko

Cybersecurity Governance & Training Professional in IT, AI, and OT

Mariia is a cybersecurity governance and compliance professional with experience spanning IT security, AI security, and OT security. She focuses on developing secure processes, enabling teams, and translating complex security requirements into clear, actionable guidance.

Her backg... Read More →
Thursday June 25, 2026 10:05am - 12:05pm CEST
Room -2.92 (Level -2)

10:05am CEST

The Old But Unforgettable Key
Thursday June 25, 2026 10:05am - 12:05pm CEST
Application security failures often stem from small, everyday oversights that quietly accumulate into serious risk. This Practical On-Demand (POD) activity lets participants explore how those issues surface in real applications by actively engaging with a deliberately vulnerable web app.

Attendees can drop in at any time and participate in a self-paced, Capture the Flag (CTF) style challenge centred on investigation, experimentation, and problem solving. Starting from a minimal application with limited guidance, participants uncover and connect security weaknesses to progressively increase their level of access.

The activity is designed to be accessible to all experience levels. Newcomers can engage with individual challenges and learn core AppSec concepts, while more experienced practitioners can pursue deeper exploration and more complex exploitation paths. All scenarios are inspired by issues commonly encountered in real world development environments.

Facilitators are present throughout the session to support participants, answer questions, and provide short, optional walkthroughs for those without laptops. The emphasis remains on doing, discovery, and practical takeaways, ensuring participants leave with a stronger intuition for identifying risk and concrete guidance they can apply in their own applications.
Speakers
avatar for Raul Cicos

Raul Cicos

Security Consultant, Intruder

Raul is an experienced information security professional specialising in offensive security. He brings deep expertise across the full penetration testing lifecycle, from reconnaissance and vulnerability analysis to exploitation and clear, actionable reporting. His work focuses on... Read More →
TS

Tom Steer

Security Consultant, Intruder

Tom is an experienced security professional focused on offensive security, conducting high-quality penetration tests and identifying vulnerabilities across systems and applications. In his free time, he designs and hosts Capture The Flag (CTF) challenges using them to deepen his skills... Read More →
Thursday June 25, 2026 10:05am - 12:05pm CEST
Room -2.92 (Level -2)

10:30am CEST

OWASP masCon - Introduction to masCon by the OWASP MAS team
Thursday June 25, 2026 10:30am - 10:35am CEST

Speakers
avatar for Carlos Holguera

Carlos Holguera

OWASP Mobile App Security (MAS): MASVS, MASWE and MASTG, NowSecure
Carlos is a principal mobile security research engineer working with NowSecure and one of the core project leaders and authors of the OWASP Mobile Security Testing Guide (MASTG) and OWASP Mobile Application Security Verification Standard (MASVS), the industry standard for mobile app... Read More →
avatar for Sven Schleier

Sven Schleier

Co-Founder, Bai7 GmbH
Sven is a co-founder of Bai7 GmbH in Austria, which specializes in training and advisory services. He has expertise in cloud security, offensive security engagements (penetration testing) and application security, notably in guiding software development teams across mobile and web applications... Read More →
Thursday June 25, 2026 10:30am - 10:35am CEST
Room -2.33 (Level -2)

10:30am CEST

OpenCRE.org: Uniting all standards and guidelines
Thursday June 25, 2026 10:30am - 11:00am CEST
In security, it is important to understand the whole chain: from regulation to business risk, to requirement, to code example, to vulnerability, to test method, to tool configurations. However, so far there hasn’t been a solid way to interconnect standards, documentation, and tooling. Standards writers often work in isolation, and tooling authors rightly focus on quality results instead of comprehensive information about those results.

The open source initiative OpenCRE.org connects all these sources of information: it links topics across multiple standards, including the Top 10, ASVS, Proactive Controls, Testing Guide, Cheat Sheets, SAMM, SSDF, ISO 27001, CSA CCMv3, CWE, CAPEC, PCI DSS, NIST 800-53 and 63b. It further links code samples and offensive tooling configurations or rules. That way it serves as a universal translator, connecting every role involved: executive, compliance officer, procurement, architect, developer, and tester.

This talk takes you through how OpenCRE.org works, how we brought all these standards together, how we used AI in a revolutionary way, and how you can benefit in your work as a manager, builder, breaker, buyer, or standard maker!

The intended audience for this talk is anyone involved with Application Security and looking for an easy-to-use guide, mapping standards to regulations to code and configurations.
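The linking idea can be pictured with a toy example (our own sketch; the CRE ID and mappings below are illustrative, not taken from the real OpenCRE catalogue): a common requirement node links entries across standards, so any two standards can be cross-referenced through it.

```python
# Toy model of OpenCRE-style linking: one Common Requirement (CRE)
# node maps equivalent entries across several standards. IDs made up.
CRE_LINKS = {
    "CRE-616": {  # hypothetical "session termination" requirement
        "ASVS": "V3.3.1",
        "Top10": "A07:2021",
        "NIST-800-53": "AC-12",
    },
}

def translate(standard: str, entry: str, target: str):
    """Find the target-standard entry linked through the same CRE."""
    for links in CRE_LINKS.values():
        if links.get(standard) == entry:
            return links.get(target)
    return None

# An architect citing an ASVS item finds the compliance officer's control:
hit = translate("ASVS", "V3.3.1", "NIST-800-53")
```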
Speakers
avatar for Rob van der Veer

Rob van der Veer

Chief AI Officer, Software Improvement Group
Rob van der Veer is an AI pioneer with 33 years of AI experience, specializing in engineering, security and privacy. He is the lead author of the ISO/IEC 5338 standard on AI lifecycle, contributor to OWASP SAMM, co-founder of OWASP's digital bridge for security standards OpenCRE... Read More →
Thursday June 25, 2026 10:30am - 11:00am CEST
Room -2.82 (Level -2)

10:30am CEST

Why Isn't the Fix in My Container? Tracking CVE Propagation Across 10,000 Projects
Thursday June 25, 2026 10:30am - 11:15am CEST
We analyzed CVE remediation patterns across 10,000 open source projects to uncover a critical problem: vulnerabilities fixed upstream often take weeks or months to reach downstream containers. This lag creates massive security exposure windows in Kubernetes environments.

In this talk, we'll present our findings showing how CVE fixes flow (or stall) across ecosystem layers, from upstream projects to package managers to base images to final containers. You'll see real metrics on remediation delays, and the compounding effect of layered dependencies.

But we won't stop at the problem. The second half focuses on practical solutions, from automated patch backporting to in-place image patching with tools like Copa. You'll learn how to build workflows that dramatically reduce MTTR, including dependency automation patterns and risk-based prioritization.

Attendees will leave with both a data-driven understanding of the CVE remediation challenge and a practical playbook for fixing it.
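For a flavour of the metric behind such findings, here is a minimal sketch (our illustration, not the speakers' methodology) of computing the exposure window between an upstream fix release and the first patched downstream image:

```python
from datetime import date

def exposure_days(upstream_fix: date, downstream_patch: date) -> int:
    """Days a consumer of the downstream image stayed exposed after
    the upstream project had already shipped a fix."""
    return (downstream_patch - upstream_fix).days

# Hypothetical CVE: fixed upstream on Jan 10, first patched
# downstream container image published on Mar 2.
lag = exposure_days(date(2026, 1, 10), date(2026, 3, 2))
```

Aggregating this number per ecosystem layer (package manager, base image, final container) is what reveals where fixes stall.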
Speakers
avatar for Lior Kaplan

Lior Kaplan

Open Source evangelist, Open Source Security expert, Kaplan Open Source
As a Linux sysadmin for many years, Kaplan has been focused on Open Source & Security from various perspectives: upstream projects, Linux distributions, and the DevOps / platform engineering teams who maintain the infrastructure.
Kaplan is a long time Open Source community membe... Read More →
avatar for Mor Weinberger

Mor Weinberger

Software Architect, Echo

Mor is a Software Architect specializing in cloud-native security and software supply chain resilience. His work focuses on designing scalable systems to detect and mitigate emerging threats across modern cloud environments. Over the years, he has identified issues ranging from unsecured... Read More →
Thursday June 25, 2026 10:30am - 11:15am CEST
Hall K1 (Level -2)

10:30am CEST

Builders & Breakers Part II: Securing Agentic AI After the Death of LLM Wrappers
Thursday June 25, 2026 10:30am - 11:15am CEST
Last year at OWASP Global AppSec Barcelona, we showed how to break and defend LLM-integrated apps: (indirect) prompt injection, jailbreaks, data poisoning, and which practical controls actually worked in production. But the game has changed.

This follow-up talk picks up where we left off, focusing on the next generation of LLM-driven systems: agentic AI and protocols such as MCP (Model Context Protocol) and A2A (Agent2Agent). These systems combine LLMs with tools, memory, plugins, APIs, and planning loops, making them far more powerful, and also far more fragile.

We’ll walk through how this new architecture has shifted the attack surface, and why last year’s defences (input validation, injection prevention) don’t hold up anymore. Expect real-world attack paths: memory poisoning, tool misuse, and agent goal hijacking. Then we’ll show you what works: “Zero Trust”-style isolation, sandboxing tool execution, runtime plan validation, and defence patterns that are actually deployable.

This is not a theoretical talk. It’s a two-speaker format - builder and breaker - based on real-world incidents, internal and external red teaming, and live demos. If you’re building, securing, or reviewing AI-driven systems that do more than just chat, this is the session to see what’s coming and how to stay ahead.
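As a rough sketch of what "runtime plan validation" can mean in practice (our toy example with a made-up policy table, not the speakers' implementation), an agent's planned tool calls are checked against a per-task allow-list before anything executes:

```python
# Hypothetical per-task tool policy; a real deployment would derive
# this from the agent's declared capabilities and user consent.
POLICY = {
    "summarize_inbox": {"read_email"},
    "book_meeting": {"read_calendar", "create_event"},
}

def validate_plan(task: str, planned_calls: list[str]) -> list[str]:
    """Return the planned tool calls that fall outside the task's policy."""
    allowed = POLICY.get(task, set())
    return [call for call in planned_calls if call not in allowed]

# A poisoned email has steered the agent toward exfiltration; the
# validator catches the out-of-policy call before execution.
violations = validate_plan("summarize_inbox", ["read_email", "send_email"])
```

The design choice here is Zero Trust-flavoured: the plan is untrusted output, so it is validated against a policy defined outside the model's influence.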
Speakers
avatar for Javan Rasokat

Javan Rasokat

Senior Application Security Specialist, Sage

Javan is a DevOps Security Specialist at Sage, where he joined six years ago to lead Product Security for Central Europe and now supports products globally, contributing to the standardisation of security controls. He discovered his passion for security early in his career while identifying... Read More →
avatar for Rico Komenda

Rico Komenda

Senior Security Consultant

Rico is a senior product security engineer. His main security areas are in application security, cloud security, offensive security and AI security.

For him, general security intelligence in various aspects is a top priority. Today’s security world is constantly changing and you... Read More →
Thursday June 25, 2026 10:30am - 11:15am CEST
Hall G1 (Level -2)

10:30am CEST

AI Explainability Score Card
Thursday June 25, 2026 10:30am - 11:15am CEST
AI is tightening its grip on security operations, but when no one can explain what a system is doing, accountability breaks down and attackers gain the edge. Regulations like the EU AI Act now require AI systems to be transparent, yet most organizations lack a concrete way to measure what “transparent” actually means. The AI Explainability Scorecard fills that gap by providing a fast, practical way to assess whether an AI system is traceable and defensible, scoring it on faithfulness, comprehensibility, consistency, accessibility, and operational clarity, including for LLM-based systems. The takeaway is clear: if you cannot explain the results of your AI, it is running your business, not your people.
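A toy roll-up over the five named dimensions might look like this (the 1-5 rubric, equal weighting, and acceptance threshold here are our assumptions for illustration, not the actual Scorecard):

```python
DIMENSIONS = ["faithfulness", "comprehensibility", "consistency",
              "accessibility", "operational_clarity"]

def scorecard(scores: dict, threshold: int = 3):
    """scores maps each dimension to a 1..5 rating. Returns the overall
    average and the dimensions falling below the acceptance threshold."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    overall = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    gaps = [d for d in DIMENSIONS if scores[d] < threshold]
    return overall, gaps

# Assessing a hypothetical LLM-based triage system:
overall, gaps = scorecard({"faithfulness": 4, "comprehensibility": 2,
                           "consistency": 5, "accessibility": 3,
                           "operational_clarity": 4})
```

The value of any such scorecard is less the number itself than the gap list: it names exactly where an AI system stops being explainable.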
Speakers
avatar for Michael Novack

Michael Novack

Solution Architect, Aiceberg

Michael is a product-minded security architect who loves turning tangled AI risks into clear, practical solutions. As Solution Architect at Aiceberg, he helps enterprises bake AI explainability and real-time monitoring straight into their systems, transforming real customer insights... Read More →
Thursday June 25, 2026 10:30am - 11:15am CEST
Hall D (Level -2)

10:30am CEST

Why AppSec Fails at Scale (and How to Fix It)
Thursday June 25, 2026 10:30am - 11:15am CEST
As organizations grow, application security often becomes more painful but not more effective. Vulnerabilities recur, engineers feel blocked, and security teams struggle to scale. These failures are rarely caused by careless engineers or missing tools — they are symptoms of broken systems.

In this talk, we examine why AppSec fails to scale, particularly in growing teams and startups, and why adding more guidelines, scanners, or training usually makes the problem worse. Instead, let's approach application security as a sociotechnical system shaped by incentives, defaults, ownership boundaries, and feedback loops.

In this session, you will hear about common failure modes such as compliance-driven security, misplaced responsibility, and metrics that reward activity instead of risk reduction. Then hear about practical strategies for fixing the system: shifting security into platforms and defaults, reducing cognitive load for engineers, and aligning AppSec goals with delivery pressure and business constraints.
Speakers
avatar for Eduard Thamm

Eduard Thamm


Eduard is a technical leader with a background in distributed systems, platform engineering, and security. He works in regulated environments, designing Kubernetes-based platforms where reliability, compliance, and developer experience must coexist. His focus is on architecture under... Read More →
Thursday June 25, 2026 10:30am - 11:15am CEST
Hall K2 (Level -2)

10:30am CEST

Scanning Agentic AI Systems: Beyond Traditional LLM Red Teaming
Thursday June 25, 2026 10:30am - 11:15am CEST
As agentic AI systems evolve from simple LLM interfaces into autonomous, multi-agent workflows, their high autonomy creates a growing need for detailed risk assessment, and traditional LLM-focused red teaming is no longer enough. Unlike standalone LLMs with text input and output, agentic systems interact with tools, memory, external data, and other agents, creating many new attack surfaces. Attacks may be introduced through emails, tool descriptions, or environmental content, and their impact can go beyond model responses to affect system behavior and planning, and to trigger harmful real-world actions.

In this talk, we share our hands-on journey building a comprehensive red teaming scanning solution tailored for agentic AI systems. We begin by analyzing why current scanning tools fall short, specifically their emphasis on structured components (e.g., protocols like MCP, A2A, and Skills) while overlooking unstructured and highly dynamic attack vectors where most real-world risks emerge. We then walk through the technical challenges of simulating realistic attacks without harming production environments, handling the diversity of agent architectures, frameworks, and agency levels, and designing scanners that generalize across heterogeneous systems.

We present a practical full scanning pipeline that creates a novel holistic solution, including sandboxing and emulation strategies, automated system discovery pipelines, abstraction-based scanning mechanisms, and a risk-aware robustness scoring framework that goes beyond binary attack success. Throughout the talk, we highlight concrete lessons learned, trade-offs between cost and reliability, and real examples of agent-specific vulnerabilities.
We conclude with a concrete end-to-end scanning workflow and discuss open challenges such as adaptive scanner generation and black-box agent discovery. Attendees will leave with a deep understanding of why agentic AI requires fundamentally new red teaming methodologies and with actionable techniques for securing real-world autonomous AI systems.
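To make "beyond binary attack success" concrete, here is a minimal scoring sketch (our own toy example; the severity weights and formula are assumptions, not the speakers' framework): each attempted attack vector contributes its impact weight rather than a simple pass/fail count.

```python
# Hypothetical impact weights per attack outcome class.
SEVERITY = {"response_manipulation": 1, "tool_misuse": 3, "real_world_action": 5}

def robustness_score(findings: list, max_severity: int = 5) -> float:
    """findings: list of (vector, succeeded) pairs. Returns a 0..1 score
    where 1.0 means no weighted attack impact was observed."""
    if not findings:
        return 1.0
    weighted = sum(SEVERITY[vector] for vector, ok in findings if ok)
    worst_case = len(findings) * max_severity
    return round(1 - weighted / worst_case, 3)

# Two successful low/medium-impact attacks, one failed high-impact one:
score = robustness_score([
    ("response_manipulation", True),
    ("tool_misuse", True),
    ("real_world_action", False),
])
```

A binary metric would call this system "compromised" either way; a risk-aware score distinguishes a chatty model from one that books flights it shouldn't.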
Speakers
avatar for Roman Vainshtein

Roman Vainshtein

Research Director, GenAI Trust, Fujitsu Research of Europe

I am Research Director of the Generative AI Trust and Security Research team at Fujitsu Research of Europe, where I lead efforts to enhance the security, trustworthiness, and resilience of Generative AI systems. My work focuses on bridging the gap between AI security, red-teaming... Read More →
avatar for Amit Giloni

Amit Giloni

Principal Researcher, GenAI Trust team, Fujitsu Research

Dr. Amit Giloni is a Principal Researcher at Fujitsu Research of Europe, where she is part of the GenAI Trust team.
Her research spans multiple areas of machine learning, including classical ML, deep learning, generative AI, and agentic AI. She focuses on key challenges in trustworthy AI, such as bias and fairness, explainability, adversarial machine learning, robustness to abnormalities, and confidentiality... Read More →
avatar for Roy Betser

Roy Betser

Senior Researcher, GenAI Trust team, Fujitsu Research

Roy Betser is a PhD candidate at the Technion and a senior AI security researcher at Fujitsu Research of Europe, where he is part of the GenAI Trust team. His research focuses on analyzing representation and embedding spaces in foundation models and on developing practical trust and... Read More →
Thursday June 25, 2026 10:30am - 11:15am CEST
Hall G2 (Level -2)
  Testing

10:30am CEST

Meet the Mentor
Thursday June 25, 2026 10:30am - 11:45am CEST
One more Global AppSec event.
You’re taking training, you’re running between sessions, you’re connecting with people over coffee or when talking to a vendor.

What if you could use the event to also meet a potential mentor, or mentee?
What if you could connect face to face with someone who may help take your career to the next level, or someone you can help and make a difference for?

We are inviting you to an OWASP Global AppSec activity: Meet The Mentor! A speed-dating activity between potential mentors and mentees where you can come face to face and see if it “clicks”, start a conversation, and see if it is a match.
Speakers
avatar for Izar Tarandach

Izar Tarandach

Sr. Principal Architect, SiriusXM
Long-time security practitioner, Sr. Principal Security Architect at SiriusXM, previously at Datadog, Squarespace, Bridgewater Associates to DellEMC via RSA, Autodesk; startup founder, investor and advisor. Founding member of the IEEE Center for Secure Design, holds a masters degree... Read More →
avatar for Avi Douglen

Avi Douglen

Software Security Consultant, Bounce Security
Avi Douglen is the founder and CEO at Bounce Security, a boutique consultancy specializing in software security, where he spends a lot of time with development teams of all sizes. He helps them integrate security methodologies and products into their development processes, and often... Read More →
Thursday June 25, 2026 10:30am - 11:45am CEST
  Bonus Track

10:35am CEST

OWASP masCon - Let's get frooky: Structured Mobile DAST with Frida
Thursday June 25, 2026 10:35am - 11:25am CEST
Mobile application penetration tests can be challenging. In order to find vulnerabilities in the OWASP MAS Testing Profile L2, security testers have to simulate attacks on compromised devices. When apps protect themselves with advanced static and dynamic hardening techniques, security testers often rely on instrumentation in order to assess the security of the app at runtime.

This talk will present some of these challenges as seen in real world mobile apps and then present “frooky”, a Frida-powered hook runner based on structured I/O. This tool was consolidated together with OWASP MAS leadership and released as a standalone project for OWASP MASTG. We will show you what it can do, how it was developed and how you can use it for any mobile app penetration testing efforts in general.
Speakers
SB

Stefan Bernhardsgrütter

Lead Security Tester, Redguard
As a Security Tester at Redguard, Stefan puts a wide variety of IT systems, networks and applications to the test. He has an M.Sc. in Engineering with a focus on IT security and more than 10 years of experience in the field. At Redguard he is responsible for developing and maintaining... Read More →
avatar for Carlos Holguera

Carlos Holguera

OWASP Mobile App Security (MAS): MASVS, MASWE and MASTG, NowSecure
Carlos is a principal mobile security research engineer working with NowSecure and one of the core project leaders and authors of the OWASP Mobile Security Testing Guide (MASTG) and OWASP Mobile Application Security Verification Standard (MASVS), the industry standard for mobile app... Read More →
Thursday June 25, 2026 10:35am - 11:25am CEST
Room -2.33 (Level -2)

11:00am CEST

OWASP AI Testing Guide in Practice: Securing LLM Applications
Thursday June 25, 2026 11:00am - 11:30am CEST
This talk presents the OWASP AI Testing Guide as a practical extension of traditional application security methodologies for AI and LLM-based systems. It shows how AppSec engineers can systematically identify, model, and test AI-specific risks using an OWASP-aligned approach, rather than relying on ad hoc assessments or vendor claims.

The session starts with an architecture-driven threat modeling process for AI systems, decomposing LLM applications into application, model, data, and infrastructure layers. Using OWASP LLM Top 10 and threat modeling of AI System and Agent AI architectures, the talk demonstrates how AI attack surfaces and threat scenarios can be identified and mapped to concrete security risks. These threats are then mapped to testable hypotheses using the OWASP AI Testing Guide, bridging the gap between threat modeling and hands-on security testing.

Through real-world examples, the talk explores how common AI vulnerabilities manifest in practice, including prompt injection, jailbreak techniques, sensitive data exposure, model misalignment, hallucinations, RAG pipeline abuse, and agent workflow exploitation.
The audience will see how these issues can be tested in LLM-based applications using OWASP AITG test cases, OWASP LLM Top 10 payloads, and common AppSec and AI tooling.

The session concludes by showing how AI security testing can be integrated into MLSecOps. It highlights how organizations can move from intuition-based AI security to evidence-based risk validation, positioning OWASP AITG as a foundational methodology for securing AI systems within modern application security programs.

The key message of the talk is that trustworthy AI is not achieved through design assumptions or policy statements, but through systematic, repeatable testing aligned with OWASP principles.
Speakers
avatar for Matteo Meucci

Matteo Meucci

CEO, Synapsed.ai
Throughout his career, Matteo has played a pivotal role in the global cybersecurity community, particularly through his involvement with OWASP. He is the founder and leader of OWASP Italy and has contributed to the creation of foundational open-source projects such as the OWASP Testing Guide and the Software Security 5D Framework, establishing security standards that are now widely adopted worldwide.In the field of AI... Read More →
avatar for Marco Morana

Marco Morana

Field CISO- Head of Application & Product Security Architecture, Avocado Systems Inc.
Marco Morana is the Field CISO at Avocado Systems Inc., specializing in threat modeling automation and Zero Trust Architecture for financial services. With over 15 years of leadership experience, he has held senior security roles at JP Morgan Chase and Citi, securing financial applications... Read More →
Thursday June 25, 2026 11:00am - 11:30am CEST
Room -2.82 (Level -2)

11:30am CEST

OWASP masCon - Unveiling The Internals From Multiplatform Mobile Runtimes
Thursday June 25, 2026 11:30am - 11:55am CEST
Flutter, React Native and Unity are the main multiplatform runtimes of choice when developing mobile applications for iOS and Android. We will cover their main characteristics, starting with the programming language associated with each framework, the ecosystem and the toolchains, and showcase some clever low-level details of their implementations. We will also recover code and data from the final release binaries with the help of open-source plugins for radare2.
Speakers
avatar for Sergi Alvarez

Sergi Alvarez

Mobile Security Research Engineer, NowSecure
Pancake is a mobile security research engineer at NowSecure. He has more than 25 years of experience in the reverse engineering and security fields. Author and maintainer of tools like radare2, r2frida and other plugins around the radare ecosystem, he began working as a forensic analyst... Read More →
Thursday June 25, 2026 11:30am - 11:55am CEST
Room -2.33 (Level -2)

11:30am CEST

OWASP AI Security Verification Standard (AISVS)
Thursday June 25, 2026 11:30am - 12:00pm CEST
AI systems face threats that traditional application security standards weren't built to address. This includes prompt injection, training data poisoning, model extraction, agentic autonomy risks, and more. The OWASP AI Security Verification Standard (AISVS) provides 400+ testable requirements across 14 chapters, covering everything from input validation and model lifecycle management to MCP protocol security and autonomous agent controls. This lightning talk introduces the standard's structure, its three verification levels, and how security teams can use it today to assess and harden AI-powered applications. We'll show where AISVS fits alongside existing frameworks like ASVS, NIST AI RMF, and ISO 42001 and where it deliberately doesn't overlap.
Speakers
avatar for Otto Sulin

Otto Sulin

Head of Security, Supermetrics


avatar for Russ Memisyazici

Russ Memisyazici

Aras “Russ” Memişyazıcı, M.Sc. is a senior technology and architecture leader specializing in AI security, cloud transformation, application security, and enterprise modernization. He currently serves as a Global Head of Reference Architecture at Aon, where his work focuses... Read More →
avatar for Jim Manico

Jim Manico

Founder and CEO, Manicode Security
Jim Manico is the founder of Manicode Security, where he specializes in training software developers on secure coding and security engineering. He is actively involved in multiple ventures, serving as an investor/advisor for companies like 10Security, MergeBase, Nucleus Security... Read More →
avatar for Rico Komenda

Rico Komenda

Senior Security Consultant

Rico is a senior product security engineer. His main security areas are in application security, cloud security, offensive security and AI security.

For him, general security intelligence in various aspects is a top priority. Today’s security world is constantly changing and you... Read More →
Thursday June 25, 2026 11:30am - 12:00pm CEST
Room -2.82 (Level -2)

11:30am CEST

Actionable Continuous SBOM Diffing
Thursday June 25, 2026 11:30am - 12:15pm CEST
SBOMs are at the forefront of modern strategies to ensure supply chain security. However, traditional SBOM workflows leave two key problems unsolved: working with components that do not have well-established identifiers, and the introduction of malware into the supply chain.

This presents a significant gap between the expectations of SBOM adoption and the real value it can deliver. This talk will explore the concept of applying continuous SBOM diffing as part of the CI process. Rather than analyzing an SBOM for each release as a standalone artifact, we can compute diffs and take actions based on whether something has changed from the previous component release.

This approach makes all SBOM components actionable, even those that otherwise seem meaningless. For example, if an individual file that is not part of any library appears in an SBOM, legacy approaches make it difficult to reason about such a file. However, with continuous SBOM diffing, tracking changes in such components becomes meaningful and therefore actionable. For example, if a new component file appears with an unknown origin, we can sanitize the build and conduct additional investigations into what happened.

We will also demonstrate practical examples of how to achieve such actionable workflows using open-source tooling.
Speakers
avatar for Pavel Shukhman

Pavel Shukhman

CEO, Reliza

Pavel Shukhman is Co-Founder and CEO of Reliza, where he oversees the company's efforts in managing software and hardware releases, xBOMs, versioning and component identification. With over a decade of experience leading software teams, he has helped organizations implement DevOps... Read More →
Thursday June 25, 2026 11:30am - 12:15pm CEST
Hall K1 (Level -2)

11:30am CEST

The OWASP Top Ten 2025
Thursday June 25, 2026 11:30am - 12:15pm CEST
The OWASP Top Ten has been one of the most influential resources in application security for more than two decades — shaping training, security programs, and procurement decisions around the world. In this session, we’ll unveil the newest edition of the OWASP Top Ten Critical Risks to Web Applications, explain how it was built through community input and real-world data, and show what these changes mean for you.

We will cover all ten risks, focusing more time on the new and expanded items, as well as covering 3 ‘honourable mentions’ (#11, #12, and one that we do not have data to support). We’ll wrap up with practical guidance on how to use the Top Ten in your own programs (not as a compliance checklist, but as a strategic awareness tool).

Whether you’re an application security engineer, developer, or in management, this is your chance to get ahead of the curve and help shape the conversation: the draft is open for comment, and your feedback will make a difference.
Speakers
avatar for Tanya Janca

Tanya Janca

Security Trainer and Founder, She Hacks Purple & DevSec Station
Tanya Janca, known online as SheHacksPurple, is the best-selling author of Alice and Bob Learn Secure Coding and Alice and Bob Learn Application Security. She is the founder of DevSec Station, a modern learning platform and community built to help software developers master secure... Read More →
avatar for Torsten Gigler

Torsten Gigler

Internal IT Security Advisor, OWASP Volunteer

Torsten Gigler has been an Internal IT Security Advisor in a large-scale enterprise for more than 25 years (application and ICT infrastructure security). He has been volunteering for OWASP for more than 13 years. Among other things, Torsten has been
* co-lead of the OWASP Top 10 project since 2017... Read More →
Thursday June 25, 2026 11:30am - 12:15pm CEST
Hall G1 (Level -2)

11:30am CEST

Authorization Is Where Your App Goes to Lie
Thursday June 25, 2026 11:30am - 12:15pm CEST
Your authorization logic probably lives in code, while the rationale behind it lives only in people’s heads.

That’s why authorization breaks in familiar ways: a missing check, an incorrect assumption, a copied snippet that made sense in one endpoint but was entirely wrong for another.

This talk is about making authorization logic visible earlier, during design, so engineers have something concrete to implement and reviewers have something concrete to critique. We’ll walk through a lightweight, design-time template that turns “who should be able to do what” into a structured artifact that can later be translated into policy-as-code, tested, and enforced consistently.

No new tools required; the focus is on a design-time step that fits cleanly into architecture reviews and threat modeling, and makes authorization easier to get right.
Speakers
avatar for Eden Yardeni

Eden Yardeni

Senior AppSec Engineer

Eden Yardeni works in application security, and contributes to OWASP projects including ASVS. She previously worked as a full-stack developer, but moved into application security when she heard there would be cookies.    linkedin.com/in/eden-yardeni/
... Read More →
Thursday June 25, 2026 11:30am - 12:15pm CEST
Hall D (Level -2)

11:30am CEST

Admission of Guilt: I Exploited a Parking System for a Year (And What It Taught Me About AppSec)
Thursday June 25, 2026 11:30am - 12:15pm CEST
If you’ve ever wanted to make AppSec relatable to your developers, your business stakeholders, etc…

If you want to hear an example of security flaws in a digital-physical system and how AppSec practices apply…

If you want to hear a funny story about my student-years shenanigans and maybe reminisce about your own…

Then this is the talk for you.

Security is often taught through theory, but some of the most powerful lessons come from lived experience, even when that experience involves some very questionable ethics.

I will share with you the story of how I, a broke university student, reverse engineered and exploited a parking system to get free parking for a whole school year.

But this talk isn’t just a funny story: it’s about the AppSec lessons it taught me, and the realization that AppSec failures can have an impact on the physical world, and will even more so in the future as our physical environments become more intertwined with technology. This example is minor and relatively harmless, but in a different setting the implications could have been far more serious.

We’ll dissect this real-world exploit and how the vulnerabilities directly map to application security. Then each aspect will be mapped to the relevant CWEs, OWASP Top 10 categories and OWASP SAMM practices.

I will leave you with one activity that would likely have prevented the issues in the aforementioned system, and that I believe should be implemented in all organizations without exception.
Speakers
avatar for Dimitar Raichev

Dimitar Raichev

Software Security Engineer, Codific
I am a software security engineer at Codific, where my responsibilities include the design and development of SAMMY — a Secure SDLC management tool that supports numerous security and quality frameworks such as SAMM, SSDF, CSF, multiple ISO standards, etc.
In this capacity, I be... Read More →
Thursday June 25, 2026 11:30am - 12:15pm CEST
Hall K2 (Level -2)

11:30am CEST

Developing Effective Security Testing Skills with Objective Structured Assessments
Thursday June 25, 2026 11:30am - 12:15pm CEST
Technical skill development and evaluation for application (software) security testers remains underdeveloped. There is no widely adopted framework defining core competencies, proficiency levels, or objective assessment criteria. In the absence of such standards, the industry has defaulted to a fragmented ecosystem of private organizations offering training and certifications that insufficiently prepare the next generation of security testers for real-world testing.

This environment disproportionately rewards those who benefit from exceptional mentorship or possess the time, resources, and aptitude for intensive self-directed learning. The popular mantra “Try Harder” reflects this culture of self-made expertise, but it also serves as a substitute for formalized training models. Further, aspiring security professionals are largely left to fend for themselves.

In contrast, more mature, life-critical disciplines that demand high levels of technical skill (such as aviation and surgery) are built upon standardized curricula, clearly defined skill progressions, and objective methods for evaluating competence. This is not by chance; over many decades, these (and related) fields have homed in on how to achieve optimal outcomes through evidence-based training programs and practices.

In this talk, we will examine the past, present, and prospective future of application security tester training in comparison to more mature professions that demand a high level of technical skill. We will introduce a novel framework for evaluating technical skills and demonstrate its application in combination with a comprehensive AppSec curriculum. Both the assessment framework and the curriculum will be released to the open-source community at the time of presentation.
Speakers
avatar for Ryan Armstrong

Ryan Armstrong

AppSec Manager, Tester, and Teacher, Digital Boundary Group (DBG)
Ryan Armstrong is the Manager of Application Security Services at Digital Boundary Group (DBG). Ryan began with DBG as an application penetration tester and security consultant following completion of his PhD in Biomedical Engineering at Western University in 2016. With a passion... Read More →
Thursday June 25, 2026 11:30am - 12:15pm CEST
Hall G2 (Level -2)

12:15pm CEST

Lunch in Expo Hall
Thursday June 25, 2026 12:15pm - 1:15pm CEST
Thursday June 25, 2026 12:15pm - 1:15pm CEST
Expo Hall X1

12:15pm CEST

Cybersecurity Awareness Card Game : Let's Play
Thursday June 25, 2026 12:15pm - 2:15pm CEST
Learn the foundations of cybersecurity through a card game.

Participate in a tabletop, technology-free “capture the flag” experience where players gain practical insights into protecting digital information, responding to cyberattacks, and understanding core concepts such as the Cyber Kill Chain and the NIST Cybersecurity Framework.

For less experienced practitioners, the game builds a strong foundational mindset to support their ongoing cybersecurity journey. For more experienced practitioners, it offers a fresh, engaging way to communicate and teach core cybersecurity concepts. This makes cybersecurity more accessible and approachable for others.
Speakers
avatar for Michael Novack

Michael Novack

Solution Architect, Aiceberg

Michael is a product-minded security architect who loves turning tangled AI risks into clear, practical solutions. As Solution Architect at Aiceberg, he helps enterprises bake AI explainability and real-time monitoring straight into their systems, transforming real customer insights... Read More →
Thursday June 25, 2026 12:15pm - 2:15pm CEST
Room -2.92 (Level -2)

12:15pm CEST

Hunting Critical CVEs: A Hands-On, Pick-Your-Own Exploitation POD
Thursday June 25, 2026 12:15pm - 2:15pm CEST
New CVEs are released constantly, but in practice most teams never go beyond reading the advisory or relying on automated scanning. This POD is designed to change that by giving participants the time and a platform to hunt and exploit real-world critical CVEs.

Participants will have access to 10 hands-on challenges, each based on a real high or critical severity CVE commonly found in modern applications. Each challenge runs within a limited time window and can be attempted independently of the others.

For each challenge, participants can click a Deploy Lab option to spin up a temporary target system. The deployed application/system contains a CVE that is not disclosed to the participant, whose task is to identify the vulnerability, understand its behavior, and exploit it to demonstrate impact.

There is no fixed order or linear walkthrough. Participants are free to choose which CVEs to attempt, how deep they want to go with each one, and how long they want to stay in the activity. Some CVEs will allow participants to become admin, some might give a reverse shell. Labs are provisioned on demand using infrastructure-as-code, allowing participants to work independently on each challenge.

Some participants may focus on understanding a single CVE and reproducing it reliably. Others may try to exploit multiple issues or explore alternate attack paths. Both approaches are expected and encouraged.

The emphasis of this POD is on building practical intuition: how to read advisories critically, identify vulnerable attack surfaces, validate exploitability, and understand real impact beyond severity scores. The activity is fully hands-on, informal, and designed so people can join and leave at any time without falling behind.
Speakers
avatar for Abhinav Mishra

Abhinav Mishra

Founder, Cyber Security Guy

Abhinav Mishra is a cyber security practitioner with over 14 years of hands-on experience in vulnerability research, offensive security, and application security testing. He has carried out 1,000+ security reviews and penetration tests across web, mobile, API, and cloud-based systems... Read More →
Thursday June 25, 2026 12:15pm - 2:15pm CEST
Room -2.92 (Level -2)

12:15pm CEST

“2001: Agentic Odyssey” When threat modelling meets HAL, agentic AI, testing and safety engineering
Thursday June 25, 2026 12:15pm - 2:15pm CEST
“2001: Agentic Odyssey” is a hands-on, drop-in POD where we threat model the HAL 9000 system from 2001: A Space Odyssey as if it were a modern agentic AI system (LLM + tools + permissions + side effects). I bring a HAL DFD, and together we mark trust boundaries and do classic “what can go wrong?” threat identification. Participants then split into small groups to build attack-tree branches and translate them into Fault Tree Analysis (FTA) using AND/OR logic and minimal cut sets, including lightweight probability estimates to prioritise the most likely failure chains. We finish by turning those failure paths into automation-ready test ideas (fault injection, invariants, evidence), and optionally drafting a structured HAL threat model for submission to the OWASP Threat Model Library. Designed so anyone can contribute in 10-15 minutes, while advanced participants can go deep on FTA and prioritisation. Every stage is split into a way to enable drop-ins at any time.
Speakers
avatar for Petra Vukmirovic

Petra Vukmirovic

Head of Information Security at Numan and Fractional Head of Product, Devarmor

Petra is a technology enthusiast, leader and public speaker. A former emergency medicine doctor and competitive volleyball athlete, she thrives in challenging environments and loves creating order from chaos. Initially pursuing a medical career, Petra's passion for technology led... Read More →
Thursday June 25, 2026 12:15pm - 2:15pm CEST
Room -2.92 (Level -2)

1:15pm CEST

OWASP masCon - Recent Mobile App Security Incidents from Real-World Cases
Thursday June 25, 2026 1:15pm - 1:40pm CEST
This is a review of recent mobile app security incidents I work on day to day. We’ll walk through concrete cases from banking, food delivery, and e-commerce to break down how the breaches happened.

By the end, you’ll have a clearer sense of which security practices hold up in modern mobile apps and which ones fail in practice. You’ll also learn what commonly introduces vulnerabilities and where to find secure practices that actually work.
Speakers
avatar for Jan Seredynski

Jan Seredynski

Mobile Application Security Engineer, Guardsquare

Jan Seredynski is a mobile security professional with seven years of app development experience. He specializes in secure architectures and anti-tampering techniques. With a keen eye for uncovering vulnerabilities, Jan actively contributes to identifying and resolving CVEs and bugs... Read More →
Thursday June 25, 2026 1:15pm - 1:40pm CEST
Room -2.33 (Level -2)

1:15pm CEST

OWASP ModSecurity
Thursday June 25, 2026 1:15pm - 1:45pm CEST
As the cornerstone of open-source Web Application Firewalls, OWASP ModSecurity has protected the web for decades. However, maintaining its relevance in today’s evolving threat landscape requires more than just incremental updates—it requires a fundamental modernization. This presentation dives deep into the recent engineering efforts aimed at transforming the ModSecurity codebase into a leaner, more robust, and future-proof security engine.

Key highlights include:

* Code Quality & Refactoring: How we addressed technical debt and implemented stricter development standards.

* New Features: A look at the latest functionalities designed to counter sophisticated web attacks.

* Dependency Management: The rationale behind removing abandoned libraries and the technical challenges involved.

* The Path to a New Version: Why a major version update became necessary and what it means for the community.

* Beyond the Code: A brief look at the supporting ecosystem, including the complete renewal of the official website and documentation.

Attendees will gain a clear understanding of the architectural decisions shaping the next era of ModSecurity and what to expect from the upcoming releases.
Speakers
avatar for Ervin Hegedus

Ervin Hegedus

Project Co-Lead, OWASP ModSecurity
I'm 54, a system and software engineer: ModSecurity contributor since 2017, Core Rule Set developer since 2019, OWASP member since 2021, and project co-lead since 2024.
Thursday June 25, 2026 1:15pm - 1:45pm CEST
Room -2.82 (Level -2)

1:15pm CEST

One IDE to Rule Them All - Securing Your Supply Chain’s Weakest Link
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Your API keys, business logic, database connections, and sometimes even customer data and user information might all be directly accessible from your IDE. That puts the IDE among the top targets for threat actors trying to break in.

Because the IDE has direct access to so much data, your entire software supply chain is only as secure as a single extension, making the IDE the weakest link in the chain.

It takes only one evil extension, one vulnerability, or one prompt to compromise your entire organization. We will explore how each of these attack scenarios can turn a developer’s workspace into a gateway for threat actors to exfiltrate customer data before a single line of code is even written.

We’ll dive deep into the IDE’s architecture, starting with how IDE extensions are developed and their permission stack, then show how threat actors could manipulate extensions and IDE configurations to bypass security measures: exfiltrating valuable information from the developer’s IDE, performing lateral movement directly after infection, and remaining persistent even after being removed.
It's not just about threat actors hacking your IDE - they will go after everything in the organization that’s connected to it, and they will try to stay there as long as possible.

We’ll take a look at how threat actors could leverage vulnerabilities in existing IDE extensions to execute remote code and exfiltrate information, transforming a developer's local machine into an under-the-radar backdoor into your organization. This includes our discovery of multiple 0-day vulnerabilities in popular IDE extensions, and our research into weaponizing Chromium 1-day vulnerabilities on Cursor and Windsurf.

We’ll wrap up with best-practice recommendations for securing your IDE: avoiding evil extensions, establishing company-wide policies for approved extensions, and showing security teams how to integrate IDE security into their organization at scale.
Speakers
avatar for Moshe Siman Tov Bustan

Moshe Siman Tov Bustan

Security Research Team Leader, OX Security

Moshe is a Security Research Team Lead at OX Security, a company specializing in software supply chain security, and has worked in the security industry for 13 years. His work spans cloud security research, container security, memory forensics, and an in-depth understanding of programming... Read More →
avatar for Nir Zadok

Nir Zadok

OX Security

Nir Zadok is a rocket scientist who got a bit bored, so he moved to cybersecurity. Since then, as a Whitehat, he has managed to break dozens of mobile, web, and desktop applications. These days Nir is focused on software supply chain and innovative attack vector research via widely... Read More →
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Hall K1 (Level -2)

1:15pm CEST

Retiring CVE Chasing: Defending Against Application Exploit Techniques
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Vulnerability scanners are everywhere. CVE databases are growing exponentially. Yet vulnerability exploitation has surpassed phishing as the leading initial access vector. What's going wrong?

The problem isn’t a lack of vulnerability data – it’s that defenders are solving last year’s problems. While teams drown in CVE backlogs, attackers use AI to rapidly weaponize exploit techniques that work across vulnerability classes. OS command injection, deserialization, and path traversal aren't just individual CVEs – they're attack patterns that persist regardless of patch status.

This session introduces the Application Attack Matrix, the first comprehensive, community-driven framework mapping tactics, techniques, and procedures used against modern applications. Built by contributors from Mandiant, Microsoft, AWS, and Meta, it does for application security what MITRE ATT&CK did for enterprise defense.

You’ll learn how to shift from reactive CVE remediation to proactive technique-based defense, understand which exploit patterns dominate real-world attacks, and prioritize security controls that protect against entire attack classes, not just individual CVEs.
Speakers
avatar for Idan Elor

Idan Elor

Field CTO, Oligo Security

Idan Elor is Field CTO at Oligo Security, where he partners with large enterprises to solve complex application and cloud security challenges. He most recently served as Director of Solution Engineering & Tech-Alliances at Apiiro, where he empowered enterprises to secure their software... Read More →
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Hall G1 (Level -2)

1:15pm CEST

The Map of Artificial Treasures: What to Automate in Security - and Why?
Thursday June 25, 2026 1:15pm - 2:00pm CEST
With the rise of AI, especially large language models, it seems every security workflow will soon be automated or heavily supported by automation - from LLM-powered threat-intelligence enrichment or compliance mappings to AI-written threat models, codefixes and complete CISO roadmaps. But which processes will truly benefit, and in which cases will AI just increase the risk of adding cost and complexity? As security managers or leaders, how can we determine where to focus our efforts and investments upfront?

This talk presents a practical framework for evaluating the effectiveness of AI-driven automation in application security and related fields. First, we explore how to identify processes that are strong candidates for automation based on criteria such as repeatability, return on investment, and risk tolerance. Then, we map typical security processes to AI approaches, including large language models (LLMs), traditional machine learning, retrieval-augmented generation (RAG), and hybrid systems.

We will learn how these solutions are applied to critical security areas, such as vulnerability management, secure software development, threat detection, and compliance. We will explore an AI Capability Map, industry benchmarks, and real-world examples, such as the use of RAG-powered chatbots for security guidance and LLMs for compliance analysis. Our goal is to help you determine where AI would be a good fit for your organization and where you would likely see measurable value when applying it, so that you can make informed decisions. Also, we will examine the available data: In which areas of the industry is value already being recognized? We explore potential pitfalls, from fragile LLM implementations to poor risk modeling, and discuss how to avoid wasting resources.

Using industry data, real-world experience, and structured criteria, this talk provides security leaders and practitioners with more guidance in this rapidly evolving field.
Speakers
avatar for Michael Helwig

Michael Helwig

Senior Security Consultant, secureIO GmbH

I am a security consultant and founder of secureIO GmbH, a consulting company that focuses on building application security programs and advising clients from different industries on secure software development and compliance. I am focussing on DevSecOps, security testing, AI automation... Read More →
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Hall D (Level -2)

1:15pm CEST

The Velocity Paradox: Why Slow is Smooth and Smooth is Fast in AppSec
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Many AppSec programs fail because they try to run before they can walk. But in a world of an ever-changing attack surface, the truth is: slow is smooth, smooth is fast, and 'smooth' is how we actually ship secure software at the speed of business.

This presentation outlines our multi-phased methodology for establishing an AppSec program, an approach that emphasizes incremental, measurable, and sustainable goals throughout the journey. I will share the ‘why, what and how’ of each major business-tailored adoption of frameworks like OWASP SAMM, the Security Champions Guide, and open-source solutions. This talk covers both the cultural and technical aspects of the program, ranging from pushback from development, to customization of language-specific SAST policies, to measuring value with KPIs.

Application security practitioners will be able to use the strategy shared in this talk to build and scale the AppSec program aligned with their business goals.
Speakers
avatar for Pramod Rana

Pramod Rana

Sr. Manager - Application Security Assurance, Netskope

Pramod Rana is the author of the following open-source projects:
1) Omniscient - LetsMapYourNetwork: a graph-based asset management framework
2) CICDGuard - Orchestrating visibility and security of CICD ecosystem
3) vPrioritizer - Art of Risk Prioritization: a risk prioritization framework

He ha... Read More →
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Hall K2 (Level -2)

1:30pm CEST

Private Board Meeting
Thursday June 25, 2026 1:30pm - 2:30pm CEST

Thursday June 25, 2026 1:30pm - 2:30pm CEST
Room -2.15 (Level -2)

1:45pm CEST

OWASP masCon - Meet the New Frida Frontend on the Block
Thursday June 25, 2026 1:45pm - 2:10pm CEST
This talk introduces a new Frida frontend for macOS and iOS, designed as an interactive, persistent environment for exploring live processes.

It supports local and remote targets, long-lived sessions that survive crashes, and saved documents you can return to later. Built around this core model are a REPL, a code tracer, a powerful editor with completion and inline documentation, a persistent notebook, package management, and built-in collaboration.

We’ll walk through the motivation and architecture behind the frontend, and demo how a more stateful, GUI-driven approach opens up new workflows for dynamic instrumentation—without naming names (yet).
Speakers
avatar for Ole André Vadla Ravnås

Ole André Vadla Ravnås

Security Researcher, NowSecure
Creator of Frida · Security Researcher at NowSecure
 @oleavr
no.linkedin.com/in/oleavr... Read More →
Thursday June 25, 2026 1:45pm - 2:10pm CEST

1:45pm CEST

OWASP KubeFIM: Detecting File Integrity Threats with eBPF & AI in Kubernetes
Thursday June 25, 2026 1:45pm - 2:15pm CEST
Introduction

File Integrity Monitoring is still a critical part of runtime security, but in Kubernetes it comes with new challenges. A single cluster can generate thousands of file system events per second across containers, nodes, and workloads. While eBPF allows us to safely and efficiently capture these events at the kernel level, interpreting them remains a hard problem.

OWASP KubeFIM AI is built to address this gap.

This session presents how KubeFIM AI sits on top of the OWASP KubeFIM Agent and analyzes kernel-level File Integrity Monitoring events collected via eBPF. Instead of treating each event as an alert, KubeFIM AI focuses on reasoning over events by correlating them with Kubernetes context such as pods, namespaces, images, and workload behavior.

Technical Details and Future Roadmap

The talk will cover:

1. Why raw eBPF-based FIM events are difficult to use at scale

2. What kernel-level file operations actually tell us during real attacks

3. How KubeFIM AI models file behavior over time instead of reacting to single events

4. Using Kubernetes context to distinguish expected behavior from suspicious activity

5. How AI can reduce noise, explain intent, and improve triage without hiding technical details

Rather than using a generic large language model, KubeFIM AI is designed around a domain-specific approach, trained to understand file system behavior, container lifecycles, and Kubernetes runtime patterns. The focus is on producing human-readable security insights.

The session will also discuss the roadmap for the project, including plans to improve detection accuracy, reduce alert fatigue, and assist security teams with faster incident response in cloud-native environments.

Why KubeFIM AI Is Not a SIEM Replacement

KubeFIM AI is not designed to replace a SIEM. It solves a different problem at a different layer of the stack.

SIEM platforms focus on collecting, storing, and correlating logs and alerts from many sources across an organization. They are built for visibility, compliance, long-term retention, and investigation across applications, cloud services, networks, and users.

KubeFIM AI operates much closer to the system. It works at the Linux kernel level using eBPF to observe file system behavior inside Kubernetes nodes and containers. Its primary role is to generate high-quality runtime security signals, not to aggregate logs or manage incidents.

The project intentionally avoids becoming a central log store or alerting platform. Instead, it focuses on understanding why a file change occurred, whether it matches expected workload behavior, and whether it may indicate a security issue. This analysis happens before data is sent anywhere else.

In practice,
Speakers
avatar for Abhijit Chatterjee

Abhijit Chatterjee

Co-Founder of Cyber Secure India (CSI), Cyber Secure India
Co-Founder of Cyber Secure India (CSI), a cybersecurity think tank focused on driving cybersecurity awareness, building a strong community through free education, sharing knowledge, and empowering young individuals to strengthen the digital infrastructure.
Thursday June 25, 2026 1:45pm - 2:15pm CEST
Room -2.82 (Level -2)

2:15pm CEST

OWASP masCon - Attacking ART
Thursday June 25, 2026 2:15pm - 2:40pm CEST
When analyzing the security of mobile applications, we often have to overcome local security controls to perform a thorough audit. This can include obtaining access to the application’s internal storage, disabling TLS pinning or forcing the application to use our interception proxy.
For many applications, this is straightforward. We can install the app on our rooted device, inject Frida and accomplish all of the above. However, this gets tricky when the application has implemented resiliency controls, known as Runtime Application Self Protection (RASP).

In this talk, I will zoom in on one lesser-known technique targeting the Android Runtime (ART): Manipulating ODEX/VDEX files. Any code implemented in Java/Kotlin can easily be manipulated without leaving any traces.
Speakers
avatar for Jeroen Beckers

Jeroen Beckers

Mobile Solution Lead, NVISO

I am the mobile solution lead at NVISO, where I am responsible for quality delivery, innovation and methodology for all mobile assessments. I am actively involved in the mobile security community, and I try to share my knowledge through open-source tools, blogposts, trainings and... Read More →
Thursday June 25, 2026 2:15pm - 2:40pm CEST

2:15pm CEST

Evil User Stories Modeling: Ensuring your User Stories in agile playing OWASP Cornucopia
Thursday June 25, 2026 2:15pm - 2:45pm CEST
In this session, I'll show you how to streamline the identification of security requirements associated with user stories in agile methodologies using OWASP Cornucopia. You'll see how to integrate user stories with Cornucopia cards and with ASVS as a source of security requirements, and the defects that may arise if those requirements are not properly considered or implemented. We will first explore the two concepts I used to create this different way of playing OWASP Cornucopia and scaling it in agility, complementing the architecture-based threat model: Evil User Stories Modeling and Secure Scrum. All of this applies the principle of Security Just in Time to design a single product backlog that integrates security functionalities and controls into user stories, avoiding the creation of a parallel cybersecurity backlog.
Speakers

Max Alejandro Gomez Sanchez Vergaray

Application Security Program Leader, AppSec & DevSecOps Consultant | Risk-driven Security for real-world products | S-SDLC, DevSecOps, Secure Design & Threat Modeling Trainer
I designed and led the application security program during the digital transformation process of one of the largest banks in Latin America, training more than 3,000 people in secure software development, specially in Secure Design using OWASP Cornucopia, another tools for threat modeling... Read More →
Thursday June 25, 2026 2:15pm - 2:45pm CEST
Room -2.82 (Level -2)

2:15pm CEST

From 0 to SLSA Level 3: A Practitioner's Field Guide
Thursday June 25, 2026 2:15pm - 3:00pm CEST
SLSA (Supply-chain Levels for Software Artifacts) promises to secure your software supply chain—but implementing it at enterprise scale is harder than the spec suggests. This talk shares our journey to SLSA Level 3, including the architectural decisions, performance trade-offs, and customer escalations that shaped our approach.

You'll learn:
- Provenance attestation architecture for multi-tenant CI/CD pipelines
- How to integrate SLSA verification without breaking existing workflows
- Real metrics: what SLSA costs in CI minutes and what attacks it actually catches
- Common implementation pitfalls and how to avoid them

Whether you're just starting your SLSA journey or stuck at Level 2, walk away with battle-tested patterns that work at scale.
Speakers

Mark Mishaev

Senior Engineering Manager, Software Supply Chain Security, GitLab

Senior Manager of Software Supply Chain Security at GitLab, leading 40+ engineers across Authentication, Authorization, Pipeline Security, and Compliance teams. He drives GitLab's SLSA implementation and security architecture for CI/CD pipelines serving millions of developers.
Wit... Read More →
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Hall K1 (Level -2)

2:15pm CEST

Beyond the Chatbox: Implementing Guardrails for Autonomous Agents and LLMs Using Tools
Thursday June 25, 2026 2:15pm - 3:00pm CEST
As LLMs evolve from passive text generators to autonomous Agentic AI, the attack surface is shifting from simple prompt injection to Excessive Agency and Goal Hijacking. When we grant agents the power to execute shell commands, call sensitive APIs, or modify cloud infrastructure, we are essentially deploying "unattended administrators" into our environments.

This session moves past theoretical AI risks to provide a hands-on blueprint for securing autonomous actors. I will explore the newly released OWASP Top 10 for Agentic Applications 2026, focusing on critical vulnerabilities like ASI02 (Tool Misuse) and ASI05 (Unexpected Code Execution). Attendees will leave with a practical framework for implementing "Least-Agency" architecture, hardware-enforced sandboxing, and real-time intent validation.
Speakers

Rovindra Kumar

Security Architect, Google

More than 14 years of experience defining secure strategy, architecture, and implementation of necessary security controls aligned with security services, including Cloud Security, Threat Protection, and cloud-native security controls. Providing thought leadership... Read More →

Mikesh Khanal

Security Engineer, Google

Mikesh is a senior cloud security engineer at Google with more than a decade of experience, specializing in designing and implementing robust security architectures for organizations of all sizes. He is a recognized expert in cloud security design and architecture, compliance, and risk... Read More →
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Hall G1 (Level -2)

2:15pm CEST

Human Rights Threat Modeling
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Security and privacy threat models are fundamental tools in AppSec, but in modern systems, such as Identity and Access Management (IAM) and AI, they fail to intercept a growing class of threats: those that do not compromise the system but produce harm to people.

In this talk, we show why traditional threat models fail to capture these problems and how the limitation is not technical but cognitive. Human rights concepts are too abstract for many technicians, just as security was for developers before Threat Modeling became a facilitated and shared practice.

Through a concrete use case on IAM - extendable directly to AI systems - we present an approach that integrates Threat Modeling and harm modeling through a structured facilitation process, supported by cards and serious games.

The goal is not to turn developers into human rights experts but to make these threats visible, debatable, and mitigable using familiar AppSec tools.
Speakers

Giovanni Corti

Cybersecurity Researcher, FBK

Cybersecurity professional specializing in cyber threat intelligence and in threat modeling for security, privacy, and user safety in high-risk systems.
  linkedin.com/in/g-corti
... Read More →

Simone Onofri

Security Lead, W3C

Simone is the W3C Security Lead. He has 20+ years of expertise in red/blue Teaming and Web security. He has spoken at OWASP, TEDx, and other events and authored Attacking and Exploiting Modern Web Applications.    linkedin.com/in/simoneonofri
... Read More →

Luca Lumini

Executive Security Advisor

Executive Security Advisor with more than 20 years of consulting experience focusing on corporate cyber strategy and security risk advisory. As Chief Security Officer, Luca has been leading the Security Strategy and AI Innovation team for the AXA International Markets region. He is... Read More →
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Hall D (Level -2)

2:15pm CEST

Taming the AppSec Data Deluge
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Application Security engineers face a critical challenge: information overload from disparate security tools creates “decision paralysis”. How do you balance design reviews, threat modeling, code reviews, monitoring alerts, and managing your bug bounty program in an intentional rather than ad-hoc or reactive way?

This presentation demonstrates a novel approach using AI agents combined with Model Context Protocol (MCP) servers to automate work discovery and prioritize intelligently. Through practical examples, I'll show how Claude Code integrates with existing enterprise infrastructure—including issue tracking systems, content management platforms, Cloud Security Posture Management (CSPM) tools, and version control systems—to create an autonomous triage and prioritization engine.

You'll see how AI agents can pull together security data from all your different tools, figure out what actually matters based on your business context and threat intel, and spit out a prioritized to-do list that makes sense. I'll walk through real examples showing how this approach cuts down remediation times and helps you cover more ground with the same resources.
Speakers

Ben Sleek

Security Engineer, Proof

I’m an ex-Developer turned Application Security Engineer currently employed by Proof. After 10 years of building applications, I discovered breaking them could be just as fun.
  linkedin.com/in/ben-sleek-243aaa1/
... Read More →
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Hall K2 (Level -2)

2:15pm CEST

This Build can Break You - Evil Runners and eBPF for Detection
Thursday June 25, 2026 2:15pm - 3:00pm CEST
CI/CD pipelines play an important role in modern software development. From a security perspective, this methodology contributes to more secure products, as automated checks can be applied on every run. Developers define tasks in a metadata file, and the system executes the defined jobs automatically. But what if the build chain itself becomes the security problem, allowing attackers to manipulate artifacts or take control of backend infrastructure? Let’s take a deep dive into “Poisoned Pipeline Execution” (OWASP CICD-SEC-4).

Builds are typically carried out in multiple steps using Runners—agents that pick up jobs and execute build instructions. These instructions, such as compiling a program or building a container image, are usually performed inside containers. Containers may provide isolation, but the effectiveness in terms of security strongly depends on the Runner’s configuration. Attackers can abuse Runners to execute arbitrary commands, leading to information disclosure or privilege escalation. While such attacks are well documented, effective detection mechanisms are often lacking.

Any viable detection method must be independent of the source code, language-agnostic, and container-friendly. The eBPF technology, which enables tracing of kernel-level activity, is well suited for this purpose. In this talk, we explore security vulnerabilities in CI Runners, how they become targets for attackers, and how malicious activities can be detected using eBPF.
Speakers

Reinhard Kugler

Principal Security Consultant, SBA Research

Reinhard’s focus relies on security testing of IT and industrial cyber-physical systems. Based on his prior experience in cyber defense, he works with companies to develop security capabilities and secure products. Reinhard is an experienced instructor and develops tailored security... Read More →
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Hall G2 (Level -2)

2:30pm CEST

AI for Code Security in Modern Codebases
Thursday June 25, 2026 2:30pm - 4:30pm CEST
Modern codebases are large, fast-moving, and increasingly AI-assisted, making traditional code security approaches hard to scale. This hands-on POD explores how AI can augment secure coding and code review workflows—without replacing human judgment.

Participants will actively work through realistic code security scenarios drawn from modern APIs, cloud-native services, and GenAI-enabled components. Using guided exercises and optional AI prompts, attendees will identify vulnerabilities, reason about exploitability, and prioritize fixes mapped to OWASP Top 10 risks (including broken access control, injection, insecure design, and supply chain issues).

This is not a talk or a tool demo. Participants will do the work themselves through short, practical challenges. Beginners can follow structured steps, while experienced AppSec practitioners can dive into advanced issues such as logic flaws, authorization bypasses, insecure AI integrations, prompt injection risks in code, and unsafe use of AI-generated code.

The POD is drop-in friendly: participants can engage for a few minutes or stay longer to tackle deeper challenges. All techniques are applicable to real-world development environments, with or without AI tools.
Speakers

Rajnish Sharma

CEO, Precogs AI

Rajnish Sharma is the CEO and Founder of precogs.ai and a seasoned technology and security leader with experience in secure development, AI, and risk‑focused workflows. Previously, he served as Head of Investment Technology & AI at Allianz Global Investors, where he led strategic... Read More →
Thursday June 25, 2026 2:30pm - 4:30pm CEST
Room -2.92 (Level -2)

2:30pm CEST

Context & Cringe - Application Privacy through Play
Thursday June 25, 2026 2:30pm - 4:30pm CEST
Privacy risks are rarely obvious when looking at data, features, or apps in isolation. They emerge through changing context and are impacted by user perception.

In this POD, participants play Context & Cringe, a discussion-driven card game where players build fictional app scenarios using real-world data and features, then judge how those designs feel from a user’s perspective.

Rather than focusing on compliance or checklists, this session helps participants develop intuition for privacy impact by actively creating, debating, and experiencing cringey design choices. The result is a hands-on, low-barrier way to surface privacy risks that are often missed in a traditional security analysis - and a non-adversarial way to introduce uncomfortable topics into team discussions.
Speakers

Avi Douglen

Software Security Consultant, Bounce Security
Avi Douglen is the founder and CEO at Bounce Security, a boutique consultancy specializing in software security, where he spends a lot of time with development teams of all sizes. He helps them integrate security methodologies and products into their development processes, and often... Read More →

Kim Wuyts

Manager Cyber & Privacy, PwC Belgium

Dr. Kim Wuyts is a leading privacy engineer with over 15 years of experience in security and privacy. Before joining PwC Belgium as Manager Cyber & Privacy, Kim was a senior researcher at KU Leuven where she led the development and extension of LINDDUN, a popular privacy threat modeling... Read More →
Thursday June 25, 2026 2:30pm - 4:30pm CEST
Room -2.92 (Level -2)

2:30pm CEST

DDoS your friends
Thursday June 25, 2026 2:30pm - 4:30pm CEST
An interactive DDoS competition - player versus player!

Each round, players choose to be an attacker or a defender, match up with an opponent, and configure their attack or defense. The attack traffic is then run (as a speed run), and scores are awarded based on attack traffic stopped versus let through, and legitimate traffic blocked.

Players gain points each round, and there is an ongoing scoreboard. Leading attacker and defender configs are published too, so defenders and attackers can adapt.

The game is played in a web app, so it can be accessed from a mobile device or a laptop.
Speakers

Alex Marks-Bluth

Security Researcher, Akamai AppSec

Alex leads teams combining data science and security research in web application security, building security products for Akamai customers.

He enjoys watching and playing cricket, and every year he tries to learn Rust, for at least 2 weeks.
  linkedin.com/in/alex-marks-bluth-06a81... Read More →
Thursday June 25, 2026 2:30pm - 4:30pm CEST
Room -2.92 (Level -2)

2:30pm CEST

From Prompts to Payloads: Exploiting the AI-AppSec Intersection
Thursday June 25, 2026 2:30pm - 4:30pm CEST
LLMs are no longer standalone chatbots—they're increasingly embedded directly into application logic, with access to databases, APIs, file systems, and internal services. This architectural shift means the most dangerous LLM exploits don't just manipulate the model; they use the model as an attack vector to reach traditional AppSec targets. Prompt injection becomes a path to SQL injection. Conversational manipulation enables SSRF. The AI agent becomes an unwitting insider threat.

In this hands-on POD, participants will experience this convergence firsthand through a purpose-built vulnerable web application with an integrated AI agent. Through independent challenges, attendees will discover how attackers chain LLM manipulation with classic web exploitation—and why securing AI-integrated applications requires understanding both domains.

Challenges are designed for drop-in participation and cover multiple difficulty levels:
- Beginner-friendly: Basic prompt manipulation and information disclosure
- Intermediate: Chaining AI misuse with traditional web exploitation
- Advanced: Multi-stage attacks combining indirect prompt injection with server-side vulnerabilities

Each challenge is self-contained (under 15 minutes) with clear objectives, hints available on request, and facilitators ready to guide participants. Whether you're new to AI security or a seasoned pentester curious about LLM attack vectors, you'll walk away with practical techniques applicable to real-world assessments.

Challenges are mapped to multiple OWASP frameworks: the OWASP Top 10 for LLM Applications (covering risks like LLM01: Prompt Injection, LLM07: Insecure Plugin Design), the OWASP API Security Top 10, and the classic OWASP Web Application Top 10, helping participants connect new AI risks to established security knowledge.

No prior AI/ML experience required. Just curiosity and a laptop with a modern browser. All challenges run in-browser against our cloud-hosted lab environment.
Speakers

Dan Lisichkin

AI Security Researcher
Dan Lisichkin is a Cyber Security Researcher at Pillar Security, focusing on AI security, adversarial threats, and securing AI-based systems. With over five years of experience in the cybersecurity and IT space, Dan has extensive knowledge in areas including malware analysis, reverse... Read More →

Ziv Karliner

CTO, Pillar Security

Ziv Karliner is the Co-Founder and CTO of Pillar Security, where he works on securing AI-powered applications and agent-based systems. With over a decade of experience in cybersecurity, Ziv has led research and engineering efforts across application security, cloud security, financial... Read More →

Eilon Cohen

AI Security Researcher, Pillar Security
That kid who took apart all his toys to see how they worked.
Currently breaking (and fixing) things in Pillar Security lab. Education spans from Mechanical Engineering and Robotics to Computer science, but a self-made security researcher and practitioner. Ex-IBM as a security engineer, securing multiple complex cloud and IT environments, now... Read More →

Ariel Fogel

Founding Engineer & Researcher, Pillar Security

Ariel Fogel is a founding engineer & researcher at Pillar Security, where he hardens AI applications against real-world attacks and compliance risks. Over the past decade, he has built production systems in Ruby, TypeScript, Python, and SQL, shipping everything from full-stack web... Read More →
Thursday June 25, 2026 2:30pm - 4:30pm CEST
Room -2.92 (Level -2)

2:45pm CEST

OWASP masCon - Closure of conference by OWASP MAS team
Thursday June 25, 2026 2:45pm - 3:00pm CEST
Speakers

Carlos Holguera

OWASP Mobile App Security (MAS): MASVS, MASWE and MASTG, NowSecure
Carlos is a principal mobile security research engineer working with NowSecure and one of the core project leaders and authors of the OWASP Mobile Security Testing Guide (MASTG) and OWASP Mobile Application Security Verification Standard (MASVS), the industry standard for mobile app... Read More →

Sven Schleier

Co-Founder, Bai7 GmbH
Sven is a co-founder of Bai7 GmbH in Austria, which specializes in training and advisory. He has expertise in cloud security, offensive security engagements (Penetration Testing) and Application Security, notably in guiding software development teams across Mobile and Web Applications... Read More →
Thursday June 25, 2026 2:45pm - 3:00pm CEST

2:45pm CEST

OWASP MCP Top 10: When AI Agents Go Rogue, Securing the Model Context Protocol
Thursday June 25, 2026 2:45pm - 3:15pm CEST
The OWASP MCP Top 10 identifies the most critical security risks in MCP-enabled ecosystems. At the top of that list sits MCP Top 01: Untrusted Context Injection, a class of vulnerabilities where malicious inputs manipulate the context provided to AI agents, influencing their reasoning and actions.

Unlike traditional vulnerabilities that exploit deterministic code paths, MCP attacks target the decision-making layer of AI systems.

In this session, we explore how attackers can manipulate agent context, poison tool outputs, or inject instructions that cause AI systems to leak sensitive data, perform unintended actions, or bypass security controls.

Through real-world examples and architectural analysis, we will walk through the emerging MCP threat model and discuss defensive patterns organizations must adopt to secure the next generation of agentic AI systems.

The future of application security may depend on securing not just code but the context that AI thinks with.
Speakers

Vandana Verma Sehgal

Vandana Verma is a Security Leader at Snyk, a podcast host, a Diversity and Inclusion Advocate, and an International speaker and influencer on a range of Information Security topics, including Application Security, DevSecOps, Cloud Security, and Security Careers.

From being the Chair of the OWASP Global Board of Directors to running various groups promoting security to organising conferences to even delivering keynote addresses at several of them, she is engaged continuously and proactively in making the global application security communit

... Read More →
Thursday June 25, 2026 2:45pm - 3:15pm CEST
Room -2.82 (Level -2)

3:00pm CEST

PM Break in Expo Hall
Thursday June 25, 2026 3:00pm - 3:30pm CEST
Thursday June 25, 2026 3:00pm - 3:30pm CEST
Expo Hall X1

3:15pm CEST

OWASP Leaders Meeting
Thursday June 25, 2026 3:15pm - 4:15pm CEST
Calling all OWASP Leaders!  Join OWASP Foundation staff to discuss updates to Chapters, Projects, and the Foundation as a whole.  This is your chance to receive updates and ask questions!
Thursday June 25, 2026 3:15pm - 4:15pm CEST
Room -2.15 (Level -2)

3:30pm CEST

OWASP AI Exchange Showcase
Thursday June 25, 2026 3:30pm - 4:00pm CEST
OWASP's flagship project, AI Exchange, is the world's AI security guide.

300+ pages of free, constantly-evolving, practical guidance on securing AI systems. It covers the fundamentals and represents the closest publicly available alignment of global expert consensus, feeding directly into the AI Act and ISO standards through a unique SDO partnership.
Speakers

Rob van der Veer

Chief AI Officer, Software Improvement Group
Rob van der Veer is an AI pioneer with 33 years of AI experience, specializing in engineering, security and privacy. He is the lead author of the ISO/IEC 5338 standard on AI lifecycle, contributor to OWASP SAMM, co-founder of OWASP's digital bridge for security standards OpenCRE... Read More →

Aruneesh Salhotra

Fractional CISO, Author, Podcaster, Blogger
Aruneesh Salhotra is a seasoned technologist and servant leader, renowned for his extensive expertise across cybersecurity, DevSecOps, AI, Business Continuity, Audit, Sales. His impactful presence as an industry thought leader is underscored by his contributions as a speaker and panelist... Read More →

Behnaz Karimi

Co-Lead / Leader AI Red Teaming / Creator RAID-AI Framework / Senior cyber security engineer, OWASP AI Exchange
Behnaz Karimi is an AI Security Researcher and the Creator of the RAID-AI Framework. She is also a Co-Author, Co-Lead, and Leader of AI Red Teaming at OWASP AI Exchange, where she actively contributes to advancing security practices for AI systems.

She has played a key role in OWASP initiatives, including contributing to the GenAI Red Teaming Guide for the OWASP Top 10 for Large Language Model Applications & Generative AI. Behnaz is a speaker at Global AppSec Barcelona and has spoken at OWASP Chapter Germany. She was also invited

... Read More →
Thursday June 25, 2026 3:30pm - 4:00pm CEST
Room -2.82 (Level -2)

3:30pm CEST

Pragmatic least-privilege for cloud and Kubernetes: applying good advice to real systems
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Whichever public cloud you use, there are literally hundreds of assignable permissions — and while everyone quotes the ideal of “least privilege,” just when the deadline looms it becomes far too tempting to grant “just one more permission.” Before you know it, your developer teams and service accounts are swimming in high privileges.

In this session we’ll start from the basics of structured permission management, then go deeper — all the way to time-limited access, rule-based privileged-access workflows, and on-demand role elevation. We won’t rehash each cloud provider’s security guide; instead, we’ll deliver pragmatic, maintainable, and flexible guidelines that balance solid permission hygiene with the realities of tight deadlines.

This talk is targeted at security engineers, cloud engineers or anyone just looking for a point to start organizing and structuring their permission approach.
Speakers

Mark Vinkovits

Chief Information Security Officer, XUND Solutions

Mark has worked as a software, security, and privacy engineer over the past decade. Since his research in user-centered computing, he has been arguing that human behavior, beliefs, and motivations cannot be excluded from the design of any solution, including any SDLC that should be livable... Read More →
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Hall K1 (Level -2)

3:30pm CEST

The Devil is in the Defaults - what to do about XSS
Thursday June 25, 2026 3:30pm - 4:15pm CEST
This session covers the latest defenses against Cross-Site Scripting (XSS), the most prevalent security issue of all time. We will showcase typical XSS bugs and how they can be avoided. We will also explain why previous mechanisms fall short of protecting web sites at scale and why we believe Trusted Types and the Sanitizer API can help close this gap.
The presentation will also give hands-on advice to help security and development teams adopt these new protections. We will close with a few security considerations and remaining risks.
Speakers

Frederik Braun

Security Engineer, Mozilla Firefox Berlin

Frederik Braun builds security for the web and for Mozilla Firefox from Berlin. As a contributor to standards, Frederik is also improving the web platform by bringing security into the defaults with specifications like the Sanitizer API and Subresource Integrity. Before Mozilla, Frederik... Read More →
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Hall G1 (Level -2)

3:30pm CEST

AI and the Threat Modeling Manifesto: Conflicts, Failure Modes, and Better Patterns
Thursday June 25, 2026 3:30pm - 4:15pm CEST
AI is becoming increasingly embedded in threat modeling processes. Some organizations now claim that threat modeling can be performed entirely by AI. This appears to be a natural progression, given the growing use of AI in software development itself.

Before the current wave of AI adoption, the Threat Modeling Manifesto (TMM) was developed, drawing inspiration from the Agile Manifesto. It distilled years of practitioner experience in application security into a short, actionable document. The TMM emphasizes values such as a culture of finding and fixing design issues, people and collaboration over tools, and a journey of understanding rather than a static security snapshot.

This talk examines how AI-assisted threat modeling can diverge from these values through five recurring anti-patterns. These include treating AI as the hero threat modeler, de-emphasizing human collaboration and input, prioritizing snapshots over the journey of understanding, delegating creativity to AI, and favoring exhaustive enumeration over deliberate discussion.

The session then explores three silent failure modes that frequently emerge in the presence of these anti-patterns: hallucination, automation bias, and the illusion of completeness. Together, they produce threat models that appear finished and authoritative, while concealing subtle errors, weakening shared understanding and ownership, and failing to create the motivation needed for people to act.

Finally, the talk synthesizes emerging best practices observed across real-world AppSec teams. These include using AI as a facilitator rather than an authority, designing explicitly for disagreement and multiple viewpoints, and structuring processes that increase meaningful human participation and understanding.

Attendees will leave with a practical framework for adopting AI-assisted threat modeling that helps teams avoid silent failures, preserve human judgment and collaboration, and use AI to generate output that gets understood and acted upon.

Speakers

Vikramaditya Narayan

Creator of The Precogly Open Source Threat Modeling Platform
Vikramaditya Narayan is the creator of Precogly, an open-source, enterprise-grade threat modeling platform built for compliance-aware security teams. Previously, he designed the prototype for a YC-funded AI governance platform. Vikramaditya leads the Bangalore chapter of Threat Modeling... Read More →
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Hall D (Level -2)

3:30pm CEST

Agile Development and IT Security – From Conflict to Collaboration
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Agile software development and IT security share the goal of delivering reliable, robust software, yet they often collide in practice. Security validation is still frequently deferred to the end of the development lifecycle, producing findings too late to be effectively addressed. Under delivery pressure, this can lead to defensive reactions toward security activities and tools. This talk explores why security issues are detected but often not addressed promptly, and shows how integrating security early and continuously can transform friction into collaboration.
Speakers

Juliane Reimann

Founder and Security Community Expert, Full Circle Security
Juliane Reimann has worked as a cyber security consultant for large companies since 2019, with a focus on DevSecOps and community building. Her expertise includes building security communities of software developers and establishing developer-centric communication about secure software development... Read More →

Elisa Erbe

Project Manager, Full Circle Security

Elisa Erbe has been working as a project manager in digital web solutions and cybersecurity companies since 2021, with a focus on agile planning and processes. Before transitioning into project management in the IT sector, she gained experience in teaching, research, and organizational... Read More →
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Hall K2 (Level -2)

3:30pm CEST

Boiling the Ocean for Signal: Lessons from High-Volume OSS Malware Detection
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Malicious open source packages are on the rise, targeting more and more ecosystems. And while open source maintainers and users struggle to secure the immense attack surface of today’s software development practice, attackers continue to evolve their techniques.

This talk presents lessons learned from developing and operating an end-to-end malware detection pipeline in an enterprise setup that automatically scans tens of thousands of packages a day, followed by human review of reported malware. It provides an overview of the fundamental design decisions, from a suitable classification scheme and the selection of meaningful signals with a high signal-to-noise ratio, to the compilation of Indicators of Compromise and the final reporting of confirmed malicious packages to the respective registries and to third-party databases like OSV. The individual sections and learnings will be motivated and illustrated through real-world samples as well as descriptive statistics obtained from our system.

Session attendees will learn about:
- Latest open source malware trends,
- common evasion techniques used by attackers, from encoding techniques, code transformations and payload splitting to prompt instructions aiming to sabotage LLM-based detectors,
- the shortcomings of current malware datasets in regard to supporting developers in the evaluation of malware scanners, e.g., the lack of accompanying metadata and qualitative descriptions,
- the importance and complementarity of code and metadata-based detection signals,
- requirements and design decisions for an end-to-end OSS malware scanner, e.g., the realization that a binary benign/malicious classification is not expressive enough for the breadth of software distributed through OSS registries like npm or PyPI, and
- descriptive statistics obtained from our system, showing the prevalence of techniques used in the wild, e.g., the prevalence of different malware triggers and targeted platforms.

As such, the presentation targets both open source users interested in the latest malware trends and safeguards, as well as builders wanting to create an end-to-end OSS scan pipeline, e.g., because their ecosystem is already targeted by attackers but not yet or not sufficiently covered by state-of-the-art scanners.
Speakers

Henrik Plate

Security Researcher, Endor Labs

In his current position, Henrik aims at improving the security of today’s software supply chains, and in particular the secure consumption of open source. He formerly worked for SAP Security Research, where he led the focus topic "open source security" starting in 2014. He co-authored... Read More →
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Hall G2 (Level -2)

4:15pm CEST

Networking Reception in Expo Hall and OWASP Jeopardy!
Thursday June 25, 2026 4:15pm - 6:45pm CEST
Come mingle with attendees and exhibitors AND have the chance to win prizes during OWASP Jeopardy with Jerry Hoff!
Thursday June 25, 2026 4:15pm - 6:45pm CEST
Expo Hall X1
 