Loading…
Audience: Intermediate clear filter
Monday, June 22
 

9:00am CEST

3-Day Training: Full-Stack Pentesting Laboratory: 100% Hands-On + Lifetime LAB Access
Monday June 22, 2026 9:00am - 5:00pm CEST
3-Day Training:June 22-24, 2026
Level: Intermediate
Trainer: Dawid Czagan


To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Modern IT systems are increasingly complex, making full-stack expertise more essential than ever. That's why diving into full-stack pentesting is crucial—you will gain the skills needed to master modern attack vectors and implement effective defensive countermeasures.

For each attack, vulnerability and technique presented in this training, there is a lab exercise to help you develop your skills step by step. What's more, when the training is over, you can take the complete lab environment home to hack again at your own pace.

I found security bugs in many companies including Google, Yahoo, Mozilla, Twitter and in this training I'll share my experience with you.

Key Learning Objectives
After completing this training, you will have learned about:

- Hacking cloud applications
- API hacking tips & tricks
- Data exfiltration techniques
- OSINT asset discovery tools
- Tricky user impersonation
- Bypassing protection mechanisms
- CLI hacking scripts
- Interesting XSS attacks
- Server-side template injection
- Hacking with Google & GitHub search engines
- Automated SQL injection detection and exploitation
- File read & file upload attacks
- Password cracking in a smart way
- Hacking Git repos
- XML attacks
- NoSQL injection
- HTTP parameter pollution
- Web cache deception attack
- Hacking with wrappers
- Finding metadata with sensitive information
- Hijacking NTLM hashes
- Automated detection of JavaScript libraries with known vulnerabilities
- Extracting passwords
- Hacking Electron applications
- Establishing reverse shell connections
- RCE attacks
- XSS polyglot
- and more …

What Students Will Receive
Students will be handed in a VMware image with a specially prepared lab environment to play with all attacks, vulnerabilities and techniques presented in this training. When the training is over, students can take the complete lab environment home (after signing a non-disclosure agreement) to hack again at their own pace.

Special Bonus
The ticket price includes FREE access to my 6 online courses:

- Fuzzing with Burp Suite Intruder
- Exploiting Race Conditions with OWASP ZAP
- Case Studies of Award-Winning XSS Attacks: Part 1
- Case Studies of Award-Winning XSS Attacks: Part 2
- How Hackers Find SQL Injections in Minutes with Sqlmap
- Web Application Security Testing with Google Hacking

What Students Say About My Trainings
References are attached to my LinkedIn profile (https://www.linkedin.com/in/dawid-czagan-85ba3666/). They can also be found here: https://silesiasecuritylab.com/services/training/#opinions – training participants from companies such as Oracle, Adobe, ESET, ING, Red Hat, Trend Micro, Philips, government sector... 

What Students Should Know
To get the most of this training intermediate knowledge of web application security is needed. Students should have experience in using a proxy, such as Burp Suite Proxy or Zed Attack Proxy (ZAP), to analyze or modify the traffic.

What Students Should Bring

Students will need a laptop with 64-bit operating system, at least 8 GB RAM, 35 GB free hard drive space, administrative access, ability to turn off AV/firewall and VMware Player/Fusion installed (64-bit version). Prior to the training, make sure there are no problems with running x86_64 VMs.

Additional notes

This new 3-day training was sold out at top security conferences e.g. DEF CON (Las Vegas), Hack In Paris (Paris).

This is a 100% hands-on training: for each attack, vulnerability and technique presented in this training, there is a lab exercise to help students develop their skills step by step.

Speakers
avatar for Dawid Czagan

Dawid Czagan

Founder and CEO, Silesia Security Lab
Dawid Czagan is an internationally recognized security researcher and trainer. He is listed among top hackers at HackerOne. Dawid Czagan has found security bugs in Apple, Google, Mozilla, Microsoft and many others.

Due to the severity of many bugs, he received numerous awards for his findings. Dawid Czagan shares his security experience in his hands-on trainings. He delivered trainings at key industry conferences such as DEF CON (Las Vegas), OWASP 2025 Global AppSec EU (Barcelona), Hack In The... Read More →
Monday June 22, 2026 9:00am - 5:00pm CEST

9:00am CEST

3-Day Training:AI Whiteboard Hacking aka Hands-on Threat Modeling Training
Monday June 22, 2026 9:00am - 5:00pm CEST
To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

3-Day Training: June 22-24, 2026
Level: Beginner
Trainer: Sebastien Deleersnyder

Download the complete training outline: AI Whiteboard Hacking Training Details

Testimonial: "After years evaluating security trainings at Black Hat, including Toreon's Whiteboard Hacking sessions, I can say this AI threat modeling course stands out. The hands-on approach and flow are exceptional - it's a must-attend."
- Daniel Cuthbert, Global Head of Cyber Security Research, Black Hat Review Board Member

In today's rapidly evolving AI landscape, security threats like prompt injection and data poisoning pose significant risks to AI systems. Our 3-day AI Whiteboard Hacking training equips you with practical skills to identify, assess, and mitigate AI-specific security threats using our proven DICE methodology. Through hands-on exercises and real-world scenarios, you'll learn to build secure AI systems while ensuring compliance with regulations like the EU AI Act.

The training concludes with an engaging red team/blue team wargame where you'll put theory into practice by attacking and defending a rogue AI research assistant. Upon completion, you'll earn the AI Threat Modeling Practitioner Certificate and gain access to a year-long subscription featuring quarterly masterclasses, expert Q&A sessions, and continuously updated resources.

Led by Sebastien Deleersnyder, co-founder and CTO of Toreon, and Black Hat trainer, this training combines technical expertise with practical insights gained from real-world projects across government, finance, healthcare, and technology sectors.

Quick Overview:
·       Target Audience: AI Engineers, Software Engineers, Solution Architects, Security Professionals
·       Prerequisites: Basic understanding of AI concepts (pre-training materials provided)
·       Certification: AI Threat Modeling Practitioner Certificate
·       Bonus: 1-year AI Threat Modeling Subscription included

Our lineup of the hands-on exercises from the training that let you put AI security concepts into practice:

Day 1: Foundations & Methodology
·       "AI Security Headlines from the Future" - Explore potential security scenarios
·       "Diagramming the AI Assistant Infrastructure" - Map out real AI system components
Speakers
avatar for Sebastien Deelersnyder

Sebastien Deelersnyder

Co-Founder and CEO, Toreon
Sebastien Deleersnyder, also known as Seba, is a highly accomplished individual in the field of cybersecurity. He is the CTO and co-founder of Toreon, as well as the COO and lead threat modeling trainer of Data Protection Institute. Seba holds a Master's degree in Software Engineering... Read More →
Monday June 22, 2026 9:00am - 5:00pm CEST
 
Tuesday, June 23
 

9:00am CEST

2-Day Training: Adam Shostack's Threat Modeling Intensive
Tuesday June 23, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Adam Shostack

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

This hands-on, interactive class will focus on learning to threat model by executing each of the steps. Students will start with a guided threat modeling exercise, and we'll then iterate and break down the skills they're learning in more depth. We'll progressing through the Four Questions of Threat Modeling: what are we working on, what can go wrong, what are we going to do about it and did we do a good job. This is capped off with an end-to-end exercise that brings the skills together.
Speakers
avatar for Adam Shostack

Adam Shostack

Founder, Shostack & Associates
Adam Shostack is a leading expert on threat modeling. He has decades of experience delivering security. His experience ranges across the business world from founding startups to nearly a decade at Microsoft. His accomplishments include:  Helped create the CVE. Now an Emeritus member... Read More →
Tuesday June 23, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: AI SecureOps: Attacking & Defending AI Applications and Agents (Hybrid)
Tuesday June 23, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer:Abhinav Singh

You may attend this training course either in person or virutally

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Can prompt injections lead to complete infrastructure takeovers? Could AI agents be exploited to compromise backend services? Can jailbreaks create false crisis alerts in security systems? In multi-agent systems, what if an attacker takes over an agent’s goals, turning other agents into coordinated threats? This immersive, CTF-styled training in AI and LLM security dives into these pressing questions. Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications & agentic systems to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for AI apps & agents, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.

By the end of this training, you will be able to:

- Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as instruction injection, agent control bypass, remote code execution for infrastructure takeover as well as chaining multiple agents for goal hijacking.
- Conduct AI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
- Execute and defend against adversarial attacks, including prompt injection, data poisoning, jailbreaks and agentic attacks.
- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
- Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
- Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
- Implement an incident response and risk management plan for enterprises developing or using GenAI services.
Speakers
avatar for Abhinav Singh

Abhinav Singh

Cyber Security Research in AI,Cloud & Data, Midfield Security
Abhinav Singh is an esteemed cybersecurity leader and researcher with over a decade of experience working with global technology leaders, startups, financial institutions, and as an independent trainer and consultant. He is the author of the widely acclaimed "Metasploit Penetration... Read More →
Tuesday June 23, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: Repeatable, Scalable and Valuable Code Security Scanning
Tuesday June 23, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer:Josh Grossman

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

To learn more about this training, please visit the link here

Suddenly anyone and everyone in your organization can use AI assistants to write code. Meanwhile, your actual developers are putting out 100x their previous output , with “varying” levels of quality. So how are you going to secure code at this scale?

This course is designed to be a deep dive into state-of-the-art techniques for validating code security within an organization’s codebase. The course has a strong emphasis on how AI-driven analysis can drive this forward whilst also clearly highlighting where standard, deterministic techniques (albeit incorporating AI acceleration) will be more effective.

During the course, you will learn how to combine these techniques, in a scalable and repeatable way, based on our experience doing just this with real organizations and real teams and with a focus on the current state of the art in this fast-moving area.

This course goes beyond the scope of standard application security knowledge and is designed to make you a specialist in this area. Having spent several years perfecting this process, we are excited to impart the lessons we have learnt!

The course is structured as follows:

* Overview – setting out the basic details of what we will be talking about in terms of code scanning and SAST.
* Key techniques – Discuss the different techniques which can be used for this including generic “off the shelf” SAST, deterministic custom scanning rules, and LLM powered custom AI prompts
* Technique comparison - Advantages and disadvantages of each technique based on our in-depth experience with each and which technique you will want to use in different situations, to avoid wasting time trying to use a technique in an inappropriate use case.
* Organizational process – How to get these processes built into an organization’s existing software lifecycle
* Generic SAST – Using “off the shelf” rules effectively to catch “low hanging fruit” and avoid reinventing the wheel.
* Custom SAST – Introduce custom rule languages (e.g., Semgrep, CodeQL), writing rules from scratch, and scaling analysis across a codebase.
* Basic AI Code Security Scanning – Overview of AI-based scanning, platforms, principles, and initial single-shot prompts.
* Complex AI Code Security Scanning – AI-driven techniques for code security, including using AI to review and triage findings and creating multi-stage rules that combine deterministic rules
Speakers
avatar for Josh Grossman

Josh Grossman

CTO, Bounce Security
Josh Grossman has worked as a consultant in IT and Application Security and Risk for 15 years now, as well as a Software Developer. This has given him an in-depth understanding of how to manage the balance between business needs, developer needs and security needs which goes into... Read More →
Tuesday June 23, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: The Mobile Playbook - A guide for iOS and Android App Security (Hybrid)
Tuesday June 23, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer:Sven Schleier

You may attend this training course in person or virtually.

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

This two-day, hands-on course is designed to teach penetration testers, developers, and engineers how to analyse Android and iOS applications for security vulnerabilities. The course covers the different phases of testing, including dynamic testing, static analysis, reverse engineering, and software composition analysis (SCA). We will also explore how you can use the Model Context Protocol (MCP) to automate some of these workflows and leverage its strengths.

The course is based on the OWASP Mobile Application Security Testing Guide (MASTG) and taught by one of the project co-leaders. This comprehensive, open-source mobile security testing book covers both iOS and Android, providing a methodology and detailed technical test cases to ensure completeness and utilizes the latest attack techniques against mobile applications. This course provides hands-on experience with open-source tools and advanced methodologies, guiding you through real-world scenarios.

Detailed outline

On the first day, we will start with an introduction to the OWASP MASVS and MASTG projects, including the latest updates. Then, we will dive into the Android platform and its security architecture. Students will no longer be required to bring their own Android device; instead, each student will be provided with a cloud-based, virtualised Android device from Corellium.

Topics include:

- Intercepting network traffic of an Android App in various scenarios, including intercepting traffic that is not HTTP.
- Scanning for secrets in an APK.
- Reverse engineering a Kotlin app and identifying and exploiting a real-world deep link vulnerability through manual source code review.
- Static Scanning of decompiled Kotlin source code by using MCP workflows with semgrep and radare2, identifying vulnerabilities and eliminating false positives.
- Frida crash course to get started with dynamic instrumentation on Android apps by using MCP workflows.
- Use dynamic instrumentation with Frida to bypass client-side security controls such as root detection mechanisms.
- We will close day 1 with a Capture the Flag (CTF) by attacking several apps, including a real world app and overcome it's protection mechanisms.

Day 2 focuses on iOS. We will begin the day by exploring the OWASP MASWE and creating an iOS test environment using Corellium and dive into several topics, including:

- Introduction into iOS Security fundamentals
- Intercepting network traffic of an iOS App in various scenarios, including intercepting traffic from apps written in mobile app frameworks such as Google's Flutter.
- How to retrieve an IPA, execute static scanning of an IPA and identifying vulnerabilities and eliminating false positives.
- Software Composition Analysis (SCA) for iOS by using SBOM's and scanning 3rd party libraries and SDKs in mobile package managers for known vulnerabilities and planning mitigation strategies.
- Frida crash course to get started with dynamic instrumentation for iOS applications and utilsing MCP workflows.
- Testing methodology with a non-jailbroken (jailed) device by repackaging an IPA with the Frida gadget.
- Analyse the storage of an iOS app and understand the various options on how (files, databases, logs etc.) and where files can be stored.
- Using Frida to bypass runtime instrumentation of iOS applications, like anti-Jailbreaking Mechanisms.

We'll wrap up the final day with a CTF and participants can win a prize!

Whether you are a beginner who wants to learn mobile app testing from the ground up, or an experienced pentester or developer or engineer who wants to improve your existing skills to perform more advanced attack techniques, this training will help you achieve your goals.

The course consists of many different hands-on labs developed by the instructor or using real world apps that are part of bug bounty platforms.

Upon successfully completing this course, students will have a better understanding of how to test for vulnerabilities in mobile applications, how to recommend appropriate mitigation techniques to developers and how to perform consistent and efficient testing using MCP (Model Context Protocol) workflows.
Speakers
avatar for Sven Schleier

Sven Schleier

Co-Founder, Bai7 GmbH
Sven is a co-founder of Bai7 GmbH in Austria, which is specialized in trainings and advisory. He has expertise in cloud security, offensive security engagements (Penetration Testing) and Application Security, notably in guiding software development teams across Mobile and Web Applications... Read More →
Tuesday June 23, 2026 9:00am - 5:00pm CEST

9:00am CEST

3-Day Training: Full-Stack Pentesting Laboratory: 100% Hands-On + Lifetime LAB Access
Tuesday June 23, 2026 9:00am - 5:00pm CEST
3-Day Training:June 22-24, 2026
Level: Intermediate
Trainer: Dawid Czagan


To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Modern IT systems are increasingly complex, making full-stack expertise more essential than ever. That's why diving into full-stack pentesting is crucial—you will gain the skills needed to master modern attack vectors and implement effective defensive countermeasures.

For each attack, vulnerability and technique presented in this training, there is a lab exercise to help you develop your skills step by step. What's more, when the training is over, you can take the complete lab environment home to hack again at your own pace.

I found security bugs in many companies including Google, Yahoo, Mozilla, Twitter and in this training I'll share my experience with you.

Key Learning Objectives
After completing this training, you will have learned about:

- Hacking cloud applications
- API hacking tips & tricks
- Data exfiltration techniques
- OSINT asset discovery tools
- Tricky user impersonation
- Bypassing protection mechanisms
- CLI hacking scripts
- Interesting XSS attacks
- Server-side template injection
- Hacking with Google & GitHub search engines
- Automated SQL injection detection and exploitation
- File read & file upload attacks
- Password cracking in a smart way
- Hacking Git repos
- XML attacks
- NoSQL injection
- HTTP parameter pollution
- Web cache deception attack
- Hacking with wrappers
- Finding metadata with sensitive information
- Hijacking NTLM hashes
- Automated detection of JavaScript libraries with known vulnerabilities
- Extracting passwords
- Hacking Electron applications
- Establishing reverse shell connections
- RCE attacks
- XSS polyglot
- and more …

What Students Will Receive
Students will be handed in a VMware image with a specially prepared lab environment to play with all attacks, vulnerabilities and techniques presented in this training. When the training is over, students can take the complete lab environment home (after signing a non-disclosure agreement) to hack again at their own pace.

Special Bonus
The ticket price includes FREE access to my 6 online courses:

- Fuzzing with Burp Suite Intruder
- Exploiting Race Conditions with OWASP ZAP
- Case Studies of Award-Winning XSS Attacks: Part 1
- Case Studies of Award-Winning XSS Attacks: Part 2
- How Hackers Find SQL Injections in Minutes with Sqlmap
- Web Application Security Testing with Google Hacking

What Students Say About My Trainings
References are attached to my LinkedIn profile (https://www.linkedin.com/in/dawid-czagan-85ba3666/). They can also be found here: https://silesiasecuritylab.com/services/training/#opinions – training participants from companies such as Oracle, Adobe, ESET, ING, Red Hat, Trend Micro, Philips, government sector... 

What Students Should Know
To get the most of this training intermediate knowledge of web application security is needed. Students should have experience in using a proxy, such as Burp Suite Proxy or Zed Attack Proxy (ZAP), to analyze or modify the traffic.

What Students Should Bring

Students will need a laptop with 64-bit operating system, at least 8 GB RAM, 35 GB free hard drive space, administrative access, ability to turn off AV/firewall and VMware Player/Fusion installed (64-bit version). Prior to the training, make sure there are no problems with running x86_64 VMs.

Additional notes

This new 3-day training was sold out at top security conferences e.g. DEF CON (Las Vegas), Hack In Paris (Paris).

This is a 100% hands-on training: for each attack, vulnerability and technique presented in this training, there is a lab exercise to help students develop their skills step by step.

Speakers
avatar for Dawid Czagan

Dawid Czagan

Founder and CEO, Silesia Security Lab
Dawid Czagan is an internationally recognized security researcher and trainer. He is listed among top hackers at HackerOne. Dawid Czagan has found security bugs in Apple, Google, Mozilla, Microsoft and many others.

Due to the severity of many bugs, he received numerous awards for his findings. Dawid Czagan shares his security experience in his hands-on trainings. He delivered trainings at key industry conferences such as DEF CON (Las Vegas), OWASP 2025 Global AppSec EU (Barcelona), Hack In The... Read More →
Tuesday June 23, 2026 9:00am - 5:00pm CEST

9:00am CEST

3-Day Training:AI Whiteboard Hacking aka Hands-on Threat Modeling Training
Tuesday June 23, 2026 9:00am - 5:00pm CEST
To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

3-Day Training: June 22-24, 2026
Level: Beginner
TrainerSebastien Deleersnyder

Download the complete training outline: AI Whiteboard Hacking Training Details

Testimonial: "After years evaluating security trainings at Black Hat, including Toreon's Whiteboard Hacking sessions, I can say this AI threat modeling course stands out. The hands-on approach and flow are exceptional - it's a must-attend."
- Daniel Cuthbert, Global Head of Cyber Security Research, Black Hat Review Board Member

In today's rapidly evolving AI landscape, security threats like prompt injection and data poisoning pose significant risks to AI systems. Our 3-day AI Whiteboard Hacking training equips you with practical skills to identify, assess, and mitigate AI-specific security threats using our proven DICE methodology. Through hands-on exercises and real-world scenarios, you'll learn to build secure AI systems while ensuring compliance with regulations like the EU AI Act.

The training concludes with an engaging red team/blue team wargame where you'll put theory into practice by attacking and defending a rogue AI research assistant. Upon completion, you'll earn the AI Threat Modeling Practitioner Certificate and gain access to a year-long subscription featuring quarterly masterclasses, expert Q&A sessions, and continuously updated resources.

Led by Sebastien Deleersnyder, co-founder and CTO of Toreon, and Black Hat trainer, this training combines technical expertise with practical insights gained from real-world projects across government, finance, healthcare, and technology sectors.

Quick Overview:
·       Target Audience: AI Engineers, Software Engineers, Solution Architects, Security Professionals
·       Prerequisites: Basic understanding of AI concepts (pre-training materials provided)
·       Certification: AI Threat Modeling Practitioner Certificate
·       Bonus: 1-year AI Threat Modeling Subscription included

Our lineup of the hands-on exercises from the training that let you put AI security concepts into practice:

Day 1: Foundations & Methodology
·       "AI Security Headlines from the Future" - Explore potential security scenarios
·       "Diagramming the AI Assistant Infrastructure" - Map out real AI system components
Speakers
avatar for Sebastien Deelersnyder

Sebastien Deelersnyder

Co-Founder and CEO, Toreon
Sebastien Deleersnyder, also known as Seba, is a highly accomplished individual in the field of cybersecurity. He is the CTO and co-founder of Toreon, as well as the COO and lead threat modeling trainer of Data Protection Institute. Seba holds a Master's degree in Software Engineering... Read More →
Tuesday June 23, 2026 9:00am - 5:00pm CEST
 
Wednesday, June 24
 

9:00am CEST

1-Day Training: API Security: Hands-On Secure API Design & Hardening
Wednesday June 24, 2026 9:00am - 5:00pm CEST

1-Day Training: June 24, 2026
Level: Intermediate
Trainer: Tanya Janca


To register, please purchase your training ticket here.
 Training and conference are two separate ticket purchases.

APIs are the backbone of modern applications—but they also introduce unique security risks. In this hands-on training, participants will deep-dive into API security threats using a "Bad, Better, Best" approach.

• Review real-world insecure APIs and step through progressive security improvements
• Work hands-on with OWASP DevSlop Pixi intentionally vulnerable API, 42Crunch IDE Plugin, and Semgrep to find, fix, and prevent API vulnerabilities
• Master the OWASP API Security Top Ten through guided code reviews and hands-on exercises
• Learn best practices for API security hardening, authentication, and monitoring

By the end of this session, participants will have the skills and tools to secure APIs with confidence using industry best practices.
Speakers
avatar for Tanya Janca

Tanya Janca

Security Trainer and Founder, She Hacks Purple & DevSec Station
Tanya Janca, known online as SheHacksPurple, is the best-selling author of Alice and Bob Learn Secure Coding and Alice and Bob Learn Application Security. She is the founder of DevSec Station, a modern learning platform and community built to help software developers master secure... Read More →
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

1-Day Training: How to build a Successful Security Champions Program
Wednesday June 24, 2026 9:00am - 5:00pm CEST
1-Day Training: June 24, 2026
Level: Intermediate
Trainer:Juliane Reimann & Marisa Fagan


To register, please purchase your training ticket here.
 Training and conference are two separate ticket purchases.

Do you feel a disconnect between your cybersecurity efforts and engineering activities? If so, a Security Champions Program could bridge the gap. By involving engineers in security topics that align with their work, a Security Champions program not only enhances security awareness but also fosters a culture of security across your organization. However, creating such a program requires careful planning, innovative strategies, and a solid understanding of what drives individuals to champion security initiatives.

This training will equip you with practical tools and actionable insights to design and launch a successful Security Champions Program. You’ll explore key concepts, including how to:
- Develop a foundational understanding of what a Security Champions Programs is
- Plan and navigate the phases of program development, from launch to long-term growth.
- Learn about strategies to engage and motivate diverse personality types within the organization
- Acquire practical tools and a structured approach to establish a scalable and trackable Security Champions Program

Whether you’re a security engineer, architect, or manager, this training will provide you with the tools and frameworks to collaborate effectively with your engineering teams and establish a thriving Security Champions Program.

The session is highly interactive, featuring hands-on exercises and team-based activities to encourage collaboration and networking with fellow professionals. Join us to gain the confidence and strategies you need to kickstart your journey toward a more secure organization.
Speakers
avatar for Juliane Reimann

Juliane Reimann

Founder and Security Community Expert, Full Circle Security
Juliane Reimann works as cyber security consultant for large companies since 2019 with focus on DevSecOps and Community Building. Her expertise includes building security communities of software developers and establishing developer centric communication about secure software development... Read More →
avatar for Marisa Fagan

Marisa Fagan

Managing Consultant, Katilyst
Marisa Fagan is a managing consultant at Katilyst and has 16 years experience building security champion communities. She's dedicated her career to building security into the SDLC and empowering developers to own secure code. Marisa shares practical insights into what actually works... Read More →
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

1-Day Training: Master AI Security (Hybrid)
Wednesday June 24, 2026 9:00am - 5:00pm CEST
1-Day Training: June 24, 2026
Level: Intermediate
Trainer: Rob van der Veer

You may attend this training course either in person or virtually

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

The record-breaking Master AI security training is back!

This training broke the OWASP record online and on-site.

Your trainer is Rob van der Veer, Chief AI Officer at Software Improvement Group, with 33 years of AI experience, founder of the OWASP AI Exchange, co-editor for the AI Act security standard, member of the ISO/IEC 27090 for AI security, co-founder of OpenCRE, and main author of ISO 5338 on AI engineering.

Master AI security is a unique opportunity to become proficient in the intricate and rapidly evolving field of AI security.

The disruption by AI presents a significant challenge, regardless of whether you are a security professional, a developer, AI engineer, or a red teamer. What are your responsibilities? What constitutes the new AI attack surface, and what threats emerge from it? What measures can you take to mitigate these emerging risks?

This one-day intensive training program will equip you with the knowledge to tackle these AI-related challenges effectively, enabling you to apply what you learn immediately. Starting with a pragmatic overview of AI, the course then delivers an exhaustive exploration of the distinctive vulnerabilities AI introduces, the possible attack vectors, and the most current strategies to counteract threats like prompt injection, data poisoning, model theft, evasion, and more. Through practical exercises, you will gain hands-on experience in enacting strong security measures, attacking AI systems, conducting threat modelling on AI, and targeted vulnerability assessments for AI applications.

By day's end, you will possess a thorough comprehension of the core principles and techniques critical to strengthening AI systems. You will have gained practical insights and the confidence to implement cutting-edge AI security measures.

A key resource that is used in the training is the OWASP AI Exchange - the flagship project located at owaspai.org - which forms the foundation of ISO standard 27090 and the security standard of the AI Act.

The training is designed for all levels of attendees. as the material is new from the cutting edge of research and standardization. No in-depth security or AI knowledge is required, although some experience with either AI or security is helpful.

Attendees will be provided with handout slides and afterwards they can retrieve the unique Master AI security certificate.

Some testimonials of previous runs:
  • Stephan Cohen – BNP Paribas: “This training has significantly enhanced my understanding of both the challenges and controls in securing AI. Looking forward to applying these insights in my work. Thank you Rob for this course.”
  • Ramesh Krishnasaga - British Petroleum:  “The training was enlightening. This experience went beyond just training—it provided a strategic roadmap for securing AI applications in practical scenarios."
  • Jedidiah Y - S&P global: “A timely and essential training. The session was truly eye-opening! As a data scientist, I’ve always focused on building and optimizing models—accuracy, performance, and deployment. But this training completely shifted my perspective on the importance of security in AI systems."

Speakers
avatar for Rob van der Veer

Rob van der Veer

Chief AI Officer, Software Improvement Group
Rob van der Veer is an AI pioneer with 33 years of AI experience, specializing in engineering, security and privacy. He is the lead author of the ISO/IEC 5338 standard on AI lifecycle, contributor to OWASP SAMM, co-founder of OWASP's digital bridge for security standards OpenCRE... Read More →
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

1-Day Training: Secure-by-Design AI Applications: Identifying, Testing, and Validating AI-Specific Threats Before Deployment
Wednesday June 24, 2026 9:00am - 5:00pm CEST
1-Day Training: June 24, 2026
Level: Intermediate
Trainer: Marco Morana

**Threat Modeling book (85 euro value) free to the first 10 registrants**

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

As organizations deploy LLMs, chatbots, RAG pipelines, and autonomous AI agents, new attack surfaces emerge that traditional application threat modeling cannot fully capture. This one-day course provides a practical, hands-on introduction to threat modeling AI applications, grounded in the OWASP AI Testing Guide, OWASP AI Exchange, NIST AI RMF, and Secure AI Framework (SAIF).

Participants learn how AI reshapes attack surfaces at the data, model, pipeline, and API layers, and how adversarial risks such as prompt injection, model theft, data poisoning, membership inference, and supply-chain compromise can be identified early and validated before deployment.

Through structured modeling exercises, ATLAS Navigator demos, AI SBOM analysis, attack-flow mapping, and secure-by-design patterns, learners translate AI threat models into actionable test cases aligned to OWASP AITG Test IDs and MITRE ATLAS. The course concludes with an end-to-end capstone where participants model and test a real-world LLM or RAG pipeline.

By the end of the workshop, participants will be able to identify, model, test, and validate AI-specific threats, embed AI testing into DevSecOps workflows, and operationalize AI threat modeling as a repeatable, testable practice for QA, security, and incident response.
Speakers
avatar for Marco Morana

Marco Morana

Field CISO- Head of Application & Product Security Architecture, Avocado Systems Inc.
Marco Morana is the Field CISO at Avocado Systems Inc., specializing in threat modeling automation and Zero Trust Architecture for financial services. With over 15 years of leadership experience, he has held senior security roles at JP Morgan Chase and Citi, securing financial applications... Read More →
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: Adam Shostack's Threat Modeling Intensive
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Adam Shostack

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

This hands-on, interactive class will focus on learning to threat model by executing each of the steps. Students will start with a guided threat modeling exercise, and we'll then iterate and break down the skills they're learning in more depth. We'll progressing through the Four Questions of Threat Modeling: what are we working on, what can go wrong, what are we going to do about it and did we do a good job. This is capped off with an end-to-end exercise that brings the skills together.
Speakers
avatar for Adam Shostack

Adam Shostack

Founder, Shostack & Associates
Adam Shostack is a leading expert on threat modeling. He has decades of experience delivering security. His experience ranges across the business world from founding startups to nearly a decade at Microsoft. His accomplishments include:  Helped create the CVE. Now an Emeritus member... Read More →
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: AI SecureOps: Attacking & Defending AI Applications and Agents (Hybrid)
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer:Abhinav Singh

You may attend this training course in person or virtually

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Can prompt injections lead to complete infrastructure takeovers? Could AI agents be exploited to compromise backend services? Can jailbreaks create false crisis alerts in security systems? In multi-agent systems, what if an attacker takes over an agent’s goals, turning other agents into coordinated threats? This immersive, CTF-styled training in AI and LLM security dives into these pressing questions. Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications & agentic systems to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for AI apps & agents, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.

By the end of this training, you will be able to:

- Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as instruction injection, agent control bypass, remote code execution for infrastructure takeover as well as chaining multiple agents for goal hijacking.
- Conduct AI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
- Execute and defend against adversarial attacks, including prompt injection, data poisoning, jailbreaks and agentic attacks.
- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
- Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
- Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
- Implement an incident response and risk management plan for enterprises developing or using GenAI services.
Speakers
avatar for Abhinav Singh

Abhinav Singh

Cyber Security Research in AI,Cloud & Data, Midfield Security
Abhinav Singh is an esteemed cybersecurity leader and researcher with over a decade of experience working with global technology leaders, startups, financial institutions, and as an independent trainer and consultant. He is the author of the widely acclaimed "Metasploit Penetration... Read More →
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: Repeatable, Scalable and Valuable Code Security Scanning
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer:Josh Grossman

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

To learn more about this training, please visit the link here.

Suddenly anyone and everyone in your organization can use AI assistants to write code. Meanwhile, your actual developers are putting out 100x their previous output , with “varying” levels of quality. So how are you going to secure code at this scale?

This course is designed to be a deep dive into state-of-the-art techniques for validating code security within an organization’s codebase. The course has a strong emphasis on how AI-driven analysis can drive this forward whilst also clearly highlighting where standard, deterministic techniques (albeit incorporating AI acceleration) will be more effective.

During the course, you will learn how to combine these techniques, in a scalable and repeatable way, based on our experience doing just this with real organizations and real teams and with a focus on the current state of the art in this fast-moving area.

This course goes beyond the scope of standard application security knowledge and is designed to make you a specialist in this area. Having spent several years perfecting this process, we are excited to impart the lessons we have learnt!

The course is structured as follows:

* Overview – setting out the basic details of what we will be talking about in terms of code scanning and SAST.
* Key techniques – Discuss the different techniques which can be used for this including generic “off the shelf” SAST, deterministic custom scanning rules, and LLM powered custom AI prompts
* Technique comparison - Advantages and disadvantages of each technique based on our in-depth experience with each and which technique you will want to use in different situations, to avoid wasting time trying to use a technique in an inappropriate use case.
* Organizational process – How to get these processes built into an organization’s existing software lifecycle
* Generic SAST – Using “off the shelf” rules effectively to catch “low hanging fruit” and avoid reinventing the wheel.
* Custom SAST – Introduce custom rule languages (e.g., Semgrep, CodeQL), writing rules from scratch, and scaling analysis across a codebase.
* Basic AI Code Security Scanning – Overview of AI-based scanning, platforms, principles, and initial single-shot prompts.
* Complex AI Code Security Scanning – AI-driven techniques for code security, including using AI to review and triage findings and creating multi-stage rules that combine deterministic rules
Speakers
avatar for Josh Grossman

Josh Grossman

CTO, Bounce Security
Josh Grossman has worked as a consultant in IT and Application Security and Risk for 15 years now, as well as a Software Developer. This has given him an in-depth understanding of how to manage the balance between business needs, developer needs and security needs which goes into... Read More →
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: The Mobile Playbook - A guide for iOS and Android App Security (Hybrid)
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer:Sven Schleier

You may attend this training course in person or virtually.

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

This two-day, hands-on course is designed to teach penetration testers, developers, and engineers how to analyse Android and iOS applications for security vulnerabilities. The course covers the different phases of testing, including dynamic testing, static analysis, reverse engineering, and software composition analysis (SCA). We will also explore how you can use the Model Context Protocol (MCP) to automate some of these workflows and leverage its strengths.

The course is based on the OWASP Mobile Application Security Testing Guide (MASTG) and taught by one of the project co-leaders. This comprehensive, open-source mobile security testing book covers both iOS and Android, providing a methodology and detailed technical test cases to ensure completeness and utilizes the latest attack techniques against mobile applications. This course provides hands-on experience with open-source tools and advanced methodologies, guiding you through real-world scenarios.

Detailed outline

On the first day, we will start with an introduction to the OWASP MASVS and MASTG projects, including the latest updates. Then, we will dive into the Android platform and its security architecture. Students will no longer be required to bring their own Android device; instead, each student will be provided with a cloud-based, virtualised Android device from Corellium.

Topics include:

- Intercepting network traffic of an Android App in various scenarios, including intercepting traffic that is not HTTP.
- Scanning for secrets in an APK.
- Reverse engineering a Kotlin app and identifying and exploiting a real-world deep link vulnerability through manual source code review.
- Static Scanning of decompiled Kotlin source code by using MCP workflows with semgrep and radare2, identifying vulnerabilities and eliminating false positives.
- Frida crash course to get started with dynamic instrumentation on Android apps by using MCP workflows.
- Use dynamic instrumentation with Frida to bypass client-side security controls such as root detection mechanisms.
- We will close day 1 with a Capture the Flag (CTF) by attacking several apps, including a real world app and overcome it's protection mechanisms.

Day 2 focuses on iOS. We will begin the day by exploring the OWASP MASWE and creating an iOS test environment using Corellium and dive into several topics, including:

- Introduction into iOS Security fundamentals
- Intercepting network traffic of an iOS App in various scenarios, including intercepting traffic from apps written in mobile app frameworks such as Google's Flutter.
- How to retrieve an IPA, execute static scanning of an IPA and identifying vulnerabilities and eliminating false positives.
- Software Composition Analysis (SCA) for iOS by using SBOM's and scanning 3rd party libraries and SDKs in mobile package managers for known vulnerabilities and planning mitigation strategies.
- Frida crash course to get started with dynamic instrumentation for iOS applications and utilsing MCP workflows.
- Testing methodology with a non-jailbroken (jailed) device by repackaging an IPA with the Frida gadget.
- Analyse the storage of an iOS app and understand the various options on how (files, databases, logs etc.) and where files can be stored.
- Using Frida to bypass runtime instrumentation of iOS applications, like anti-Jailbreaking Mechanisms.

We'll wrap up the final day with a CTF and participants can win a prize!

Whether you are a beginner who wants to learn mobile app testing from the ground up, or an experienced pentester or developer or engineer who wants to improve your existing skills to perform more advanced attack techniques, this training will help you achieve your goals.

The course consists of many different hands-on labs developed by the instructor or using real world apps that are part of bug bounty platforms.

Upon successfully completing this course, students will have a better understanding of how to test for vulnerabilities in mobile applications, how to recommend appropriate mitigation techniques to developers and how to perform consistent and efficient testing using MCP (Model Context Protocol) workflows.
Speakers
avatar for Sven Schleier

Sven Schleier

Co-Founder, Bai7 GmbH
Sven is a co-founder of Bai7 GmbH in Austria, which is specialized in trainings and advisory. He has expertise in cloud security, offensive security engagements (Penetration Testing) and Application Security, notably in guiding software development teams across Mobile and Web Applications... Read More →
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

3-Day Training: Full-Stack Pentesting Laboratory: 100% Hands-On + Lifetime LAB Access
Wednesday June 24, 2026 9:00am - 5:00pm CEST
3-Day Training:June 22-24, 2026
Level: Intermediate
Trainer: Dawid Czagan


To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Modern IT systems are increasingly complex, making full-stack expertise more essential than ever. That's why diving into full-stack pentesting is crucial—you will gain the skills needed to master modern attack vectors and implement effective defensive countermeasures.

For each attack, vulnerability and technique presented in this training, there is a lab exercise to help you develop your skills step by step. What's more, when the training is over, you can take the complete lab environment home to hack again at your own pace.

I found security bugs in many companies including Google, Yahoo, Mozilla, Twitter and in this training I'll share my experience with you.

Key Learning Objectives
After completing this training, you will have learned about:

- Hacking cloud applications
- API hacking tips & tricks
- Data exfiltration techniques
- OSINT asset discovery tools
- Tricky user impersonation
- Bypassing protection mechanisms
- CLI hacking scripts
- Interesting XSS attacks
- Server-side template injection
- Hacking with Google & GitHub search engines
- Automated SQL injection detection and exploitation
- File read & file upload attacks
- Password cracking in a smart way
- Hacking Git repos
- XML attacks
- NoSQL injection
- HTTP parameter pollution
- Web cache deception attack
- Hacking with wrappers
- Finding metadata with sensitive information
- Hijacking NTLM hashes
- Automated detection of JavaScript libraries with known vulnerabilities
- Extracting passwords
- Hacking Electron applications
- Establishing reverse shell connections
- RCE attacks
- XSS polyglot
- and more …

What Students Will Receive
Students will be handed in a VMware image with a specially prepared lab environment to play with all attacks, vulnerabilities and techniques presented in this training. When the training is over, students can take the complete lab environment home (after signing a non-disclosure agreement) to hack again at their own pace.

Special Bonus
The ticket price includes FREE access to my 6 online courses:

- Fuzzing with Burp Suite Intruder
- Exploiting Race Conditions with OWASP ZAP
- Case Studies of Award-Winning XSS Attacks: Part 1
- Case Studies of Award-Winning XSS Attacks: Part 2
- How Hackers Find SQL Injections in Minutes with Sqlmap
- Web Application Security Testing with Google Hacking

What Students Say About My Trainings
References are attached to my LinkedIn profile (https://www.linkedin.com/in/dawid-czagan-85ba3666/). They can also be found here: https://silesiasecuritylab.com/services/training/#opinions – training participants from companies such as Oracle, Adobe, ESET, ING, Red Hat, Trend Micro, Philips, government sector...
 
What Students Should Know
To get the most of this training intermediate knowledge of web application security is needed. Students should have experience in using a proxy, such as Burp Suite Proxy or Zed Attack Proxy (ZAP), to analyze or modify the traffic.

What Students Should Bring

Students will need a laptop with 64-bit operating system, at least 8 GB RAM, 35 GB free hard drive space, administrative access, ability to turn off AV/firewall and VMware Player/Fusion installed (64-bit version). Prior to the training, make sure there are no problems with running x86_64 VMs.

Additional notes

This new 3-day training was sold out at top security conferences e.g. DEF CON (Las Vegas), Hack In Paris (Paris).

This is a 100% hands-on training: for each attack, vulnerability and technique presented in this training, there is a lab exercise to help students develop their skills step by step.

Speakers
avatar for Dawid Czagan

Dawid Czagan

Founder and CEO, Silesia Security Lab
Dawid Czagan is an internationally recognized security researcher and trainer. He is listed among top hackers at HackerOne. Dawid Czagan has found security bugs in Apple, Google, Mozilla, Microsoft and many others.

Due to the severity of many bugs, he received numerous awards for his findings. Dawid Czagan shares his security experience in his hands-on trainings. He delivered trainings at key industry conferences such as DEF CON (Las Vegas), OWASP 2025 Global AppSec EU (Barcelona), Hack In The... Read More →
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

3-Day Training:AI Whiteboard Hacking aka Hands-on Threat Modeling Training
Wednesday June 24, 2026 9:00am - 5:00pm CEST
To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

3-Day Training: June 22-24, 2026
Level: Beginner
TrainerSebastien Deleersnyder

Download the complete training outline: AI Whiteboard Hacking Training Details

Testimonial: "After years evaluating security trainings at Black Hat, including Toreon's Whiteboard Hacking sessions, I can say this AI threat modeling course stands out. The hands-on approach and flow are exceptional - it's a must-attend."
- Daniel Cuthbert, Global Head of Cyber Security Research, Black Hat Review Board Member

In today's rapidly evolving AI landscape, security threats like prompt injection and data poisoning pose significant risks to AI systems. Our 3-day AI Whiteboard Hacking training equips you with practical skills to identify, assess, and mitigate AI-specific security threats using our proven DICE methodology. Through hands-on exercises and real-world scenarios, you'll learn to build secure AI systems while ensuring compliance with regulations like the EU AI Act.

The training concludes with an engaging red team/blue team wargame where you'll put theory into practice by attacking and defending a rogue AI research assistant. Upon completion, you'll earn the AI Threat Modeling Practitioner Certificate and gain access to a year-long subscription featuring quarterly masterclasses, expert Q&A sessions, and continuously updated resources.

Led by Sebastien Deleersnyder, co-founder and CTO of Toreon, and Black Hat trainer, this training combines technical expertise with practical insights gained from real-world projects across government, finance, healthcare, and technology sectors.

Quick Overview:
·       Target Audience: AI Engineers, Software Engineers, Solution Architects, Security Professionals
·       Prerequisites: Basic understanding of AI concepts (pre-training materials provided)
·       Certification: AI Threat Modeling Practitioner Certificate
·       Bonus: 1-year AI Threat Modeling Subscription included

Our lineup of the hands-on exercises from the training that let you put AI security concepts into practice:

Day 1: Foundations & Methodology
·       "AI Security Headlines from the Future" - Explore potential security scenarios
·       "Diagramming the AI Assistant Infrastructure" - Map out real AI system components
Speakers
avatar for Sebastien Deelersnyder

Sebastien Deelersnyder

Co-Founder and CEO, Toreon
Sebastien Deleersnyder, also known as Seba, is a highly accomplished individual in the field of cybersecurity. He is the CTO and co-founder of Toreon, as well as the COO and lead threat modeling trainer of Data Protection Institute. Seba holds a Master's degree in Software Engineering... Read More →
Wednesday June 24, 2026 9:00am - 5:00pm CEST
 
Thursday, June 25
 

10:30am CEST

Why Isn't the Fix in My Container? Tracking CVE Propagation Across 10,000 Projects
Thursday June 25, 2026 10:30am - 11:15am CEST
We analyzed CVE remediation patterns across 10,000 open source projects to uncover a critical problem: vulnerabilities fixed upstream often take weeks or months to reach downstream containers. This lag creates massive security exposure windows in Kubernetes environments.

In this talk, we'll present our findings showing how CVE fixes flow (or stall) across ecosystem layers, from upstream projects to package managers to base images to final containers. You'll see real metrics on remediation delays, and the compounding effect of layered dependencies.

But we won't stop at the problem. The second half focuses on practical solutions. From automated patch backporting to in-place image patching with tools like Copa. You'll learn how to build workflows that dramatically reduce MTTR, including dependency automation patterns and risk-based prioritization.

Attendees will leave with both a data-driven understanding of the CVE remediation challenge and a practical playbook for fixing it.
Speakers
avatar for Lior Kaplan

Lior Kaplan

Open Source evangelist, Open Source Security expert, Kaplan Open Source
As a Linux sysadmin for many years, Kaplan has being focused Open Source & Security from various perspectives - upstream projects, the Linux distributions and the DevOps / platform engineering teams who maintain the infrastructure.
Kaplan is a long time Open Source community membe... Read More →
avatar for Mor Weinberger

Mor Weinberger

Software Architect, Echo

Mor is a Software Architect specializing in cloud-native security and software supply chain resilience. His work focuses on designing scalable systems to detect and mitigate emerging threats across modern cloud environments. Over the years, he has identified issues ranging from unsecured... Read More →
Thursday June 25, 2026 10:30am - 11:15am CEST
Hall K1 (Level -2)

10:30am CEST

Builders & Breakers Part II: Securing Agentic AI After the Death of LLM Wrappers
Thursday June 25, 2026 10:30am - 11:15am CEST
Last year at OWASP Global AppSec Barcelona, we showed how to break and defend LLM-integrated apps: (indirect) prompt injection, jailbreaks, data poisoning. And what practical controls actually worked in production. But the game has changed.

This follow-up talk picks up where we left off, focusing on the next generation of LLM-driven systems: agentic AI and e.g. MCP (Model Context Protocol) & A2A (Agent2Agent). These systems combine LLMs with tools, memory, plugins, APIs, and planning loops, making them far more powerful, and also far more fragile.

We’ll walk through how this new architecture has shifted the attack surface, and why last year’s defences (input validation, injection prevention) don’t hold up anymore. Expect real-world attack paths: memory poisoning, tool misuse, and agent goal hijacking. Then we’ll show you what works: “Zero Trust”-style isolation, sandboxing tool execution, runtime plan validation, and defence patterns that are actually deployable.

This is not a theoretical talk. It’s a two-speaker format - builder and breaker - based on real-world incidents, internal and external red teaming, and live demos. If you’re building, securing, or reviewing AI-driven systems that do more than just chat, this is the session to see what’s coming and how to stay ahead.
Speakers
avatar for Javan Rasokat

Javan Rasokat

Senior Application Security Specialist, Sage

Javan is a DevOps Security Specialist at Sage, where he joined six years ago to lead Product Security for Central Europe and now supports products globally, contributing on the standardisation of security controls. He discovered his passion for security early in his career while identifying... Read More →
avatar for Rico Komenda

Rico Komenda

Senior Security Consultant

Rico is a senior product security engineer. His main security areas are in application security, cloud security, offensive security and AI security.

For him, general security intelligence in various aspects is a top priority. Today’s security world is constantly changing and you... Read More →
Thursday June 25, 2026 10:30am - 11:15am CEST
Hall G1 (Level -2)

10:30am CEST

AI Explainability Score Card
Thursday June 25, 2026 10:30am - 11:15am CEST
AI is tightening its grip on security operations, but when no one can explain what a system is doing, accountability breaks down and attackers gain the edge. Regulations like the EU AI Act now require AI systems to be transparent, yet most organizations lack a concrete way to measure what “transparent” actually means. The AI Explainability Scorecard fills that gap by providing a fast, practical way to assess whether an AI system is traceable and defensible, scoring it on faithfulness, comprehensibility, consistency, accessibility, and operational clarity, including for LLM-based systems. The takeaway is clear: if you cannot explain the results of your AI, it is running your business, not your people.
Speakers
avatar for Michael Novack

Michael Novack

Solution Architect, Aiceberg

Michael is a product-minded security architect who loves turning tangled AI risks into clear, practical solutions. As Solution Architect at Aiceberg, he helps enterprises bake AI explainability and real-time monitoring straight into their systems, transforming real customer insights... Read More →
Thursday June 25, 2026 10:30am - 11:15am CEST
Hall D (Level -2)

11:30am CEST

Actionable Continuous SBOM Diffing
Thursday June 25, 2026 11:30am - 12:15pm CEST
SBOMs are known to be at the forefront of modern strategies to ensure supply chain security. However, there are two key problems that traditional SBOM workflows do not solve: working with components that do not have well-established identifiers and the introduction of malware in the supply chain.

This presents a significant gap between the expectations of SBOM adoption and the real value it can deliver. This talk will explore the concept of applying continuous SBOM diffing as part of the CI process. Rather than analyzing an SBOM for each release as a standalone artifact, we can compute diffs and take actions based on whether something has changed from the previous component release.

This approach makes all SBOM components actionable, even those that otherwise seem meaningless. For example, if an individual file that is not part of any library appears in an SBOM, legacy approaches make it difficult to reason about such a file. However, with continuous SBOM diffing, tracking changes in such components becomes meaningful and therefore actionable. For example, if a new component file appears with an unknown origin, we can sanitize the build and conduct additional investigations into what happened.

We will also demonstrate practical examples of how to achieve such actionable workflows using open-source tooling.
Speakers
avatar for Pavel Shukhman

Pavel Shukhman

CEO, Reliza

Pavel Shukhman is Co-Founder and CEO of Reliza, where he oversees the company's efforts in managing software and hardware releases, xBOMs, versioning and component identification. With over a decade of experience leading software teams, he has helped organizations implement DevOps... Read More →
Thursday June 25, 2026 11:30am - 12:15pm CEST
Hall K1 (Level -2)

11:30am CEST

Authorization Is Where Your App Goes to Lie
Thursday June 25, 2026 11:30am - 12:15pm CEST
Your authorization logic probably lives in code, while the rationale behind it lives only in people’s heads.

That’s why authorization breaks in familiar ways: a missing check, an incorrect assumption, a copied snippet that made sense in one endpoint but was entirely wrong for another.

This talk is about making authorization logic visible earlier, during design, so engineers have something concrete to implement and reviewers have something concrete to critique. We’ll walk through a lightweight, design-time template that turns “who should be able to do what” into a structured artifact that can later be translated into policy-as-code, tested, and enforced consistently.

No new tools required; the focus is on a design-time step that fits cleanly into architecture reviews and threat modeling, and makes authorization easier to get right.
Speakers
avatar for Eden Yardeni

Eden Yardeni

Senior AppSec Engineer

Eden Yardeni works in application security, and contributes to OWASP projects including ASVS. She previously worked as a full-stack developer, but moved into application security when she heard there would be cookies.    linkedin.com/in/eden-yardeni/
... Read More →
Thursday June 25, 2026 11:30am - 12:15pm CEST
Hall D (Level -2)

11:30am CEST

Developing Effective Security Testing Skills with Objective Structured Assessments
Thursday June 25, 2026 11:30am - 12:15pm CEST
Technical skill development and evaluation for application (software) security testers remains underdeveloped. There is no widely adopted framework defining core competencies, proficiency levels, or objective assessment criteria. In the absence of such standards, the industry has defaulted to a fragmented ecosystem of private organizations offering training and certifications that insufficiently prepare the next generation of security testers for real-world testing.

This environment disproportionately rewards those who benefit from exceptional mentorship or possess the time, resources, and aptitude for intensive self-directed learning. The popular mantra “Try Harder” reflects this culture of self-made expertise, but it also serves as a substitute for formalized training models. Further, aspiring security professionals are left to

In contrast, more mature, life-critical disciplines that demand high levels of technical skill (such as aviation and surgery) are built upon standardized curricula, clearly defined skill progressions, and objective methods for evaluating competence. This is not by chance; over many decades, these (and related) fields have honed in how to achieve optimal outcomes through evidence-based training programs and practices.

In this talk, we will examine the past, present, and prospective future of application security tester training in comparison to more mature professions that demand a high level of technical skill. We will introduce a novel framework for evaluating technical skills and demonstrate its application in combination with a comprehensive AppSec curriculum. Both the assessment framework and the curriculum will be released to the open-source community at the time of presentation.
Speakers
avatar for Ryan Armstrong

Ryan Armstrong

AppSec Manager, Tester, and Teacher, Digital Boundary Group (DBG)
Ryan Armstrong is the Manager of Application Security Services at Digital Boundary Group (DBG). Ryan began with DBG as an application penetration tester and security consultant following completion of his PhD in Biomedical Engineering at Western University in 2016. With a passion... Read More →
Thursday June 25, 2026 11:30am - 12:15pm CEST
Hall G2 (Level -2)

1:15pm CEST

OWASP ModSecurity
Thursday June 25, 2026 1:15pm - 1:45pm CEST
As the cornerstone of open-source Web Application Firewalls, OWASP ModSecurity has protected the web for decades. However, maintaining its relevance in today’s evolving threat landscape requires more than just incremental updates—it requires a fundamental modernization. This presentation dives deep into the recent engineering efforts aimed at transforming the ModSecurity codebase into a leaner, more robust, and future-proof security engine.

Key highlights include:

* Code Quality & Refactoring: How we addressed technical debt and implemented stricter development standards.

* New Features: A look at the latest functionalities designed to counter sophisticated web attacks.

* Dependency Management: The rationale behind removing abandoned libraries and the technical challenges involved.

* The Path to a New Version: Why a major version update became necessary and what it means for the community.

* Beyond the Code: A brief look at the supporting ecosystem, including the complete renewal of the official website and documentation.

Attendees will gain a clear understanding of the architectural decisions shaping the next era of ModSecurity and what to expect from the upcoming releases.
Speakers
avatar for Ervin Hegedus

Ervin Hegedus

Project Co-Lead, OWASP ModSecurity
I'm 54, system and software engineer. ModSecurity contributor since 2017, Coreruleset developer since 2019, OWASP member since 2021 and project co-leader since 2024.
Thursday June 25, 2026 1:15pm - 1:45pm CEST
Room -2.82 (Level 2)

1:15pm CEST

The Velocity Paradox: Why Slow is Smooth and Smooth is Fast in AppSec
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Many AppSec programs fail because they try to run before they can walk. But in the world of ever changing attack surface, the truth is - Slow is smooth, smooth is fast, and 'smooth' is how we actually ship secure software at the speed of business.

This presentation outlines our multi-phased methodology for establishing an AppSec program. This approach emphasizes incremental, measurable, and sustainable goals throughout the journey. I will share ‘why, what and how’ of each major business-tailored adoption of frameworks like OWASP SAMM, Security Champions Guide and open source solutions. This talk will cover both cultural and technical aspects of the program, ranging from pushback from development to customization of language-specific-SAST policies to measuring the value with KPIs.

Application security practitioners will be able to use the strategy shared in this talk to build and scale the AppSec program aligned with their business goals.
Speakers
avatar for Pramod Rana

Pramod Rana

Sr. Manager - Application Security Assurance, Netskope

Pramod Rana is author of below open source projects:
1) Omniscient - LetsMapYourNetwork: a graph-based asset management framework
2) CICDGuard - Orchestrating visibility and security of CICD ecosystem
3) vPrioritizer - Art of Risk Prioritization: a risk prioritization framework

He ha... Read More →
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Hall K2 (Level -2)

1:45pm CEST

OWASP KubeFIM: Detecting File Integrity Threats with eBPF & AI in Kubernetes
Thursday June 25, 2026 1:45pm - 2:15pm CEST
Introduction

File Integrity Monitoring is still a critical part of runtime security, but in Kubernetes it comes with new challenges. A single cluster can generate thousands of file system events per second across containers, nodes, and workloads. While eBPF allows us to safely and efficiently capture these events at the kernel level, interpreting them remains a hard problem.

OWASP KubeFIM AI is built to address this gap.

This session presents how KubeFIM AI sits on top of the OWASP KubeFIM Agent and analyzes kernel-level File Integrity Monitoring events collected via eBPF. Instead of treating each event as an alert, KubeFIM AI focuses on reasoning over events by correlating them with Kubernetes context such as pods, namespaces, images, and workload behavior.

Technical Details and Future Roadmap

The talk will cover:

1. Why raw eBPF-based FIM events are difficult to use at scale

2. What kernel-level file operations actually tell us during real attacks

3. How KubeFIM AI models file behavior over time instead of reacting to single events

4. Using Kubernetes context to distinguish expected behavior from suspicious activity

5. How AI can reduce noise, explain intent, and improve triage without hiding technical details

Rather than using a generic large language model, KubeFIM AI is designed around a domain-specific approach, trained to understand file system behavior, container lifecycles, and Kubernetes runtime patterns. The focus is on producing human-readable security insights.

The session will also discuss the roadmap for the project, including plans to improve detection accuracy, reduce alert fatigue, and assist security teams with faster incident response in cloud-native environments.

Explain why KubeFIM AI Is Not a SIEM Replacement

KubeFIM AI is not designed to replace a SIEM. It solves a different problem at a different layer of the stack.

SIEM platforms focus on collecting, storing, and correlating logs and alerts from many sources across an organization. They are built for visibility, compliance, long-term retention, and investigation across applications, cloud services, networks, and users.

KubeFIM AI operates much closer to the system. It works at the Linux kernel level using eBPF to observe file system behavior inside Kubernetes nodes and containers. Its primary role is to generate high-quality runtime security signals, not to aggregate logs or manage incidents.

The project intentionally avoids becoming a central log store or alerting platform. Instead, it focuses on understanding why a file change occurred, whether it matches expected workload behavior, and whether it may indicate a security issue. This analysis happens before data is sent anywhere else.

In practice,
Speakers
avatar for Abhijit Chatterjee

Abhijit Chatterjee

Co-Founder of Cyber Secure India (CSI), Cyber Secure India
Co-Founder of Cyber Secure India (CSI), a cybersecurity think tank focused on driving cybersecurity awareness, building a strong community through free education, sharing knowledge, and empowering young individuals to strengthen the digital infrastructure.
Thursday June 25, 2026 1:45pm - 2:15pm CEST
Room -2.82 (Level 2)

2:15pm CEST

From 0 to SLSA Level 3: A Practitioner's Field Guide
Thursday June 25, 2026 2:15pm - 3:00pm CEST
SLSA (Supply-chain Levels for Software Artifacts) promises to secure your software supply chain—but implementing it at enterprise scale is harder than the spec suggests. This talk shares our journey to SLSA Level 3, including the architectural decisions, performance trade-offs, and customer escalations that shaped our approach.

You'll learn:
- Provenance attestation architecture for multi-tenant CI/CD pipelines
- How to integrate SLSA verification without breaking existing workflows
- Real metrics: what SLSA costs in CI minutes and what attacks it actually catches
- Common implementation pitfalls and how to avoid them

Whether you're just starting your SLSA journey or stuck at Level 2, walk away with battle-tested patterns that work at scale.
Speakers
avatar for Mark Mishaev

Mark Mishaev

Senior Engineering Manager, Software Supply Chain Security, Gitlab

Senior Manager of Software Supply Chain Security at GitLab, leading 40+ engineers across Authentication, Authorization, Pipeline Security, and Compliance teams. He drives GitLab's SLSA implementation and security architecture for CI/CD pipelines serving millions of developers.
Wit... Read More →
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Hall K1 (Level -2)

2:15pm CEST

Human Rights Threat Modeling
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Security and privacy threat models are fundamental tools in AppSec, but in modern systems, such as Identity and Access Management (IAM) and AI, they fail to intercept a growing class of threats: those that do not compromise the system but produce harm to people.

In this talk, we show why traditional threat models fail to capture these problems and how the limitation is not technical but cognitive. Human rights concepts are too abstract for many technicians, just as security was for developers before Threat Modeling became a facilitated and shared practice.

Through a concrete use case on IAM - extendable directly to AI systems - we present an approach that integrates Threat Modeling and harm modeling through a structured facilitation process, supported by cards and serious games.

The goal is not to turn developers into human rights experts but to make these threats visible, debatable, and mitigable using familiar AppSec tools.
Speakers
avatar for Giovanni Corti

Giovanni Corti

Cybersecurity Researcher, FBK

Cybersecurity professional specializing in cyber threat intelligence and in threat modeling for security, privacy, and user safety in high-risk systems.
  linkedin.com/in/g-corti
... Read More →
avatar for Simone Onofri

Simone Onofri

Security Lead, W3C

Simone is the W3C Security Lead. He has 20+ years of expertise in red/blue Teaming and Web security. He has spoken at OWASP, TEDx, and other events and authored Attacking and Exploiting Modern Web Applications.    linkedin.com/in/simoneonofri
... Read More →
avatar for Luca Lumini

Luca Lumini

Executive Security Advisor

Executive Security Advisor with more than 20 years of consulting experience focusing on corporate cyber strategy and security risk advisory, as Chief Security Officer Luca has been leading the Security Strategy and AI Innovation team for the AXA International Markets region. He is... Read More →
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Hall D (Level -2)

2:15pm CEST

Taming the AppSec Data Deluge
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Application Security engineers face a critical challenge: information overload from disparate security tools create “decision paralysis”. How do you balance design reviews, threat modeling, code reviews, monitoring alerts and managing your bug bounty program in an intentional instead of ad-hoc or reactive way?

This presentation demonstrates a novel approach using AI agents combined with Model Context Protocol (MCP) servers to automate work discovery and prioritize intelligently. Through practical examples, I'll show how Claude Code integrates with existing enterprise infrastructure—including issue tracking systems, content management platforms, Cloud Security Posture Management (CSPM) tools, and version control systems—to create an autonomous triage and prioritization engine.

You'll see how AI agents can pull together security data from all your different tools, figure out what actually matters based on your business context and threat intel, and spit out a prioritized to-do list that makes sense. I'll walk through real examples showing how this approach cuts down remediation times and helps you cover more ground with the same resources.
Speakers
avatar for Ben Sleek

Ben Sleek

Security Engineer, Proof

I’m an ex-Developer turned Application Security Engineer currently employed by Proof. After 10 years of building applications, I discovered breaking them could be just as fun.
  linkedin.com/in/ben-sleek-243aaa1/
... Read More →
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Hall K2 (Level -2)

2:15pm CEST

This Build can Break You - Evil Runners and eBPF for Detection
Thursday June 25, 2026 2:15pm - 3:00pm CEST
CI/CD pipelines play an important role in modern software development. From a security perspective, this methodology contributes to more secure products, as automated checks can be applied on every run. Developers define tasks in a metadata file, and the system executes the defined jobs automatically. But what if the build chain itself becomes the security problem, allowing attackers to manipulate artifacts or take control of backend infrastructure? Let’s take a deep dive into “Poisoned Pipeline Execution” (OWASP CICD-SEC-4).

Builds are typically carried out in multiple steps using Runners—agents that pick up jobs and execute build instructions. These instructions, such as compiling a program or building a container image, are usually performed inside containers. Containers may provide isolation, but the effectiveness in terms of security strongly depends on the Runner’s configuration. Attackers can abuse Runners to execute arbitrary commands, leading to information disclosure or privilege escalation. While such attacks are well documented, effective detection mechanisms are often lacking.

Any viable detection method must be independent of the source code, language-agnostic, and container-friendly. The eBPF technology, which enables tracing of kernel-level activity, is well suited for this purpose. In this talk, we explore security vulnerabilities in CI Runners, how they become targets for attackers, and how malicious activities can be detected using eBPF.
Speakers
avatar for Reinhard Kugler

Reinhard Kugler

Principal Security Consultant, SBA Research

Reinhard’s focus relies on security testing of IT and industrial cyber-physical systems. Based on his prior experience in cyber defense, he works with companies to develop security capabilities and secure products. Reinhard is an experienced instructor and develops tailored security... Read More →
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Hall G2 (Level -2)

3:30pm CEST

The Devil is in the Defaults - what to do about XSS
Thursday June 25, 2026 3:30pm - 4:15pm CEST
This session is about latest defenses against Cross-Site Scritping (XSS), the most prevalent security issue of all times. We will showcase typical XSS bugs and how they can be avoided. We will also explain why previous mechanisms fall short of protecting web sites at scale and why we believe Trusted Types and the Sanitizer API can help closing this gap.
The presentation will also give hands-on advice to enable security and development teams adopting these new protections. We will close with a bit on security considerations and remainign risks.
Speakers
avatar for Frederik Braun

Frederik Braun

Security Engineer, Mozilla Firefox Berlin

Frederik Braun builds security for the web and for Mozilla Firefox from Berlin. As a contributor to standards, Frederik is also improving the web platform by bringing security into the defaults with specifications like the Sanitizer API and Subresource Integrity. Before Mozilla, Frederik... Read More →
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Hall G1 (Level -2)

3:30pm CEST

Agile Development and IT Security – From Conflict to Collaboration
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Agile software development and IT security share the goal of delivering reliable, robust software, yet they often collide in practice. Security validation is still frequently deferred to the end of the development lifecycle, producing findings too late to be effectively addressed. Under delivery pressure, this can lead to defensive reactions toward security activities and tools. This talk explores why security issues are detected yet may not be processed soon and shows how integrating security early and continuously can transform friction into collaboration.
Speakers
avatar for Juliane Reimann

Juliane Reimann

Founder and Security Community Expert, Full Circle Security
Juliane Reimann works as cyber security consultant for large companies since 2019 with focus on DevSecOps and Community Building. Her expertise includes building security communities of software developers and establishing developer centric communication about secure software development... Read More →
avatar for Elisa Erbe

Elisa Erbe

Project Manager, FullCyrcle Security

Elisa Erbe has been working as a project manager in digital web solutions and cybersecurity companies since 2021, with a focus on agile planning and processes. Before transitioning into project management in the IT sector, she gained experience in teaching, research, and organizational... Read More →
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Hall K2 (Level -2)

3:30pm CEST

Boiling the Ocean for Signal: Lessons from High-Volume OSS Malware Detection
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Malicious open source packages are on the rise, targeting more and more ecosystems. And while open source maintainers and users struggle to secure the immense attack surface of today’s software development practice, attackers continue to evolve their techniques.

This talk presents lessons learned from developing and operating an end-to-end malware detection pipeline in an enterprise setup that automatically scans tens of thousands packages a day, and is followed by human review of reported malware. It provides an overview about and fundamental design decisions, starting from a suitable classification scheme and the selection of meaningful signals with a low signal-to-noise ratio, to the compilation of Indicators of Compromise and the final reporting of confirmed malicious packages to the respective registries and third-party databases like OSV. The individual sections and learnings will be motivated and illustrated through real-world samples as well as descriptive statistics obtained from our system.

Session attendees will learn about:
- Latest open source malware trends,
- common evasion techniques used by attackers, from encoding techniques, code transformations and payload splitting to prompt instructions aiming to sabotage LLM-based detectors,
- the shortcomings of current malware datasets in regard to supporting developers in the evaluation of malware scanners, e.g., the lack of accompanying metadata and qualitative descriptions,
- the importance and complementarity of code and metadata-based detection signals,
- requirements and design decisions for an end-to-end OSS malware scanner, e.g., the realization that a binary classification benign/malicious is not colorful enough for the breadth of software distributed through OSS registries like npm or PyPI, and
- descriptive statistics obtained from our system, showing the prevalence of techniques used in the wild, e.g., the prevalence of different malware triggers and targeted platforms.

As such, the presentation targets both open source users interested in the latest malware trends and safeguards, as well as builders wanting to create an end-to-end OSS scan pipeline, e.g., because their ecosystem is already targeted by attackers but not yet or not sufficiently covered by state-of-the-art scanners.
Speakers
avatar for Henrik Plate

Henrik Plate

Security Researcher, Endor Labs

In his current position, Henrik aims at improving the security of today’s software supply chains, and in particular the secure consumption of open source. He formerly worked for SAP Security Research, where he led the focus topic "open source security" starting in 2014. He co-authored... Read More →
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Hall G2 (Level -2)
 
Friday, June 26
 

10:30am CEST

Your Localhost Is Lying to You: Trust Boundary Failures in Enterprise SSO
Friday June 26, 2026 10:30am - 11:15am CEST
When an attacker lands on a user’s machine, your SSO should not hand them the keys to your network. Yet many enterprise systems do because they assume localhost subdomains are safe. They are not.

This talk shows how a common DNS misconfiguration (localhost.target.com → 127.0.0.1), combined with domain-wide cookies (Domain=.target.com), allows a locally executed request context to inherit an authenticated session. No XSS. No phishing. Just browser-native behavior.

This flaw is rarely detected by scanners or standard penetration tests, yet it appears in real enterprise deployments today. The session presents a practical testing methodology, a defensive checklist, and research-based validation techniques to assess this class of trust boundary failure safely.

Attendees will leave able to identify and fix this issue in their own SSO deployments next week.
Speakers
avatar for Rupesh Kumar

Rupesh Kumar

Application Security Researcher | Red Team Practitioner

Rupesh Kumar is an offensive security researcher with 1.5 years of experience in web application testing, vulnerability research, and red team operations. He has reported critical and high-severity vulnerabilities to organizations across government, defense, healthcare, and critical... Read More →
Friday June 26, 2026 10:30am - 11:15am CEST
Hall G2 (Level -2)

11:30am CEST

Infrastructure Doesn’t Lie: Using Infrastructure Signals to Detect Shadow AI Built Applications
Friday June 26, 2026 11:30am - 12:15pm CEST
AI app builders now enable production apps to ship without repositories, CI/CD, or security review, often by non-traditional developers outside established engineering workflows. These Shadow AI apps bypass AppSec pipelines and governance, creating a growing blind spot in enterprise environments. This talk demonstrates how DNS, TLS, and hosting signals can detect shadow AI apps that existing controls miss.
Speakers
avatar for Balachandra Shanabhag

Balachandra Shanabhag

Product Security Lead, Cerebras

Bala is working as Staff security Engineer for Cohesity. Bala has over 15 years of experience in various domains of cybersecurity. Bala Joined Cohesity as Founding Product Security Engineer and helped boot strap Appsec and other security initiatives. Before Cohesity Bala worked at... Read More →
Friday June 26, 2026 11:30am - 12:15pm CEST
Hall K1 (Level -2)

11:30am CEST

Phishing for Passkeys - An Analysis of WebAuthn and CTAP
Friday June 26, 2026 11:30am - 12:15pm CEST
WebAuthn was supposed to replace passwords on the web: uniform, secure, manageable authentication for everyone! One of its unique selling points was supposed to be the impossibility of phishing attacks. When Passkeys were introduced, some of WebAuthn's security principles were watered down in order to achieve some usability improvements and thus reach more widespread adoption.

This presentation discusses the security of Passkeys against phishing attacks. It explains the possibilities for an attacker to gain access to accounts secured with Passkeys using spear phishing, and what conditions must be met for this to happen. It also practically demonstrates such an attack and discusses countermeasures.

Participants will learn which WebAuthn security principles still apply to Passkeys and which do not. They will learn why Passkeys are no longer completely phishing-proof and how they can evaluate this consideration for their own use of Passkeys.
Speakers
avatar for Michael Kuckuk

Michael Kuckuk

Fullstack Developer, inovex

As a fullstack software developer, Michael's main expertise lies in simple software development. But since he is well aware that the happy path is the easy part, he's always had an interest for security and he's always been very security- and privacy-aware in his work. He enjoys developing... Read More →
Friday June 26, 2026 11:30am - 12:15pm CEST
Hall D (Level -2)

11:30am CEST

Effort is All You Need: Testing LLM Applications in the Real World
Friday June 26, 2026 11:30am - 12:15pm CEST
Security testing of GenAI systems is often reduced to "LLM red teaming": probing a model in isolation to see what unsafe/offensive content it will generate. In practice, this approach falls short. As security practitioners, we need to assess complete LLM application use cases, focusing on how inputs and outputs propagate through application logic and enable concrete security risks such as data exfiltration, cross-site scripting, and authorization bypass.

In this talk, we share practical experience and supporting open-source tooling we developed for assessing LLM applications. These focus on testing systems where the LLM is embedded in application logic rather than exposed as a simple inference endpoint.

It covers approaches for testing non-conversational GenAI workflows, WebSockets, and custom APIs; building scoped prompt injection datasets aligned with application logic and engagement constraints; applying effort-based jailbreak techniques (e.g. anti-spotlighting, best-of-n, crescendo, ...) to evaluate guardrail robustness and demonstrate practical bypasses; and conducting meaningful testing in isolated or air-gapped environments.

Speakers
avatar for Donato Capitella

Donato Capitella

Principal Security Consultant, Reversec

Donato Capitella is a Software Engineer and Principal Security Consultant at Reversec, with over 15 years of experience in offensive security and software engineering. Donato spent the past 3 years conducting research and assessments on Generative AI applications, covering topics... Read More →
avatar for Thomas Cross

Thomas Cross

Security Consultant, Reversec

Friday June 26, 2026 11:30am - 12:15pm CEST
Hall G2 (Level -2)

1:15pm CEST

The OG OWASP Top 10 Might Be Back Thanks to Agentic Browsers
Friday June 26, 2026 1:15pm - 2:00pm CEST
Agentic browsers are quickly becoming one of the most powerful—yet dangerous—applications of agentic AI. By combining web navigation, content interpretation, and direct action taking, they act as a universal gateway to almost any service or application on the internet.

That power quietly reintroduces web security risks many teams assumed were behind us. Agentic browsers read and react to untrusted web content, follow instructions embedded in pages, images, and hidden text, and then execute actions inside real sessions.

The result is that classic web attack patterns made popular 20+ years ago when the first OWASP Top 10 was introduced may be back.

Things like injection manipulations, cross-site scripting payload delivery, CSRF-style action abuse, broken access control, and cross-origin boundary failures—now executed by autonomous agents instead of users.

This talk examines why current agentic browser designs break core web security assumptions around origins, cookies, and session boundaries, and why common mitigations such as human-in-the-loop controls introduce friction and fatigue without solving the underlying problem. We'll argue that unrestricted multi-site agents are fundamentally unsafe, and share better approaches based on domain-scoped agents, strict isolation, and secure multi-agent orchestration.
Speakers
avatar for Lidan Hazout

Lidan Hazout

CTO and Co-Founder, Capsule Security

Lidan has been programming since childhood, driven by a deep passion for data and AI. He previously served as VP of R&D at SecuredTouch, where he helped pioneer behavioral biometrics. Following the company’s acquisition by Ping Identity, the technology he led became a core component... Read More →
avatar for Bar Kaduri

Bar Kaduri

Head of Research, Capsule Security

Bar Kaduri is a cybersecurity researcher, leader, and international speaker with over 14 years of experience in cloud security, software supply-chain risk, and emerging AI threats. With hands-on expertise in evaluating and stress-testing AI systems, Bar focuses on building practical... Read More →
Friday June 26, 2026 1:15pm - 2:00pm CEST
Hall G1 (Level -2)

1:15pm CEST

AI-Generated Code vs Human Code. Who Really Writes More Vulnerabilities
Friday June 26, 2026 1:15pm - 2:00pm CEST
When AI coding tools entered mainstream development, the application security community reacted fast and loudly. Many warned that AI would dramatically increase vulnerabilities. The most common argument was simple and intuitive. AI models were trained on vast amounts of real-world code, including insecure and vulnerable code. Garbage in, garbage out. If AI learned from vulnerable code, it would inevitably reproduce those vulnerabilities at scale.

This claim quickly became accepted wisdom, despite the fact that almost no one could actually prove it.

This session presents a data-driven examination of that assumption. By correlating reported security vulnerabilities with automated line-level code attribution, we were able to determine whether a vulnerability originated in AI-generated code or human-written code. This allowed us to move the discussion from fear and intuition to measurable evidence.

The results are more nuanced and more interesting than the prevailing narrative suggests. In some scenarios, AI-generated code showed higher vulnerability density. In others, it performed comparably to, or even better than, human-written code. The differences are not accidental. They correlate strongly with the model used, the tooling, and how developers interact with AI, rather than AI usage alone.

This talk challenges the notion that AI coding is inherently insecure. It replaces the garbage-in, garbage-out argument with concrete data, identifies where the real risks actually emerge, and explains what this means for modern AppSec strategy. Attendees will leave with evidence they can use to recalibrate policies, controls, and conversations around AI-assisted development, without slowing teams down or relying on assumptions.
Speakers
avatar for Eitan Worcel

Eitan Worcel

CEO & Co-Founder, Mobb

Eitan Worcel is the co-founder and CEO of Mobb. He has close to 20 years of experience in application security, spanning hands-on software development, product leadership, and executive roles. Throughout his career, Eitan has worked closely with engineering and security teams to understand... Read More →
Friday June 26, 2026 1:15pm - 2:00pm CEST
Hall D (Level -2)

1:15pm CEST

What Our Pen Tests Never Found — And How Attackers Did
Friday June 26, 2026 1:15pm - 2:00pm CEST
Penetration testing is a crucial part of application security practices, yet attackers often succeed in ways no test ever reported. No injection, no memory corruption, no failed authentication. The applications behaved exactly as designed — and that was enough.

In this talk, we will explore what penetrating testing is intended to detect and how attackers actually compromise the systems. This talk will address why well-scoped penetration testing frequently revealed "no critical findings" while attackers later leveraged legitimate workflows, permission assumptions, and trust boundaries to cause serious harm.
Based on real world examples and post incident analysis, this talk will walk through security issues that were frequently overlooked during testing, not because testers lacked skills, but because the testing process made assumptions that attackers did not follow. We will focus on examining the blind spots in the penetration testing process, which include behaviors that only appear in production, cross-feature chaining, abuse of business logic, and trust assumptions built into system architecture.

The objective of this talk will be to comprehend where pen testing ends and how defenders might modify their testing tactics accordingly, rather than to replace it. This talk will break down the classes of issues pen tests routinely miss, how attackers discover them post-deployment, and what changed when testing strategies shifted from endpoint coverage to adversary-aware validation.

Attendees will leave with practical techniques to evolve their AppSec testing without increasing cost or abandoning penetration testing.
Speakers
avatar for Ramya M

Ramya M

Application Analyst, Okta, Inc,

Ramya M is a cybersecurity professional, currently working at Okta, Inc., specializing in application security, product security, identity security, and secure SDLC automation. She has led enterprise-scale initiatives across secure coding, DevSecOps hardening, vulnerability triage... Read More →
Friday June 26, 2026 1:15pm - 2:00pm CEST
Hall G2 (Level -2)

1:15pm CEST

Finding strange things in binaries (Workshop)
Friday June 26, 2026 1:15pm - 3:00pm CEST
OWASP Demo Lab - Hands-On Workshop / Small Group Session
Zone 1

Internal development teams and external suppliers love producing binaries for ease of deployment and distribution. Binary formats, however, make security analysis and compliance more complex for the security and OSPO teams. The good news is that the team behind OWASP dep-scan maintains a couple of binary analysis tools (OWASP blint and OWASP dosai). We show how these two tools can help defenders find strange things in binaries and help with your software transparency journey.

The session will be technical showcasing blint and dosai to analyse complex binaries to identify capabilities, risks, and threats. Users can walk away with new knowledge about modern techniques related to binary SBOM generation, Source line to Assembly instruction mapping, security capabilities analysis, and more.

https://github.com/owasp-dep-scan/blint
https://github.com/owasp-dep-scan/dosai
Speakers
avatar for Prabhu Subramanian

Prabhu Subramanian

Founder at AppThreat, Distinguished security expert and active contributor to the open-source security community
Prabhu Subramanian is a distinguished security expert and active contributor to the open-source security community. Prabhu is the author and OWASP Leader behind projects such as OWASP CycloneDX Generator (cdxgen) and OWASP depscan. He specializes in Supply Chain Security and offers... Read More →
Friday June 26, 2026 1:15pm - 3:00pm CEST
Room -2.33 (Level -2)

1:45pm CEST

Cloud Native Web Application Firewalls - How OWASP Coraza is coming to Kubernetes world
Friday June 26, 2026 1:45pm - 2:15pm CEST
Kubernetes features are moving fast, and its networking layer is constantly adapting for all new kinds of workloads. However we still lack a basic but essential feature: a way to filter and protect incoming web traffic.

The Gateway API is the natural place to add security, and many enterprises mandate such a thing. In this session, we introduce a new project that connects OWASP Coraza WAF directly with Kubernetes.

Join us to learn more on how Coraza Kubernetes Operator is proposing to bring the well known CoreRuleSet (CRS) filtering approach to Kubernetes, on a structured way, allowing cluster and gateway admins to provide traffic filtering on Gateway API and lift the security features to another level.
Speakers
avatar for Jose Carlos Chávez

Jose Carlos Chávez

Security Software Engineer, Okta
José Carlos Chávez is a Security Software Engineer at Okta, an OWASP Coraza co-leader and a Mathematics student at the University of Barcelona. He enjoys working in Security, compiling to WASM, designing APIs and building distributed systems. While not working with code, you can... Read More →
avatar for Ricardo Katz

Ricardo Katz

Software Engineer, Red Hat
Engineer on OpenShift Ingress, Gateway API & DNS area at Red Hat. Kubernetes Gateway API maintainer, working across different areas. Likes Legos, Planes, Traveling and Infrastructure-related development
Friday June 26, 2026 1:45pm - 2:15pm CEST
Room -2.82 (Level 2)

2:15pm CEST

Marketplace Takeover: One Bug Away from Pwning 10 Million Developer Machines
Friday June 26, 2026 2:15pm - 3:00pm CEST
This is the story of a single CI bug with the potential of compromising more than 10 million workstations - with a full takeover - for anyone using popular tools like Cursor and Windsurf (so every developer, really).

Learn about a critical flaw - that will be shared by the team who first identified it - in [open-vsx.org](http://open-vsx.org/), the open-source marketplace powering nearly every VSCode fork, including Cursor, Windsurf, Gitpod, StackBlitz, and Google Cloud Shell Editor.

The vulnerability sat in the project's GitHub Actions workflow, which automatically builds and publishes extensions using a privileged service token. By triggering the workflow with a crafted dependency, an attacker could run arbitrary code during npm install, exfiltrate the marketplace's OVSX_PAT token, and use it to overwrite or republish any extension in the registry. From there, the blast radius is absolute and devastating.
Any developer using a VSCode fork that auto-updates extensions would receive malicious payloads without interaction — compromising local machines, CI/CD environments, and downstream software.

This session breaks down the exploit path, the disclosure timeline, and the architectural weaknesses that made it possible. It highlights the systemic risk of ungoverned extension ecosystems and how "app store" mechanics in developer tooling have quietly become high-value attack surfaces.

But don't panic. We'll wrap with concrete mitigations like: isolating build runners from publishing credentials, auditing workflow environments for untrusted dependency execution, and implementing continuous marketplace governance to prevent similar full-ecosystem takeovers.
Speakers
avatar for Oren Yomtov

Oren Yomtov

Principal Security Researcher, Koi Security

Oren Yomtov is a Principal Security Researcher at Koi, where he focuses on advancing research in software and blockchain security. He brings extensive experience from his work at Fireblocks, contributing to research on digital asset security and blockchain infrastructure.

Previous... Read More →
avatar for Yuval Ronen

Yuval Ronen

Security Researcher, Koi Security

Yuval Ronen leads the security research at Koi, focusing on vulnerability research, threat intelligence, and developing detection methods to strengthen defenses across modern software ecosystems. He brings over seven years of experience in both offensive and defensive cybersecurity... Read More →
Friday June 26, 2026 2:15pm - 3:00pm CEST
Hall K1 (Level -2)

2:15pm CEST

Teaching AI Agents Like Guide Dogs: A Progressive Trust Framework
Friday June 26, 2026 2:15pm - 3:00pm CEST
Your AI agent has access to your database, your APIs, and your users' data. But would you give a new hire admin credentials on day one? We do this with AI agents constantly - deploying them with full system access before they've proven they won't hallucinate a DROP TABLE or leak sensitive data to a prompt injection attack.

Guide dog training programs solved this problem decades ago. They take untested puppies and transform them into autonomous agents trusted to make life-or-death decisions - through a systematic process of graduated trust. A guide dog doesn't get to navigate traffic until it's mastered basic commands. It doesn't work unsupervised until it's proven reliable across thousands of scenarios. And critically, it's trained in "intelligent disobedience" - knowing when to refuse a direct command because following it would cause harm.

In this talk, I'll introduce the Progressive Trust Framework - a practical approach to AI agent deployment inspired by 90+ years of service animal training. You'll learn how to implement graduated permission systems where agents earn expanded access through demonstrated reliability. We'll explore the "3 D's" testing methodology (Distance, Duration, Distraction) for validating agent behaviour before promotion. And we'll tackle the hardest problem: training agents that refuse harmful requests without becoming unhelpfully paranoid.

Whether you're building autonomous coding assistants, customer service bots, or internal automation tools, you'll leave with concrete patterns for deploying AI agents that earn trust instead of demanding it. Because the question isn't whether your AI agent will make mistakes - it's whether you've built the guardrails to catch them before they hit production.
Speakers
BD

Bodhisattva Das

Security Engineer, RUDRA Cybersecurity

Bodhisattva Das is a Security Engineer at Rudra Cybersecurity, focused on securing non-human identities, AI agents, and automated workloads across cloud environments. He specialises in open-source threat detection using Wazuh, and builds practical solutions for identity governance... Read More →
Friday June 26, 2026 2:15pm - 3:00pm CEST
Hall D (Level -2)

2:15pm CEST

Using CTFs as a Community of Practice Content Machine
Friday June 26, 2026 2:15pm - 3:00pm CEST
This session highlights our 6-year journey of building and sustaining a Security Community of Practice (CoP) from the ground up. We shifted from a project-centric organization with detailed, mandatory quality gates to an Agile model. This challenged us to scale and approach our self-reliant tribes in a new way. We will share which concepts worked and which were scrapped after initial trials. Additionally, we will deep dive into how we used CTFs for continuous content creation usingself developed and readily available challenges. We evolved from a manual "mail-in your solutions" approach to leveraging platforms like OWASP Juice Shop and OWASP UnCrackable Apps, creating a consistent content source and an engaging game experience for all our Security Champions.
Speakers
avatar for Marco Macala

Marco Macala

Senior Security Manager, Raiffeisen Bank International AG
Marco Macala has spent the last eight years bridging the gap between complex financial regulations and Agile product delivery. He specializes in translating rigid security requirements into actionable, realistic goals for development teams. Together with his two colleagues Florian... Read More →
avatar for Florian Schier

Florian Schier

Security Manager, RBI

Florian focuses on the human side of security, acting as an enabler for teams rather than a traditional gatekeeper. He specializes in translating dense security requirements into practical, day-to-day wins that actually work in an Agile environment.

He is dedicated to building a security collective that breaks down silos and makes cybersecurity accessible to everyone. When he isn't helping teams strengthen their security posture, he’s focused on fostering collaborative environments where security and DevOps actually speak the... Read More →
avatar for Christian Buchinger

Christian Buchinger

Senior Security Manager

Christian collects real accomplishments, strong coffee, and an irrational hatred for the words “delivery,” “dedication,” and “great team” used as emotional support for mediocrity.

- Job: Senior Security Manager in a large European banking group
- Role: Professional doer... Read More →
Friday June 26, 2026 2:15pm - 3:00pm CEST
Hall K2 (Level -2)

2:15pm CEST

Trust No History: Why Every "Remembered" Interaction is a Potential Backdoor
Friday June 26, 2026 2:15pm - 3:00pm CEST
As AI transitions from stateless tools to autonomous agents, the context window has become the primary attack surface. By giving agents the ability to remember, summarize, and collaborate, we have created a machine that can be gaslit. This session moves beyond transient prompt injections into the realm of persistent memory corruption. We explore how an adversary can rewrite an agent’s history, bias its knowledge base, and plant sleeper instructions that trigger long after the initial interaction. We will dissect the systematic subversion of the agentic memory stack and demonstrate why developers must stop treating agent memory as a passive data store and start defending it as the engine of the agent’s survival
Speakers
avatar for Rico Komenda

Rico Komenda

Senior Security Consultant

Rico is a senior product security engineer. His main security areas are in application security, cloud security, offensive security and AI security.

For him, general security intelligence in various aspects is a top priority. Today’s security world is constantly changing and you... Read More →
avatar for Barno Kaharova

Barno Kaharova

Senior Consultant, AI Security Expert, adesso SE

Barno is a expert specializing in data engineering, data modeling, and machine learning security. Driven by a passion for innovation, she develops cutting-edge methodologies to protect AI systems from adversarial threats, pushing the boundaries of what’s possible in AI security... Read More →
Friday June 26, 2026 2:15pm - 3:00pm CEST
Hall G2 (Level -2)

3:15pm CEST

Hack Your Own Dockerfiles (Before Someone Else Does): Hands-On Container Security with OWASP DockSec (Workshop)
Friday June 26, 2026 3:15pm - 4:15pm CEST
Most teams don’t have a "container security problem." They have a "Dockerfile hygiene" problem that quietly becomes a supply chain problem. Dockerfiles are often treated as simple build instructions, but in practice they introduce real security risk. Even teams with mature AppSec programs regularly ship Dockerfiles that run as root, rely on untrusted base images, or hide supply-chain risks inside multi-stage builds. Scanners catch many of these issues, yet the same mistakes keep showing up.

In this talk I will share lessons learned from building and using DockSec, an open-source Dockerfile security analysis tool adopted by OWASP, in real development pipelines. The focus is not on introducing a new scanner, but on understanding why Dockerfile issues persist and what actually helps developers fix them.

Using real examples from production pipelines, I’ll walk through common Dockerfile patterns that lead to security problems and explain how those risks translate into real attack paths. I’ll also discuss what worked, and what didn’t, when trying to integrate Dockerfile security checks into CI/CD without slowing teams down or turning security into a constant blocker. I will also cover what "good" looks like in CI: turning findings into developer-friendly feedback, using policy gates sparingly (and correctly), and keeping scan noise under control.

This is not a product demo or a sales talk. It’s a practical discussion about Dockerfile security, developer behavior, and how AppSec teams can reduce repeat mistakes using clearer feedback, better explanations, and OWASP-aligned guidance. Attendees should leave with concrete ideas they can apply immediately, even if they never use DockSec.
Speakers
avatar for Advait Patel

Advait Patel

Senior Site Reliability Engineer, Broadcom
Advait Patel is a Senior Site Reliability Engineer at Broadcom and the creator of DockSec, an open-source, AI-powered Docker security analyzer. With over 8+ years of experience in cloud-native security, DevSecOps, and secure software supply chains, he is passionate about building... Read More →
Friday June 26, 2026 3:15pm - 4:15pm CEST
Room -2.33 (Level -2)

3:30pm CEST

From Safety to Policy: Enforcing Organizational Rules in LLMs and AI Agents
Friday June 26, 2026 3:30pm - 4:15pm CEST
Organizations deploying GenAI systems quickly discover that safety controls do not automatically enforce organizational policies. Real environments operate under large and evolving sets of domains, organization-specific and external policies driven by legal requirements, industry regulations, and internal governance rules, and they change periodically. Enforcing these rules in production is not a one-time setup problem; it is a continuous governance and operations challenge.

Existing guardrail solutions are not designed to handle custom, large-scale, and continuously evolving organizational policies. When AI agent developers or AI security teams attempt to stretch these safety-oriented systems into general policy enforcement, their underlying design assumptions no longer hold because they assume a small, static policy space rather than a broad and heterogeneous one. Static rules such as regex become unmaintainable and produce unreliable detection at scale, fine-tuned classifiers require constant retraining, and LLM-as-a-judge pipelines, even when carefully calibrated, are expensive to run, introduce non-trivial latency and are difficult to audit.

This talk describes how we stress-tested existing compliance approaches, including static guardrails, fine-tuned detectors, and LLM-as-a-judge pipelines, and analyzed how they degrade under realistic policy complexity.
We present a reframing of the problem: instead of relying solely on output-level judgments, policy violations can also be detected directly in the model’s internal space with a training-free approach. We explain what this shift enables in practice, including continuous compliance monitoring, policy updates without retraining loops, and improved auditability. We also discuss the limitations of this advanced approach.

We also address a deeper conceptual issue that emerged from our error analysis: in practice, the boundary between “policies” and “instructions” is often unclear, and treating instructions as if they were policies leads to confusing and brittle failure modes. Today, both alignment boundaries and performance or business objectives are commonly expressed using the same mechanism—rules or instructions—blurring fundamentally different concerns under a single notion of “policy.” This separation is critical: some instructions define organizational and alignment constraints, while others encode task goals and performance requirements. Conflating these concepts results in misaligned controls, as they require different enforcement strategies and, in many cases, different ownership and roles within the organization.

The goal of this talk is to provide AppSec and GRC teams with a clearer mental model for operating LLM policy compliance in production, a checklist of questions to ask about existing guardrail solutions, and a better understanding of what it actually takes to keep LLM systems compliant over time.
Speakers
avatar for Oren Rachmil

Oren Rachmil

Senior AI Researcher,, Fujitsu Research of Europe

Oren Rachmil is a Senior AI Researcher at Fujitsu Research of Europe, working on the safety, evaluation, and security of large language model systems. His recent research focuses on analyzing gaps in open-source LLM vulnerability scanners, understanding evaluator reliability, and... Read More →
Friday June 26, 2026 3:30pm - 4:15pm CEST
Hall K1 (Level -2)

3:30pm CEST

The TPM and You - How (and why) to actually make use of your TPM
Friday June 26, 2026 3:30pm - 4:15pm CEST
There is a common saying that "every problem in cryptography can be reduced to key management problem". OWASP's Cheat Sheet series even has a whole document dedicated to "Cryptographic Storage". What if we could make life easier for us in this area?

TPMs (Trusted Platform Modules) have been a fixed part of every standard PC for many years, providing all users with a "free" hardware that can be used for all kinds of cryptography.
They are already widely in use by most operating systems and firmwares, but haven't really found usage for userspace applications yet.

This talk elaborates why this is the case and how to change this fact. We are going to discuss the capabilities of a TPM and demonstrate them live with a sample application, a TOTP client which stores its secrets securely.
Speakers
avatar for Mathias Tausig

Mathias Tausig

Senior Security Consultant, SBA Research

* Graduated in mathematics
* Holistic perspective on computers: former developer, sysadmin, security officer, university teacher and even computer salesman
* Now a security consultant specializing in application security
* Open source lover
* Chapter Lead from OWASP Vienna    sba-... Read More →
Friday June 26, 2026 3:30pm - 4:15pm CEST
Hall G1 (Level -2)

3:30pm CEST

Insecurity as Code: How Modern Software Scaled the Attack Surface
Friday June 26, 2026 3:30pm - 4:15pm CEST
Drawing on large-scale telemetry from real-world production environments, this talk examines what modern application and supply-chain security actually look like in 2025–2026. The data paints a clear picture: many organizations ship vulnerable dependencies, exposed secrets remain surprisingly common, infrastructure logging is frequently incomplete, and malicious packages can reach production environments.

We’ll connect these observations to recent supply-chain incidents, from SolarWinds to self-replicating npm worms, and explore why vulnerabilities often persist long after disclosure. More importantly, we’ll discuss which security controls measurably reduce risk in practice, and which tend to generate noise without improving outcomes.

This talk focuses on the gap between defensive effort and attacker leverage - where defenders lose time, and where attackers gain scale.
Speakers
avatar for Igor Stepansky

Igor Stepansky

Security Researcher, Orca Security

I'm Igor Stepansky, a Security Researcher at Orca Security specializing in the AppSec domain. I bring a strong and diverse background in cybersecurity, with hands-on experience in integrating security solutions such as SAST, IaC scanning, SCA, secrets detection, and malicious package... Read More →
Friday June 26, 2026 3:30pm - 4:15pm CEST
Hall K2 (Level -2)
 
Share Modal

Share this link via

Or copy link

Filter sessions
Apply filters to sessions.