Audience: Intermediate
Friday, June 26
 

10:30am CEST

Your Localhost Is Lying to You: Trust Boundary Failures in Enterprise SSO
Friday June 26, 2026 10:30am - 11:15am CEST
When an attacker lands on a user’s machine, your SSO should not hand them the keys to your network. Yet many enterprise systems do because they assume localhost subdomains are safe. They are not.

This talk shows how a common DNS misconfiguration (localhost.target.com → 127.0.0.1), combined with domain-wide cookies (Domain=.target.com), allows a locally executed request context to inherit an authenticated session. No XSS. No phishing. Just browser-native behavior.
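The cookie-scoping behavior described above can be checked directly with Python's standard library: `http.cookiejar.domain_match` implements the RFC 2965 domain-matching rules that browsers approximate. A minimal sketch (`target.com` is the abstract's placeholder domain, not a real deployment):

```python
from http.cookiejar import domain_match

# A cookie set with Domain=.target.com domain-matches every subdomain,
# including one that a DNS entry like localhost.target.com -> 127.0.0.1
# points at a locally running process. The browser will attach the
# authenticated session cookie to requests that process can answer.
print(domain_match("localhost.target.com", ".target.com"))  # True
print(domain_match("evil.example", ".target.com"))          # False
```

This is only the matching rule; the talk's point is that the combination of this browser-native behavior with the DNS entry crosses a trust boundary.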

This flaw is rarely detected by scanners or standard penetration tests, yet it appears in real enterprise deployments today. The session presents a practical testing methodology, a defensive checklist, and research-based validation techniques to assess this class of trust boundary failure safely.

Attendees will leave able to identify and fix this issue in their own SSO deployments next week.
Speakers
Rupesh Kumar

Application Security Researcher | Red Team Practitioner

Rupesh Kumar is an offensive security researcher with 1.5 years of experience in web application testing, vulnerability research, and red team operations. He has reported critical and high-severity vulnerabilities to organizations across government, defense, healthcare, and critical...
Hall G2 (Level -2)

11:30am CEST

Infrastructure Doesn’t Lie: Using Infrastructure Signals to Detect Shadow AI Built Applications
Friday June 26, 2026 11:30am - 12:15pm CEST
AI app builders now enable production apps to ship without repositories, CI/CD, or security review, often by non-traditional developers outside established engineering workflows. These Shadow AI apps bypass AppSec pipelines and governance, creating a growing blind spot in enterprise environments. This talk demonstrates how DNS, TLS, and hosting signals can detect shadow AI apps that existing controls miss.
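As a flavor of the kind of infrastructure signal the abstract refers to, a detection pipeline might match DNS CNAME targets against hosting platforms favored by AI app builders. A minimal sketch, with a purely hypothetical suffix list (real detection would combine DNS, TLS, and hosting telemetry):

```python
# Hypothetical sketch: flag hostnames whose DNS CNAME points at a hosting
# platform commonly used by AI app builders. The suffix list below is
# illustrative only, not an actual detection feed.
BUILDER_HOST_SUFFIXES = (".vercel.app", ".pages.dev", ".web.app")

def looks_builder_hosted(cname: str) -> bool:
    """Return True if the CNAME target ends with a known builder suffix."""
    return cname.rstrip(".").lower().endswith(BUILDER_HOST_SUFFIXES)

print(looks_builder_hosted("internal-tool.vercel.app."))  # True
print(looks_builder_hosted("www.example.com"))            # False
```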
Speakers
Balachandra Shanabhag

Product Security Lead, Cerebras

Bala works as a Staff Security Engineer for Cohesity and has over 15 years of experience in various domains of cybersecurity. He joined Cohesity as its founding Product Security Engineer and helped bootstrap AppSec and other security initiatives. Before Cohesity, Bala worked at...
Hall K1 (Level -2)

11:30am CEST

Phishing for Passkeys - An Analysis of WebAuthn and CTAP
Friday June 26, 2026 11:30am - 12:15pm CEST
WebAuthn was supposed to replace passwords on the web: uniform, secure, manageable authentication for everyone! One of its unique selling points was supposed to be the impossibility of phishing attacks. When Passkeys were introduced, some of WebAuthn's security principles were watered down in order to achieve usability improvements and thus reach more widespread adoption.

This presentation discusses the security of Passkeys against phishing attacks. It explains the possibilities for an attacker to gain access to accounts secured with Passkeys using spear phishing, and what conditions must be met for this to happen. It also practically demonstrates such an attack and discusses countermeasures.

Participants will learn which WebAuthn security principles still apply to Passkeys and which do not, why Passkeys are no longer completely phishing-proof, and how to evaluate this trade-off for their own use of Passkeys.
Speakers
Michael Kuckuk

Fullstack Developer, inovex

As a fullstack software developer, Michael's main expertise lies in simple software development. But since he is well aware that the happy path is the easy part, he has always had an interest in security and has always been very security- and privacy-aware in his work. He enjoys developing...
Hall D (Level -2)

11:30am CEST

Effort is All You Need: Testing LLM Applications in the Real World
Friday June 26, 2026 11:30am - 12:15pm CEST
Security testing of GenAI systems is often reduced to "LLM red teaming": probing a model in isolation to see what unsafe/offensive content it will generate. In practice, this approach falls short. As security practitioners, we need to assess complete LLM application use cases, focusing on how inputs and outputs propagate through application logic and enable concrete security risks such as data exfiltration, cross-site scripting, and authorization bypass.

In this talk, we share practical experience and supporting open-source tooling we developed for assessing LLM applications. These focus on testing systems where the LLM is embedded in application logic rather than exposed as a simple inference endpoint.

The talk covers approaches for testing non-conversational GenAI workflows, WebSockets, and custom APIs; building scoped prompt injection datasets aligned with application logic and engagement constraints; applying effort-based jailbreak techniques (e.g. anti-spotlighting, best-of-n, crescendo) to evaluate guardrail robustness and demonstrate practical bypasses; and conducting meaningful testing in isolated or air-gapped environments.
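One of the listed techniques, best-of-n, amounts to retrying many perturbed variants of a payload and keeping the first that slips past a guardrail. A language-agnostic sketch under stated assumptions: the function names and the trivial perturbation are illustrative, not the speakers' actual tooling.

```python
import random

def best_of_n(base_prompt, send_prompt, judge, n=10, seed=0):
    """Try up to n randomly perturbed variants of base_prompt; return the
    first (variant, reply) pair the judge flags as a guardrail bypass,
    or None if all n attempts are blocked."""
    rng = random.Random(seed)
    for _ in range(n):
        # Trivial perturbation for illustration: flip the case of one word.
        # Real implementations use much richer augmentations.
        words = base_prompt.split()
        i = rng.randrange(len(words))
        words[i] = words[i].swapcase()
        variant = " ".join(words)
        reply = send_prompt(variant)
        if judge(reply):
            return variant, reply
    return None
```

In an engagement, `send_prompt` would drive the real application endpoint (conversational or not) and `judge` would encode the success criterion agreed in scope.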

Speakers
Donato Capitella

Principal Security Consultant, Reversec

Donato Capitella is a Software Engineer and Principal Security Consultant at Reversec, with over 15 years of experience in offensive security and software engineering. Donato spent the past 3 years conducting research and assessments on Generative AI applications, covering topics...
Thomas Cross

Security Consultant, Reversec

Hall G2 (Level -2)

1:15pm CEST

The OG OWASP Top 10 Might Be Back Thanks to Agentic Browsers
Friday June 26, 2026 1:15pm - 2:00pm CEST
Agentic browsers are quickly becoming one of the most powerful—yet dangerous—applications of agentic AI. By combining web navigation, content interpretation, and direct action taking, they act as a universal gateway to almost any service or application on the internet.

That power quietly reintroduces web security risks many teams assumed were behind us. Agentic browsers read and react to untrusted web content, follow instructions embedded in pages, images, and hidden text, and then execute actions inside real sessions.

The result is that classic web attack patterns made popular 20+ years ago when the first OWASP Top 10 was introduced may be back.

Things like injection manipulations, cross-site scripting payload delivery, CSRF-style action abuse, broken access control, and cross-origin boundary failures—now executed by autonomous agents instead of users.

This talk examines why current agentic browser designs break core web security assumptions around origins, cookies, and session boundaries, and why common mitigations such as human-in-the-loop controls introduce friction and fatigue without solving the underlying problem. We'll argue that unrestricted multi-site agents are fundamentally unsafe, and share better approaches based on domain-scoped agents, strict isolation, and secure multi-agent orchestration.
Speakers
Lidan Hazout

CTO and Co-Founder, Capsule Security

Lidan has been programming since childhood, driven by a deep passion for data and AI. He previously served as VP of R&D at SecuredTouch, where he helped pioneer behavioral biometrics. Following the company’s acquisition by Ping Identity, the technology he led became a core component...
Bar Kaduri

Head of Research, Capsule Security

Bar Kaduri is a cybersecurity researcher, leader, and international speaker with over 14 years of experience in cloud security, software supply-chain risk, and emerging AI threats. With hands-on expertise in evaluating and stress-testing AI systems, Bar focuses on building practical...
Hall G1 (Level -2)

1:15pm CEST

AI-Generated Code vs Human Code: Who Really Writes More Vulnerabilities?
Friday June 26, 2026 1:15pm - 2:00pm CEST
When AI coding tools entered mainstream development, the application security community reacted fast and loudly. Many warned that AI would dramatically increase vulnerabilities. The most common argument was simple and intuitive. AI models were trained on vast amounts of real-world code, including insecure and vulnerable code. Garbage in, garbage out. If AI learned from vulnerable code, it would inevitably reproduce those vulnerabilities at scale.

This claim quickly became accepted wisdom, despite the fact that almost no one could actually prove it.

This session presents a data-driven examination of that assumption. By correlating reported security vulnerabilities with automated line-level code attribution, we were able to determine whether a vulnerability originated in AI-generated code or human-written code. This allowed us to move the discussion from fear and intuition to measurable evidence.

The results are more nuanced and more interesting than the prevailing narrative suggests. In some scenarios, AI-generated code showed higher vulnerability density. In others, it performed comparably to, or even better than, human-written code. The differences are not accidental. They correlate strongly with the model used, the tooling, and how developers interact with AI, rather than AI usage alone.

This talk challenges the notion that AI coding is inherently insecure. It replaces the garbage-in, garbage-out argument with concrete data, identifies where the real risks actually emerge, and explains what this means for modern AppSec strategy. Attendees will leave with evidence they can use to recalibrate policies, controls, and conversations around AI-assisted development, without slowing teams down or relying on assumptions.
Speakers
Eitan Worcel

CEO & Co-Founder, Mobb

Eitan Worcel is the co-founder and CEO of Mobb. He has close to 20 years of experience in application security, spanning hands-on software development, product leadership, and executive roles. Throughout his career, Eitan has worked closely with engineering and security teams to understand...
Hall D (Level -2)

1:15pm CEST

What Our Pen Tests Never Found — And How Attackers Did
Friday June 26, 2026 1:15pm - 2:00pm CEST
Penetration testing is a crucial part of application security practices, yet attackers often succeed in ways no test ever reported. No injection, no memory corruption, no failed authentication. The applications behaved exactly as designed — and that was enough.

In this talk, we will explore what penetration testing is intended to detect and how attackers actually compromise systems. We will address why well-scoped penetration tests frequently reveal "no critical findings" while attackers later leverage legitimate workflows, permission assumptions, and trust boundaries to cause serious harm.

Based on real-world examples and post-incident analysis, this talk will walk through security issues that were frequently overlooked during testing, not because testers lacked skill, but because the testing process made assumptions that attackers did not follow. We will focus on the blind spots in the penetration testing process, including behaviors that only appear in production, cross-feature chaining, abuse of business logic, and trust assumptions built into system architecture.

The objective of this talk is not to replace penetration testing but to understand where it ends and how defenders might adapt their testing tactics accordingly. We will break down the classes of issues pen tests routinely miss, how attackers discover them post-deployment, and what changed when testing strategies shifted from endpoint coverage to adversary-aware validation.

Attendees will leave with practical techniques to evolve their AppSec testing without increasing cost or abandoning penetration testing.
Speakers
Ramya M

Application Analyst, Okta, Inc.

Ramya M is a cybersecurity professional, currently working at Okta, Inc., specializing in application security, product security, identity security, and secure SDLC automation. She has led enterprise-scale initiatives across secure coding, DevSecOps hardening, vulnerability triage...
Hall G2 (Level -2)

1:15pm CEST

Finding strange things in binaries (Workshop)
Friday June 26, 2026 1:15pm - 3:00pm CEST
OWASP Demo Lab - Hands-On Workshop / Small Group Session
Zone 1

Internal development teams and external suppliers love producing binaries for ease of deployment and distribution. Binary formats, however, make security analysis and compliance more complex for the security and OSPO teams. The good news is that the team behind OWASP dep-scan maintains a couple of binary analysis tools (OWASP blint and OWASP dosai). We show how these two tools can help defenders find strange things in binaries and help with your software transparency journey.

The session will be technical, showcasing blint and dosai to analyse complex binaries and identify capabilities, risks, and threats. Attendees will walk away with new knowledge about modern techniques for binary SBOM generation, source-line-to-assembly-instruction mapping, security capabilities analysis, and more.

https://github.com/owasp-dep-scan/blint
https://github.com/owasp-dep-scan/dosai
Speakers
Prabhu Subramanian

Founder at AppThreat
Prabhu Subramanian is a distinguished security expert and active contributor to the open-source security community. Prabhu is the author and OWASP Leader behind projects such as OWASP CycloneDX Generator (cdxgen) and OWASP depscan. He specializes in Supply Chain Security and offers...
Room -2.33 (Level -2)

1:45pm CEST

Cloud Native Web Application Firewalls - How OWASP Coraza Is Coming to the Kubernetes World
Friday June 26, 2026 1:45pm - 2:15pm CEST
Kubernetes features are moving fast, and its networking layer is constantly adapting to new kinds of workloads. However, we still lack a basic but essential feature: a way to filter and protect incoming web traffic.

The Gateway API is the natural place to add security, and many enterprises mandate such a thing. In this session, we introduce a new project that connects OWASP Coraza WAF directly with Kubernetes.

Join us to learn how the Coraza Kubernetes Operator proposes to bring the well-known Core Rule Set (CRS) filtering approach to Kubernetes in a structured way, allowing cluster and gateway admins to provide traffic filtering on the Gateway API and lift security to another level.
Speakers
Jose Carlos Chávez

Security Software Engineer, Okta
José Carlos Chávez is a Security Software Engineer at Okta, an OWASP Coraza co-leader and a Mathematics student at the University of Barcelona. He enjoys working in Security, compiling to WASM, designing APIs and building distributed systems. While not working with code, you can...
Ricardo Katz

Software Engineer, Red Hat
Engineer on OpenShift Ingress, Gateway API & DNS area at Red Hat. Kubernetes Gateway API maintainer, working across different areas. Likes Legos, Planes, Traveling and Infrastructure-related development
Room -2.82 (Level -2)

2:15pm CEST

Marketplace Takeover: One Bug Away from Pwning 10 Million Developer Machines
Friday June 26, 2026 2:15pm - 3:00pm CEST
This is the story of a single CI bug with the potential to fully compromise more than 10 million workstations for anyone using popular tools like Cursor and Windsurf (so every developer, really).

Learn about a critical flaw - shared by the team who first identified it - in open-vsx.org, the open-source marketplace powering nearly every VSCode fork, including Cursor, Windsurf, Gitpod, StackBlitz, and Google Cloud Shell Editor.

The vulnerability sat in the project's GitHub Actions workflow, which automatically builds and publishes extensions using a privileged service token. By triggering the workflow with a crafted dependency, an attacker could run arbitrary code during npm install, exfiltrate the marketplace's OVSX_PAT token, and use it to overwrite or republish any extension in the registry. From there, the blast radius is absolute and devastating.
Any developer using a VSCode fork that auto-updates extensions would receive malicious payloads without interaction — compromising local machines, CI/CD environments, and downstream software.
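The code-execution primitive described above relies on standard npm lifecycle scripts: any dependency can declare one, and npm runs it automatically at install time. A deliberately harmless illustration (the package name and command are hypothetical, not the actual payload):

```json
{
  "name": "innocent-looking-dependency",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node -e \"console.log('runs automatically during npm install, with access to process.env')\""
  }
}
```

In the attack described, the equivalent script would read credentials such as the workflow's `OVSX_PAT` from the build environment instead of printing a message, which is why isolating build runners from publishing credentials is listed among the mitigations.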

This session breaks down the exploit path, the disclosure timeline, and the architectural weaknesses that made it possible. It highlights the systemic risk of ungoverned extension ecosystems and how "app store" mechanics in developer tooling have quietly become high-value attack surfaces.

But don't panic. We'll wrap with concrete mitigations: isolating build runners from publishing credentials, auditing workflow environments for untrusted dependency execution, and implementing continuous marketplace governance to prevent similar full-ecosystem takeovers.
Speakers
Oren Yomtov

Principal Security Researcher, Koi Security

Oren Yomtov is a Principal Security Researcher at Koi, where he focuses on advancing research in software and blockchain security. He brings extensive experience from his work at Fireblocks, contributing to research on digital asset security and blockchain infrastructure.

Previous...
Yuval Ronen

Security Researcher, Koi Security

Yuval Ronen leads the security research at Koi, focusing on vulnerability research, threat intelligence, and developing detection methods to strengthen defenses across modern software ecosystems. He brings over seven years of experience in both offensive and defensive cybersecurity...
Hall K1 (Level -2)

2:15pm CEST

Teaching AI Agents Like Guide Dogs: A Progressive Trust Framework
Friday June 26, 2026 2:15pm - 3:00pm CEST
Your AI agent has access to your database, your APIs, and your users' data. But would you give a new hire admin credentials on day one? We do this with AI agents constantly - deploying them with full system access before they've proven they won't hallucinate a DROP TABLE or leak sensitive data to a prompt injection attack.

Guide dog training programs solved this problem decades ago. They take untested puppies and transform them into autonomous agents trusted to make life-or-death decisions - through a systematic process of graduated trust. A guide dog doesn't get to navigate traffic until it's mastered basic commands. It doesn't work unsupervised until it's proven reliable across thousands of scenarios. And critically, it's trained in "intelligent disobedience" - knowing when to refuse a direct command because following it would cause harm.

In this talk, I'll introduce the Progressive Trust Framework - a practical approach to AI agent deployment inspired by 90+ years of service animal training. You'll learn how to implement graduated permission systems where agents earn expanded access through demonstrated reliability. We'll explore the "3 D's" testing methodology (Distance, Duration, Distraction) for validating agent behaviour before promotion. And we'll tackle the hardest problem: training agents that refuse harmful requests without becoming unhelpfully paranoid.

Whether you're building autonomous coding assistants, customer service bots, or internal automation tools, you'll leave with concrete patterns for deploying AI agents that earn trust instead of demanding it. Because the question isn't whether your AI agent will make mistakes - it's whether you've built the guardrails to catch them before they hit production.
Speakers
Bodhisattva Das

Security Engineer, RUDRA Cybersecurity

Bodhisattva Das is a Security Engineer at Rudra Cybersecurity, focused on securing non-human identities, AI agents, and automated workloads across cloud environments. He specialises in open-source threat detection using Wazuh, and builds practical solutions for identity governance...
Hall D (Level -2)

2:15pm CEST

Using CTFs as a Community of Practice Content Machine
Friday June 26, 2026 2:15pm - 3:00pm CEST
This session highlights our 6-year journey of building and sustaining a Security Community of Practice (CoP) from the ground up. We shifted from a project-centric organization with detailed, mandatory quality gates to an Agile model. This challenged us to scale and approach our self-reliant tribes in a new way. We will share which concepts worked and which were scrapped after initial trials. Additionally, we will deep dive into how we used CTFs for continuous content creation using self-developed and readily available challenges. We evolved from a manual "mail-in your solutions" approach to leveraging platforms like OWASP Juice Shop and OWASP UnCrackable Apps, creating a consistent content source and an engaging game experience for all our Security Champions.
Speakers
Marco Macala

Senior Security Manager, Raiffeisen Bank International AG
Marco Macala has spent the last eight years bridging the gap between complex financial regulations and Agile product delivery. He specializes in translating rigid security requirements into actionable, realistic goals for development teams. Together with his two colleagues Florian...
Florian Schier

Security Manager, RBI

Florian focuses on the human side of security, acting as an enabler for teams rather than a traditional gatekeeper. He specializes in translating dense security requirements into practical, day-to-day wins that actually work in an Agile environment.

He is dedicated to building a security collective that breaks down silos and makes cybersecurity accessible to everyone. When he isn't helping teams strengthen their security posture, he’s focused on fostering collaborative environments where security and DevOps actually speak the...
Christian Buchinger

Senior Security Manager

Christian collects real accomplishments, strong coffee, and an irrational hatred for the words “delivery,” “dedication,” and “great team” used as emotional support for mediocrity.

- Job: Senior Security Manager in a large European banking group
- Role: Professional doer...
Hall K2 (Level -2)

2:15pm CEST

Trust No History: Why Every "Remembered" Interaction is a Potential Backdoor
Friday June 26, 2026 2:15pm - 3:00pm CEST
As AI transitions from stateless tools to autonomous agents, the context window has become the primary attack surface. By giving agents the ability to remember, summarize, and collaborate, we have created a machine that can be gaslit. This session moves beyond transient prompt injections into the realm of persistent memory corruption. We explore how an adversary can rewrite an agent’s history, bias its knowledge base, and plant sleeper instructions that trigger long after the initial interaction. We will dissect the systematic subversion of the agentic memory stack and demonstrate why developers must stop treating agent memory as a passive data store and start defending it as the engine of the agent’s survival.
Speakers
Rico Komenda

Senior Security Consultant

Rico is a senior product security engineer. His main security areas are in application security, cloud security, offensive security and AI security.

For him, general security intelligence in various aspects is a top priority. Today’s security world is constantly changing and you...
Barno Kaharova

Senior Consultant, AI Security Expert, adesso SE

Barno is an expert specializing in data engineering, data modeling, and machine learning security. Driven by a passion for innovation, she develops cutting-edge methodologies to protect AI systems from adversarial threats, pushing the boundaries of what’s possible in AI security...
Hall G2 (Level -2)

3:15pm CEST

Hack Your Own Dockerfiles (Before Someone Else Does): Hands-On Container Security with OWASP DockSec (Workshop)
Friday June 26, 2026 3:15pm - 4:15pm CEST
Most teams don’t have a "container security problem." They have a "Dockerfile hygiene" problem that quietly becomes a supply chain problem. Dockerfiles are often treated as simple build instructions, but in practice they introduce real security risk. Even teams with mature AppSec programs regularly ship Dockerfiles that run as root, rely on untrusted base images, or hide supply-chain risks inside multi-stage builds. Scanners catch many of these issues, yet the same mistakes keep showing up.

In this talk I will share lessons learned from building and using DockSec, an open-source Dockerfile security analysis tool adopted by OWASP, in real development pipelines. The focus is not on introducing a new scanner, but on understanding why Dockerfile issues persist and what actually helps developers fix them.

Using real examples from production pipelines, I’ll walk through common Dockerfile patterns that lead to security problems and explain how those risks translate into real attack paths. I’ll also discuss what worked, and what didn’t, when trying to integrate Dockerfile security checks into CI/CD without slowing teams down or turning security into a constant blocker. I will also cover what "good" looks like in CI: turning findings into developer-friendly feedback, using policy gates sparingly (and correctly), and keeping scan noise under control.

This is not a product demo or a sales talk. It’s a practical discussion about Dockerfile security, developer behavior, and how AppSec teams can reduce repeat mistakes using clearer feedback, better explanations, and OWASP-aligned guidance. Attendees should leave with concrete ideas they can apply immediately, even if they never use DockSec.
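As a flavor of the class of problems the workshop targets (these are toy checks, not DockSec's actual rule set), two of the classic Dockerfile hygiene issues mentioned above, running as root and using unpinned base images, can be caught with trivial pattern checks:

```python
def lint_dockerfile(text: str) -> list[str]:
    """Toy Dockerfile checks, illustrative only: flag a missing USER
    instruction (container runs as root) and unpinned base images."""
    findings = []
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    if not any(l.upper().startswith("USER ") for l in lines):
        findings.append("no USER instruction: container runs as root")
    for l in lines:
        if l.upper().startswith("FROM ") and (":" not in l or l.endswith(":latest")):
            findings.append(f"unpinned base image: {l}")
    return findings

# Flags both the missing USER and the :latest base image.
print(lint_dockerfile("FROM python:latest\nRUN pip install flask"))
```

Real tools go far beyond this (multi-stage builds, secrets in layers, untrusted registries), but the point stands: many repeat mistakes are mechanically detectable, which is why the feedback loop matters more than the scanner.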
Speakers
Advait Patel

Senior Site Reliability Engineer, Broadcom
Advait Patel is a Senior Site Reliability Engineer at Broadcom and the creator of DockSec, an open-source, AI-powered Docker security analyzer. With over 8 years of experience in cloud-native security, DevSecOps, and secure software supply chains, he is passionate about building...
Room -2.33 (Level -2)

3:30pm CEST

From Safety to Policy: Enforcing Organizational Rules in LLMs and AI Agents
Friday June 26, 2026 3:30pm - 4:15pm CEST
Organizations deploying GenAI systems quickly discover that safety controls do not automatically enforce organizational policies. Real environments operate under large and evolving sets of domain-specific, organization-specific, and external policies driven by legal requirements, industry regulations, and internal governance rules, and these policies change periodically. Enforcing them in production is not a one-time setup problem; it is a continuous governance and operations challenge.

Existing guardrail solutions are not designed to handle custom, large-scale, and continuously evolving organizational policies. When AI agent developers or AI security teams attempt to stretch these safety-oriented systems into general policy enforcement, their underlying design assumptions no longer hold because they assume a small, static policy space rather than a broad and heterogeneous one. Static rules such as regex become unmaintainable and produce unreliable detection at scale, fine-tuned classifiers require constant retraining, and LLM-as-a-judge pipelines, even when carefully calibrated, are expensive to run, introduce non-trivial latency and are difficult to audit.

This talk describes how we stress-tested existing compliance approaches, including static guardrails, fine-tuned detectors, and LLM-as-a-judge pipelines, and analyzed how they degrade under realistic policy complexity.
We present a reframing of the problem: instead of relying solely on output-level judgments, policy violations can also be detected directly in the model’s internal space with a training-free approach. We explain what this shift enables in practice, including continuous compliance monitoring, policy updates without retraining loops, and improved auditability. We also discuss the limitations of this advanced approach.

We also address a deeper conceptual issue that emerged from our error analysis: in practice, the boundary between “policies” and “instructions” is often unclear, and treating instructions as if they were policies leads to confusing and brittle failure modes. Today, both alignment boundaries and performance or business objectives are commonly expressed using the same mechanism—rules or instructions—blurring fundamentally different concerns under a single notion of “policy.” This separation is critical: some instructions define organizational and alignment constraints, while others encode task goals and performance requirements. Conflating these concepts results in misaligned controls, as they require different enforcement strategies and, in many cases, different ownership and roles within the organization.

The goal of this talk is to provide AppSec and GRC teams with a clearer mental model for operating LLM policy compliance in production, a checklist of questions to ask about existing guardrail solutions, and a better understanding of what it actually takes to keep LLM systems compliant over time.
Speakers
avatar for Oren Rachmil

Oren Rachmil

Senior AI Researcher, Fujitsu Research of Europe

Oren Rachmil is a Senior AI Researcher at Fujitsu Research of Europe, working on the safety, evaluation, and security of large language model systems. His recent research focuses on analyzing gaps in open-source LLM vulnerability scanners, understanding evaluator reliability, and...
Hall K1 (Level -2)

3:30pm CEST

The TPM and You - How (and why) to actually make use of your TPM
Friday June 26, 2026 3:30pm - 4:15pm CEST
There is a common saying that "every problem in cryptography can be reduced to a key management problem". OWASP's Cheat Sheet series even has a whole document dedicated to "Cryptographic Storage". What if we could make life easier for ourselves in this area?

TPMs (Trusted Platform Modules) have been a fixed part of every standard PC for many years, providing all users with "free" hardware that can be used for all kinds of cryptography.
They are already widely used by most operating systems and firmware, but haven't really found their way into userspace applications yet.

This talk elaborates on why that is and how to change it. We will discuss the capabilities of a TPM and demonstrate them live with a sample application: a TOTP client which stores its secrets securely.
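For context on what such a client protects: a TOTP code is just an HMAC over a time counter (RFC 6238), and the HMAC key is the secret the talk proposes to seal inside the TPM instead of keeping on disk. A plain-software sketch of the code-generation side, shown here only to make clear what the TPM would be guarding:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 TOTP using HMAC-SHA1. In the talk's setup the decoded
    secret would live inside the TPM rather than in this variable."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

With a TPM-backed client, the `hmac.new` step would be replaced by an HMAC operation performed by the TPM against a non-exportable key.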
Speakers
Mathias Tausig

Senior Security Consultant, SBA Research

* Graduated in mathematics
* Holistic perspective on computers: former developer, sysadmin, security officer, university teacher and even computer salesman
* Now a security consultant specializing in application security
* Open source lover
* Chapter Lead from OWASP Vienna...
Hall G1 (Level -2)

3:30pm CEST

Insecurity as Code: How Modern Software Scaled the Attack Surface
Friday June 26, 2026 3:30pm - 4:15pm CEST
Drawing on large-scale telemetry from real-world production environments, this talk examines what modern application and supply-chain security actually look like in 2025–2026. The data paints a clear picture: many organizations ship vulnerable dependencies, exposed secrets remain surprisingly common, infrastructure logging is frequently incomplete, and malicious packages can reach production environments.

We’ll connect these observations to recent supply-chain incidents, from SolarWinds to self-replicating npm worms, and explore why vulnerabilities often persist long after disclosure. More importantly, we’ll discuss which security controls measurably reduce risk in practice, and which tend to generate noise without improving outcomes.

This talk focuses on the gap between defensive effort and attacker leverage - where defenders lose time, and where attackers gain scale.
Speakers
Igor Stepansky

Security Researcher, Orca Security

I'm Igor Stepansky, a Security Researcher at Orca Security specializing in the AppSec domain. I bring a strong and diverse background in cybersecurity, with hands-on experience in integrating security solutions such as SAST, IaC scanning, SCA, secrets detection, and malicious package...
Hall K2 (Level -2)
 