Thursday, June 25
 

10:30am CEST

Why AppSec Fails at Scale (and How to Fix It)
Thursday June 25, 2026 10:30am - 11:15am CEST
As organizations grow, application security often becomes more painful but not more effective. Vulnerabilities recur, engineers feel blocked, and security teams struggle to scale. These failures are rarely caused by careless engineers or missing tools — they are symptoms of broken systems.

In this talk, we examine why AppSec fails to scale, particularly in growing teams and startups, and why adding more guidelines, scanners, or training usually makes the problem worse. Instead, we approach application security as a sociotechnical system shaped by incentives, defaults, ownership boundaries, and feedback loops.

In this session, you will hear about common failure modes such as compliance-driven security, misplaced responsibility, and metrics that reward activity instead of risk reduction, followed by practical strategies for fixing the system: shifting security into platforms and defaults, reducing cognitive load for engineers, and aligning AppSec goals with delivery pressure and business constraints.
Speakers

Eduard Thamm


Eduard is a technical leader with a background in distributed systems, platform engineering, and security. He works in regulated environments, designing Kubernetes-based platforms where reliability, compliance, and developer experience must coexist. His focus is on architecture under...
Hall K2 (Level -2)

10:30am CEST

Scanning Agentic AI Systems: Beyond Traditional LLM Red Teaming
Thursday June 25, 2026 10:30am - 11:15am CEST
As agentic AI systems evolve from simple LLM interfaces into autonomous, multi-agent workflows, traditional LLM-focused red teaming is no longer enough: the high autonomy of these systems demands a much more detailed risk assessment. Unlike standalone LLMs with text input and output, agentic systems interact with tools, memory, external data, and other agents, creating many new attack surfaces. Attacks may be introduced through emails, tool descriptions, or environmental content, and their impact can go beyond model responses to affect system behavior and planning, and to trigger harmful real-world actions.

In this talk, we share our hands-on journey building a comprehensive red-teaming scanning solution tailored for agentic AI systems. We begin by analyzing why current scanning tools fall short, specifically their emphasis on structured components (e.g., protocols like MCP, A2A, and Skills) while overlooking the unstructured and highly dynamic attack vectors where most real-world risks emerge. We then walk through the technical challenges of simulating realistic attacks without harming production environments, handling the diversity of agent architectures, frameworks, and agency levels, and designing scanners that generalize across heterogeneous systems.

We present a practical full scanning pipeline that creates a novel holistic solution, including sandboxing and emulation strategies, automated system discovery pipelines, abstraction-based scanning mechanisms, and a risk-aware robustness scoring framework that goes beyond binary attack success. Throughout the talk, we highlight concrete lessons learned, trade-offs between cost and reliability, and real examples of agent-specific vulnerabilities.
We conclude with a concrete end-to-end scanning workflow and discuss open challenges such as adaptive scanner generation and black-box agent discovery. Attendees will leave with a deep understanding of why agentic AI requires fundamentally new red teaming methodologies and with actionable techniques for securing real-world autonomous AI systems.
Speakers

Roman Vainshtein

Research Director, GenAI Trust, Fujitsu Research of Europe

I am Research Director of the Generative AI Trust and Security Research team at Fujitsu Research of Europe, where I lead efforts to enhance the security, trustworthiness, and resilience of Generative AI systems. My work focuses on bridging the gap between AI security, red-teaming...

Amit Giloni

Principal Researcher, GenAI Trust team, Fujitsu Research

Dr. Amit Giloni is a Principal Researcher at Fujitsu Research of Europe, where she is part of the GenAI Trust team.
Her research spans multiple areas of machine learning, including classical ML, deep learning, generative AI, and agentic AI. She focuses on key challenges in trustworthy AI, such as bias and fairness, explainability, adversarial machine learning, robustness to abnormalities, and confidentiality...

Roy Betser

Senior Researcher, GenAI Trust team, Fujitsu Research

Roy Betser is a PhD candidate at the Technion and a senior AI security researcher at Fujitsu Research of Europe, where he is part of the GenAI Trust team. His research focuses on analyzing representation and embedding spaces in foundation models and on developing practical trust and...
Hall G2 (Level -2)
  Testing

11:30am CEST

OWASP masCon - Unveiling The Internals From Multiplatform Mobile Runtimes
Thursday June 25, 2026 11:30am - 11:55am CEST
Flutter, React Native and Unity are the main multiplatform runtimes of choice when developing mobile applications for iOS and Android. We will cover their main characteristics, starting with the programming language associated with each framework, the ecosystem and the toolchains, and showcase some clever low-level details of their implementations. We will also show how to recover code and data from the final release binaries with the help of open-source plugins for radare2.
Speakers

Sergi Alvarez

Mobile Security Research Engineer, NowSecure
Pancake is a mobile security research engineer at NowSecure with more than 25 years of experience in the reverse engineering and security fields. Author and maintainer of tools like radare2, r2frida and other plugins around the radare ecosystem, he began working as a forensic analyst...
Room -2.33 (Level -2)

1:15pm CEST

One IDE to Rule Them All - Securing Your Supply Chain’s Weakest Link
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Your API keys, business logic, database connections, and sometimes even customer data and user information may all be directly accessible from your IDE. This makes the IDE one of the top targets for threat actors trying to break in.

Because the IDE has direct access to so much data, your entire software supply chain becomes only as secure as a single extension, turning the IDE into the weakest link in the chain.

It takes only one malicious extension, one vulnerability, or one prompt to compromise your entire organization. We will explore how each of these attack scenarios can turn a developer’s workspace into a gateway for threat actors to exfiltrate customer data before a single line of code is even written.

We’ll dive deep into the IDE’s architecture, starting with how IDE extensions are developed and their permissions stack, and show how threat actors could manipulate extensions and IDE configurations to bypass security measures: exfiltrating valuable information from the developer’s IDE, performing lateral movement directly after infection, and remaining persistent even after being removed.
It's not just about threat actors hacking your IDE - they will go after everything in the organization that’s connected to it, and they will try to stay there as long as possible.

We’ll take a look at how threat actors could leverage vulnerabilities in existing IDE extensions to execute remote code and exfiltrate information - transforming a developer's local machine into an under-the-radar backdoor into your organization. This includes our findings of multiple 0-day vulnerabilities in popular IDE extensions, and our research on weaponizing Chromium 1-day vulnerabilities in Cursor & Windsurf.

We’ll wrap up with best-practice recommendations for securing your IDE, avoiding malicious extensions, and adding company-wide policies for approved extensions, and we’ll show security teams how to integrate IDE security into their organization at scale.
Speakers

Moshe Siman Tov Bustan

Security Research Team Leader, OX Security

Moshe is a Security Research Team Lead at OX Security, a company specializing in software supply chain security, and has worked in the security industry for 13 years. His work spans cloud security research, container security, memory forensics, and an in-depth understanding of programming...

Nir Zadok

OX Security

Nir Zadok is a rocket scientist who got a bit bored, so he moved to cybersecurity. Since then, as a white hat, he has managed to break dozens of mobile, web, and desktop applications. These days Nir is focused on software supply chain and innovative attack vector research via widely...
Hall K1 (Level -2)

1:15pm CEST

Retiring CVE Chasing: Defending Against Application Exploit Techniques
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Vulnerability scanners are everywhere. CVE databases are growing exponentially. Yet vulnerability exploitation has surpassed phishing as the leading initial access vector. What's going wrong?

The problem isn’t a lack of vulnerability data – it’s that defenders are solving last year’s problems. While teams drown in CVE backlogs, attackers use AI to rapidly weaponize exploit techniques that work across vulnerability classes. OS command injection, deserialization, and path traversal aren't just individual CVEs – they're attack patterns that persist regardless of patch status.

This session introduces the Application Attack Matrix, the first comprehensive, community-driven framework mapping tactics, techniques, and procedures used against modern applications. Built by contributors from Mandiant, Microsoft, AWS, and Meta, it does for application security what MITRE ATT&CK did for enterprise defense.

You’ll learn how to shift from reactive CVE remediation to proactive technique-based defense, understand which exploit patterns dominate real-world attacks, and prioritize security controls that protect against entire attack classes, not just individual CVEs.
Speakers

Idan Elor

Field CTO, Oligo Security

Idan Elor is Field CTO at Oligo Security, where he partners with large enterprises to solve complex application and cloud security challenges. He most recently served as Director of Solution Engineering & Tech-Alliances at Apiiro, where he empowered enterprises to secure their software...
Hall G1 (Level -2)

1:15pm CEST

The Map of Artificial Treasures: What to Automate in Security - and Why?
Thursday June 25, 2026 1:15pm - 2:00pm CEST
With the rise of AI, especially large language models, it seems every security workflow will soon be automated or heavily supported by automation - from LLM-powered threat-intelligence enrichment and compliance mappings to AI-written threat models, code fixes and complete CISO roadmaps. But which processes will truly benefit, and in which cases will AI just add cost, complexity, and risk? As security managers or leaders, how can we determine where to focus our efforts and investments upfront?

This talk presents a practical framework for evaluating the effectiveness of AI-driven automation in application security and related fields. First, we explore how to identify processes that are strong candidates for automation based on criteria such as repeatability, return on investment, and risk tolerance. Then, we map typical security processes to AI approaches, including large language models (LLMs), traditional machine learning, retrieval-augmented generation (RAG), and hybrid systems.

We will show how these solutions are applied to critical security areas, such as vulnerability management, secure software development, threat detection, and compliance. We will explore an AI Capability Map, industry benchmarks, and real-world examples, such as the use of RAG-powered chatbots for security guidance and LLMs for compliance analysis. Our goal is to help you determine where AI would be a good fit for your organization and where you would likely see measurable value, so that you can make informed decisions. We will also examine the available data: in which areas of the industry is value already being recognized? Finally, we explore potential pitfalls, from fragile LLM implementations to poor risk modeling, and discuss how to avoid wasting resources.

Using industry data, real-world experience, and structured criteria, this talk provides security leaders and practitioners with more guidance in this rapidly evolving field.
Speakers

Michael Helwig

Senior Security Consultant, secureIO GmbH

I am a security consultant and founder of secureIO GmbH, a consulting company that focuses on building application security programs and advising clients from different industries on secure software development and compliance. I focus on DevSecOps, security testing, AI automation...
Hall D (Level -2)

2:15pm CEST

OWASP masCon - Attacking ART
Thursday June 25, 2026 2:15pm - 2:40pm CEST
When analyzing the security of mobile applications, we often have to overcome local security controls to perform a thorough audit. This can include obtaining access to the application’s internal storage, disabling TLS pinning or forcing the application to use our interception proxy.
For many applications, this is straightforward. We can install the app on our rooted device, inject Frida and accomplish all of the above. However, this gets tricky when the application has implemented resiliency controls, known as Runtime Application Self Protection (RASP).

In this talk, I will zoom in on one lesser-known technique targeting the Android Runtime (ART): Manipulating ODEX/VDEX files. Any code implemented in Java/Kotlin can easily be manipulated without leaving any traces.
Speakers

Jeroen Beckers

Mobile Solution Lead, NVISO

I am the mobile solution lead at NVISO, where I am responsible for quality delivery, innovation and methodology for all mobile assessments. I am actively involved in the mobile security community, and I try to share my knowledge through open-source tools, blogposts, trainings and...

2:15pm CEST

Beyond the Chatbox: Implementing Guardrails for Autonomous Agents and LLMs Using Tools
Thursday June 25, 2026 2:15pm - 3:00pm CEST
As LLMs evolve from passive text generators to autonomous Agentic AI, the attack surface is shifting from simple prompt injection to Excessive Agency and Goal Hijacking. When we grant agents the power to execute shell commands, call sensitive APIs, or modify cloud infrastructure, we are essentially deploying "unattended administrators" into our environments.

This session moves past theoretical AI risks to provide a hands-on blueprint for securing autonomous actors. I will explore the newly released OWASP Top 10 for Agentic Applications 2026, focusing on critical vulnerabilities like ASI02 (Tool Misuse) and ASI05 (Unexpected Code Execution). Attendees will leave with a practical framework for implementing "Least-Agency" architecture, hardware-enforced sandboxing, and real-time intent validation.
Speakers

Rovindra Kumar

Security Architect, Google

Rovindra has 14+ years of experience in defining secure strategy, architecture, and implementation of security controls aligned with security services, including cloud security, threat protection, and implementation of cloud-native security controls. Providing thought leadership...

Mikesh Khanal

Security Engineer, Google

Mikesh is a senior cloud security engineer at Google with more than a decade of experience, specializing in designing and implementing robust security architectures for organizations of all sizes. He is a recognized expert in cloud security design and architecture, compliance, and risk...
Hall G1 (Level -2)
 