Loading…
Venue: Hall G2 (Level -2) clear filter
arrow_back View All Dates
Friday, June 26
 

10:30am CEST

Your Localhost Is Lying to You: Trust Boundary Failures in Enterprise SSO
Friday June 26, 2026 10:30am - 11:15am CEST
When an attacker lands on a user’s machine, your SSO should not hand them the keys to your network. Yet many enterprise systems do because they assume localhost subdomains are safe. They are not.

This talk shows how a common DNS misconfiguration (localhost.target.com → 127.0.0.1), combined with domain-wide cookies (Domain=.target.com), allows a locally executed request context to inherit an authenticated session. No XSS. No phishing. Just browser-native behavior.

This flaw is rarely detected by scanners or standard penetration tests, yet it appears in real enterprise deployments today. The session presents a practical testing methodology, a defensive checklist, and research-based validation techniques to assess this class of trust boundary failure safely.

Attendees will leave able to identify and fix this issue in their own SSO deployments next week.
Speakers
avatar for Rupesh Kumar

Rupesh Kumar

Application Security Researcher | Red Team Practitioner

Rupesh Kumar is an offensive security researcher with 1.5 years of experience in web application testing, vulnerability research, and red team operations. He has reported critical and high-severity vulnerabilities to organizations across government, defense, healthcare, and critical... Read More →
Friday June 26, 2026 10:30am - 11:15am CEST
Hall G2 (Level -2)

11:30am CEST

Effort is All You Need: Testing LLM Applications in the Real World
Friday June 26, 2026 11:30am - 12:15pm CEST
Security testing of GenAI systems is often reduced to "LLM red teaming": probing a model in isolation to see what unsafe/offensive content it will generate. In practice, this approach falls short. As security practitioners, we need to assess complete LLM application use cases, focusing on how inputs and outputs propagate through application logic and enable concrete security risks such as data exfiltration, cross-site scripting, and authorization bypass.

In this talk, we share practical experience and supporting open-source tooling we developed for assessing LLM applications. These focus on testing systems where the LLM is embedded in application logic rather than exposed as a simple inference endpoint.

It covers approaches for testing non-conversational GenAI workflows, WebSockets, and custom APIs; building scoped prompt injection datasets aligned with application logic and engagement constraints; applying effort-based jailbreak techniques (e.g. anti-spotlighting, best-of-n, crescendo, ...) to evaluate guardrail robustness and demonstrate practical bypasses; and conducting meaningful testing in isolated or air-gapped environments.

Speakers
avatar for Donato Capitella

Donato Capitella

Principal Security Consultant, Reversec

Donato Capitella is a Software Engineer and Principal Security Consultant at Reversec, with over 15 years of experience in offensive security and software engineering. Donato spent the past 3 years conducting research and assessments on Generative AI applications, covering topics... Read More →
avatar for Thomas Cross

Thomas Cross

Security Consultant, Reversec

Friday June 26, 2026 11:30am - 12:15pm CEST
Hall G2 (Level -2)

1:15pm CEST

What Our Pen Tests Never Found — And How Attackers Did
Friday June 26, 2026 1:15pm - 2:00pm CEST
Penetration testing is a crucial part of application security practices, yet attackers often succeed in ways no test ever reported. No injection, no memory corruption, no failed authentication. The applications behaved exactly as designed — and that was enough.

In this talk, we will explore what penetrating testing is intended to detect and how attackers actually compromise the systems. This talk will address why well-scoped penetration testing frequently revealed "no critical findings" while attackers later leveraged legitimate workflows, permission assumptions, and trust boundaries to cause serious harm.
Based on real world examples and post incident analysis, this talk will walk through security issues that were frequently overlooked during testing, not because testers lacked skills, but because the testing process made assumptions that attackers did not follow. We will focus on examining the blind spots in the penetration testing process, which include behaviors that only appear in production, cross-feature chaining, abuse of business logic, and trust assumptions built into system architecture.

The objective of this talk will be to comprehend where pen testing ends and how defenders might modify their testing tactics accordingly, rather than to replace it. This talk will break down the classes of issues pen tests routinely miss, how attackers discover them post-deployment, and what changed when testing strategies shifted from endpoint coverage to adversary-aware validation.

Attendees will leave with practical techniques to evolve their AppSec testing without increasing cost or abandoning penetration testing.
Speakers
avatar for Ramya M

Ramya M

Application Analyst, Okta, Inc,

Ramya M is a cybersecurity professional, currently working at Okta, Inc., specializing in application security, product security, identity security, and secure SDLC automation. She has led enterprise-scale initiatives across secure coding, DevSecOps hardening, vulnerability triage... Read More →
Friday June 26, 2026 1:15pm - 2:00pm CEST
Hall G2 (Level -2)

2:15pm CEST

Trust No History: Why Every "Remembered" Interaction is a Potential Backdoor
Friday June 26, 2026 2:15pm - 3:00pm CEST
As AI transitions from stateless tools to autonomous agents, the context window has become the primary attack surface. By giving agents the ability to remember, summarize, and collaborate, we have created a machine that can be gaslit. This session moves beyond transient prompt injections into the realm of persistent memory corruption. We explore how an adversary can rewrite an agent’s history, bias its knowledge base, and plant sleeper instructions that trigger long after the initial interaction. We will dissect the systematic subversion of the agentic memory stack and demonstrate why developers must stop treating agent memory as a passive data store and start defending it as the engine of the agent’s survival
Speakers
avatar for Rico Komenda

Rico Komenda

Senior Security Consultant

Rico is a senior product security engineer. His main security areas are in application security, cloud security, offensive security and AI security.

For him, general security intelligence in various aspects is a top priority. Today’s security world is constantly changing and you... Read More →
avatar for Barno Kaharova

Barno Kaharova

Senior Consultant, AI Security Expert, adesso SE

Barno is a expert specializing in data engineering, data modeling, and machine learning security. Driven by a passion for innovation, she develops cutting-edge methodologies to protect AI systems from adversarial threats, pushing the boundaries of what’s possible in AI security... Read More →
Friday June 26, 2026 2:15pm - 3:00pm CEST
Hall G2 (Level -2)

3:30pm CEST

Rewriting DAST Playbook: AI Agents and the Future of Web App Security
Friday June 26, 2026 3:30pm - 4:15pm CEST
The landscape of DAST (Dynamic Application Security Testing) tools is evolving to address modern web application complexities. While these tools are effective at detecting classic vulnerabilities like injection flaws, misconfigurations, and broken access control, they struggle with JavaScript-heavy SPAs, complex workflows, file upload/download analysis, and second-order vulnerabilities. To improve, modern DAST solutions are beginning to integrate AI-driven agentic browsers (e.g., Playwright + AI), out-of-band payloads, timing-based testing, and workflow-aware automation to better simulate real user behavior and detect deeper, context-sensitive issues.
Speakers
avatar for Divyansh Jain

Divyansh Jain

Application Security Analyst, Checkmarx Ltd.

Divyansh Jain is a passionate security engineer with experience in building and enhancing automated vulnerability scanners, focusing on issues like IDOR, broken access control, and authentication flaws. He has contributed extensively to open-source security tools, improved detection... Read More →
avatar for Aditya Dixit

Aditya Dixit

Application Security Analyst, Checkmarx Ltd.

Security Analyst with a hybrid background in software engineering, artificial intelligence, and cybersecurity. Experienced in developing AI/ML solutions and now focused on securing intelligent systems against emerging threats. Areas of interest include application security, adversarial... Read More →
Friday June 26, 2026 3:30pm - 4:15pm CEST
Hall G2 (Level -2)
  Testing
 
Share Modal

Share this link via

Or copy link

Filter sessions
Apply filters to sessions.
Filtered by Date -