Audience: AppSec Engineers
Thursday, June 25
 

12:15pm CEST

Hunting Critical CVEs: A Hands-On, Pick-Your-Own Exploitation POD
Thursday June 25, 2026 12:15pm - 2:15pm CEST
New CVEs are released constantly, but in practice most teams never go beyond reading the advisory or relying on automated scanning. This POD is designed to change that by giving participants the time and a platform to hunt and exploit real-world critical CVEs.

Participants will have access to 10 hands-on challenges, each based on a real high- or critical-severity CVE commonly found in modern applications. Each challenge runs within a limited time window and can be attempted independently of the others.

For each challenge, participants can click a Deploy Lab option to spin up a temporary target system. The deployed application or system contains a CVE that is not disclosed to the participant in advance; the task is to identify the vulnerability, understand its behavior, and exploit it to demonstrate impact.

There is no fixed order or linear walkthrough. Participants are free to choose which CVEs to attempt, how deep to go with each one, and how long to stay in the activity. Some CVEs allow participants to escalate to admin; others can yield a reverse shell. Labs are provisioned on demand using infrastructure-as-code, so participants can work independently on each challenge.
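The on-demand, time-boxed lab lifecycle described above can be sketched in a few lines. This is a minimal illustration only; the in-memory registry, TTL value, and function names are assumptions, not the POD's actual provisioning platform:

```python
import time
import uuid

# Hypothetical sketch of a "Deploy Lab" flow: each participant gets an
# isolated, temporary target that is torn down when its time window closes.
LAB_TTL_SECONDS = 2 * 60 * 60  # challenges run in a limited time window

labs = {}

def deploy_lab(challenge_id: str, now: float) -> str:
    """Provision a temporary target for one participant and one challenge."""
    lab_id = uuid.uuid4().hex[:8]
    labs[lab_id] = {"challenge": challenge_id, "expires": now + LAB_TTL_SECONDS}
    return lab_id

def reap_expired(now: float) -> int:
    """Tear down labs whose time window has closed; returns count removed."""
    expired = [k for k, v in labs.items() if v["expires"] <= now]
    for k in expired:
        del labs[k]
    return len(expired)

t0 = time.time()
lab = deploy_lab("cve-challenge-3", t0)
assert lab in labs
print(reap_expired(t0 + LAB_TTL_SECONDS + 1))  # prints 1
```

In a real deployment the dictionary would be replaced by infrastructure-as-code calls (e.g. applying and destroying a per-participant stack), but the deploy/expire lifecycle is the same.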

Some participants may focus on understanding a single CVE and reproducing it reliably. Others may try to exploit multiple issues or explore alternate attack paths. Both approaches are expected and encouraged.

The emphasis of this POD is on building practical intuition: how to read advisories critically, identify vulnerable attack surfaces, validate exploitability, and understand real impact beyond severity scores. The activity is fully hands-on, informal, and designed so people can join and leave at any time without falling behind.
Speakers

Abhinav Mishra

Founder, Cyber Security Guy

Abhinav Mishra is a cyber security practitioner with over 14 years of hands-on experience in vulnerability research, offensive security, and application security testing. He has carried out 1,000+ security reviews and penetration tests across web, mobile, API, and cloud-based systems…
Room -2.92 (Level -2)

12:15pm CEST

“2001: Agentic Odyssey” When threat modelling meets HAL, agentic AI, testing and safety engineering
Thursday June 25, 2026 12:15pm - 2:15pm CEST
“2001: Agentic Odyssey” is a hands-on, drop-in POD where we threat model the HAL 9000 system from 2001: A Space Odyssey as if it were a modern agentic AI system (LLM + tools + permissions + side effects). I bring a HAL data-flow diagram (DFD), and together we mark trust boundaries and do classic “what can go wrong?” threat identification.

Participants then split into small groups to build attack-tree branches and translate them into Fault Tree Analysis (FTA) using AND/OR logic and minimal cut sets, including lightweight probability estimates to prioritise the most likely failure chains. We finish by turning those failure paths into automation-ready test ideas (fault injection, invariants, evidence), and optionally drafting a structured HAL threat model for submission to the OWASP Threat Model Library.

Designed so anyone can contribute in 10-15 minutes, while advanced participants can go deep on FTA and prioritisation. Every stage is structured so that drop-ins can join at any time.
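The AND/OR gate arithmetic used in the FTA step can be sketched briefly. This is an illustrative example only; the gate structure and probabilities below are hypothetical, not the HAL model built in the session:

```python
from functools import reduce

def p_and(*ps):
    """Probability that all independent basic events occur (AND gate)."""
    return reduce(lambda a, b: a * b, ps, 1.0)

def p_or(*ps):
    """Probability that at least one independent event occurs (OR gate)."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), ps, 1.0)

# Hypothetical branch: "agent performs an unsafe side effect" requires BOTH a
# successful prompt injection AND a missing permission check (AND gate), where
# the injection arrives via direct input OR poisoned tool output (OR gate).
p_injection = p_or(0.10, 0.05)     # direct prompt OR poisoned tool output
p_top = p_and(p_injection, 0.20)   # injection AND missing permission check
print(round(p_top, 4))             # prints 0.029
```

Ranking top-event probabilities like this across branches is what lets the group prioritise the most likely failure chains; each AND gate here corresponds to one minimal cut set.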
Speakers

Petra Vukmirovic

Head of Information Security at Numan and Fractional Head of Product, Devarmor

Petra is a technology enthusiast, leader and public speaker. A former emergency medicine doctor and competitive volleyball athlete, she thrives in challenging environments and loves creating order from chaos. Initially pursuing a medical career, Petra's passion for technology led…
Room -2.92 (Level -2)

2:30pm CEST

From Prompts to Payloads: Exploiting the AI-AppSec Intersection
Thursday June 25, 2026 2:30pm - 4:30pm CEST
LLMs are no longer standalone chatbots—they're increasingly embedded directly into application logic, with access to databases, APIs, file systems, and internal services. This architectural shift means the most dangerous LLM exploits don't just manipulate the model; they use the model as an attack vector to reach traditional AppSec targets. Prompt injection becomes a path to SQL injection. Conversational manipulation enables SSRF. The AI agent becomes an unwitting insider threat.

In this hands-on POD, participants will experience this convergence firsthand through a purpose-built vulnerable web application with an integrated AI agent. Through independent challenges, attendees will discover how attackers chain LLM manipulation with classic web exploitation—and why securing AI-integrated applications requires understanding both domains.
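The "prompt injection becomes SQL injection" chain described above can be illustrated in a few lines. This is a generic sketch, not the workshop's lab: the schema, function names, and payload are ours, and the "model output" stands in for text an attacker steered the LLM into producing:

```python
import sqlite3

# Toy database standing in for an application backend the AI agent can query.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.executemany("INSERT INTO users VALUES (?, ?)", [("alice", 0), ("bob", 1)])

def vulnerable_lookup(model_output: str):
    # The agent pastes LLM output straight into SQL -- prompt injection
    # becomes SQL injection.
    return db.execute(
        f"SELECT name FROM users WHERE name = '{model_output}'"
    ).fetchall()

def safe_lookup(model_output: str):
    # Parameterized query: the same payload is treated as data, not SQL.
    return db.execute(
        "SELECT name FROM users WHERE name = ?", (model_output,)
    ).fetchall()

payload = "x' OR is_admin = 1 --"
print(vulnerable_lookup(payload))  # leaks the admin row: [('bob',)]
print(safe_lookup(payload))        # returns []
```

The fix is the same as in classic AppSec (parameterization, least privilege for the agent's credentials); what is new is that the untrusted input arrives through a conversation rather than a form field.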

Challenges are designed for drop-in participation and cover multiple difficulty levels:
- Beginner-friendly: Basic prompt manipulation and information disclosure
- Intermediate: Chaining AI misuse with traditional web exploitation
- Advanced: Multi-stage attacks combining indirect prompt injection with server-side vulnerabilities

Each challenge is self-contained (under 15 minutes) with clear objectives, hints available on request, and facilitators ready to guide participants. Whether you're new to AI security or a seasoned pentester curious about LLM attack vectors, you'll walk away with practical techniques applicable to real-world assessments.

Challenges are mapped to multiple OWASP frameworks: the OWASP Top 10 for LLM Applications (covering risks like LLM01: Prompt Injection, LLM07: Insecure Plugin Design), the OWASP API Security Top 10, and the classic OWASP Web Application Top 10, helping participants connect new AI risks to established security knowledge.

No prior AI/ML experience required. Just curiosity and a laptop with a modern browser. All challenges run in-browser against our cloud-hosted lab environment.
Speakers

Dan Lisichkin

AI Security Researcher, Pillar Security

Dan Lisichkin is a cyber security researcher at Pillar Security, focusing on AI security, adversarial threats, and securing AI-based systems. With over five years of experience in the cybersecurity and IT space, Dan has extensive knowledge in areas including malware analysis, reverse…

Ziv Karliner

CTO, Pillar Security

Ziv Karliner is the Co-Founder and CTO of Pillar Security, where he works on securing AI-powered applications and agent-based systems. With over a decade of experience in cybersecurity, Ziv has led research and engineering efforts across application security, cloud security, financial…

Eilon Cohen

AI Security Researcher, Pillar Security

That kid who took apart all his toys to see how they worked.
Currently breaking (and fixing) things in the Pillar Security lab. His education spans from Mechanical Engineering and Robotics to Computer Science, but he is a self-made security researcher and practitioner. Ex-IBM as a security engineer, securing multiple complex cloud and IT environments, now…

Ariel Fogel

Founding Engineer & Researcher, Pillar Security

Ariel Fogel is a founding engineer & researcher at Pillar Security, where he hardens AI applications against real-world attacks and compliance risks. Over the past decade, he has built production systems in Ruby, TypeScript, Python, and SQL, shipping everything from full-stack web…
Room -2.92 (Level -2)
 