Loading…
Thursday June 25, 2026 2:30pm - 4:30pm CEST
LLMs are no longer standalone chatbots—they're increasingly embedded directly into application logic, with access to databases, APIs, file systems, and internal services. This architectural shift means the most dangerous LLM exploits don't just manipulate the model; they use the model as an attack vector to reach traditional AppSec targets. Prompt injection becomes a path to SQL injection. Conversational manipulation enables SSRF. The AI agent becomes an unwitting insider threat.

In this hands-on POD, participants will experience this convergence firsthand through a purpose-built vulnerable web application with an integrated AI agent. Through independent challenges, attendees will discover how attackers chain LLM manipulation with classic web exploitation—and why securing AI-integrated applications requires understanding both domains.

Challenges are designed for drop-in participation and cover multiple difficulty levels:
- Beginner-friendly: Basic prompt manipulation and information disclosure
- Intermediate: Chaining AI misuse with traditional web exploitation
- Advanced: Multi-stage attacks combining indirect prompt injection with server-side vulnerabilities

Each challenge is self-contained (under 15 minutes) with clear objectives, hints available on request, and facilitators ready to guide participants. Whether you're new to AI security or a seasoned pentester curious about LLM attack vectors, you'll walk away with practical techniques applicable to real-world assessments.

Challenges are mapped to multiple OWASP frameworks: the OWASP Top 10 for LLM Applications (covering risks like LLM01: Prompt Injection, LLM07: Insecure Plugin Design), the OWASP API Security Top 10, and the classic OWASP Web Application Top 10, helping participants connect new AI risks to established security knowledge.

No prior AI/ML experience required. Just curiosity and a laptop with a modern browser. All challenges run in-browser against our cloud-hosted lab environment.
Speakers
avatar for Dan Lisichkin

Dan Lisichkin

AI Security Researcher
Dan Lisichkin is the Cyber Security Researcher for Pillar Security, focusing on AI security, adversarial threats, and securing AI based systems. With over five years of experience in the cybersecurity and IT space, Dan has extensive knowledge in areas including malware analysis, reverse... Read More →
avatar for Ziv Karliner

Ziv Karliner

CTO, Pillar Security

Ziv Karliner is the Co-Founder and CTO of Pillar Security, where he works on securing AI-powered applications and agent-based systems. With over a decade of experience in cybersecurity, Ziv has led research and engineering efforts across application security, cloud security, financial... Read More →
avatar for Eilon Cohen

Eilon Cohen

AI Security Researcher, Pillar Security
That kid who took apart all his toys to see how they worked.
Currently breaking (and fixing) things in Pillar Security lab. Education spans from Mechanical Engineering and Robotics to Computer science, but a self-made security researcher and practitioner. Ex-IBM as a security engineer, securing multiple complex cloud and IT environments, now... Read More →
avatar for Ariel Fogel

Ariel Fogel

Founding Engineer & Researcher, Pillar Security

Ariel Fogel is a founding engineer & researcher at Pillar Security, where he hardens AI applications against real-world attacks and compliance risks. Over the past decade, he has built production systems in Ruby, TypeScript, Python, and SQL, shipping everything from full-stack web... Read More →
Thursday June 25, 2026 2:30pm - 4:30pm CEST
Room -2.92 (Level -2)

Sign up or log in to save this to your schedule, view media, leave feedback and see who's attending!

Share Modal

Share this link via

Or copy link