Venue: Hall K1 (Level -2)
Friday, June 26
 

10:30am CEST

When AI Attacks AI: Inside the Self-Propagating Botnet Built on Compromised AI Infrastructure
Friday June 26, 2026 10:30am - 11:15am CEST
ShadowRay did not disappear after disclosure.
Despite extensive public reporting and technical analysis, the campaign remains active and continues to expand in scale, with more than 230,000 exposed Ray endpoints and an order-of-magnitude increase in observed exploitation.

Enter a self-propagating botnet built from compromised machine-learning clusters, all running on Ray—the de facto execution layer of modern AI infrastructure, embedded across production training pipelines, inference services, and internal compute platforms.

This is ShadowRay 2.0.

The attackers weaponized Ray's orchestration features to spread autonomously across exposed servers, turning victims into both mining rigs and propagation nodes.

We'll walk through the concrete evidence that let the researchers stop the attack in real time by uncovering billions of dollars' worth of compromised compute. This includes LLM-generated payloads evolving in real time, GPU cryptojacking, scripts that eliminate competing miners, how Ray's own APIs were weaponized for lateral movement, and more.
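
To illustrate why exposed Ray endpoints make such effective propagation nodes, here is a minimal sketch (not the attackers' actual tooling; the host address and entrypoint are placeholders) showing how Ray's job-submission API will accept an arbitrary shell entrypoint from anyone who can reach the dashboard port:

```python
# Minimal sketch: Ray's Jobs API ships without authentication by default, so
# anyone who can reach the dashboard port (8265 by default) can submit
# arbitrary shell entrypoints to the cluster. The host and entrypoint below
# are illustrative placeholders, not real infrastructure or a real payload.
from ray.job_submission import JobSubmissionClient

# An exposed head node -- in the campaign described above, attackers scan the
# public internet for endpoints like this.
client = JobSubmissionClient("http://example-ray-head:8265")

# Any shell command becomes a cluster "job"; a harmless echo stands in here
# for the cryptominer or propagation payload an attacker would submit.
job_id = client.submit_job(entrypoint="echo 'this runs on the cluster'")

print(client.get_job_status(job_id))
```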

The talk also reveals the techniques the attackers use to evade detection, how they employ CI/CD for malware distribution, and how they are building multi-purpose capabilities beyond cryptojacking, including DDoS, data exfiltration, and more. This is AI infrastructure turned against itself, at internet scale, with verifiable proof.
Speakers

Gal Elbaz

Co-founder & CTO, Oligo Security

Co-founder & CTO at Oligo Security with 10+ years of experience in vulnerability research and practical hacking. He previously worked as a Security Researcher at Check Point and served in IDF Intelligence. In his free time, he enjoys playing CTFs. linkedin.com/in/gal-elb...

Avi Lumelsky

AI Security Researcher, Oligo Security

Avi has a relentless curiosity about business, AI, security—and the places where all three connect. An experienced software engineer and architect, Avi’s cybersecurity skills were first honed in elite Israeli intelligence units. His work focuses on privacy in the age of AI and...
Friday June 26, 2026 10:30am - 11:15am CEST
Hall K1 (Level -2)

11:30am CEST

Infrastructure Doesn’t Lie: Using Infrastructure Signals to Detect Shadow AI Built Applications
Friday June 26, 2026 11:30am - 12:15pm CEST
AI app builders now enable production apps to ship without repositories, CI/CD, or security review, often by non-traditional developers outside established engineering workflows. These Shadow AI apps bypass AppSec pipelines and governance, creating a growing blind spot in enterprise environments. This talk demonstrates how DNS, TLS, and hosting signals can detect shadow AI apps that existing controls miss.
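
As a rough illustration of the kind of infrastructure signal the abstract alludes to (the suffix list and record format below are assumptions, not the speaker's actual method), a sweep over enterprise DNS logs can flag internal hostnames whose CNAME chains land on hosting platforms commonly used by AI app builders:

```python
# Sketch under assumptions: given DNS records collected elsewhere, flag
# hostnames whose CNAME targets land on hosting platforms commonly used by
# AI app builders. The suffix list is illustrative, not authoritative.
from dataclasses import dataclass

# Hypothetical examples of builder/hosting suffixes an analyst might track.
AI_BUILDER_SUFFIXES = (
    ".lovable.app",
    ".vercel.app",
    ".replit.app",
    ".streamlit.app",
)

@dataclass
class DnsRecord:
    name: str   # hostname observed in enterprise DNS logs
    cname: str  # CNAME target it resolves through

def flag_shadow_ai_candidates(records: list[DnsRecord]) -> list[DnsRecord]:
    """Return records whose CNAME ends on a tracked AI-builder platform."""
    return [r for r in records if r.cname.rstrip(".").endswith(AI_BUILDER_SUFFIXES)]

if __name__ == "__main__":
    sample = [
        DnsRecord("expense-bot.corp.example.com", "expense-bot.lovable.app."),
        DnsRecord("www.example.com", "cdn.example.net."),
    ]
    for hit in flag_shadow_ai_candidates(sample):
        print(f"possible shadow AI app: {hit.name} -> {hit.cname}")
```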
Speakers

Balachandra Shanabhag

Product Security Lead, Cerebras

Bala works as a Staff Security Engineer at Cohesity and has over 15 years of experience across various domains of cybersecurity. He joined Cohesity as its Founding Product Security Engineer and helped bootstrap AppSec and other security initiatives. Before Cohesity, Bala worked at...
Friday June 26, 2026 11:30am - 12:15pm CEST
Hall K1 (Level -2)

2:15pm CEST

Marketplace Takeover: One Bug Away from Pwning 10 Million Developer Machines
Friday June 26, 2026 2:15pm - 3:00pm CEST
This is the story of a single CI bug with the potential to compromise more than 10 million workstations - a full takeover - for anyone using popular tools like Cursor and Windsurf (so every developer, really).

Learn about a critical flaw - that will be shared by the team who first identified it - in open-vsx.org, the open-source marketplace powering nearly every VSCode fork, including Cursor, Windsurf, Gitpod, StackBlitz, and Google Cloud Shell Editor.

The vulnerability sat in the project's GitHub Actions workflow, which automatically builds and publishes extensions using a privileged service token. By triggering the workflow with a crafted dependency, an attacker could run arbitrary code during npm install, exfiltrate the marketplace's OVSX_PAT token, and use it to overwrite or republish any extension in the registry. From there, the blast radius is absolute and devastating.
Any developer using a VSCode fork that auto-updates extensions would receive malicious payloads without interaction — compromising local machines, CI/CD environments, and downstream software.

This session breaks down the exploit path, the disclosure timeline, and the architectural weaknesses that made it possible. It highlights the systemic risk of ungoverned extension ecosystems and how "app store" mechanics in developer tooling have quietly become high-value attack surfaces.

But don't panic. We'll wrap with concrete mitigations like: isolating build runners from publishing credentials, auditing workflow environments for untrusted dependency execution, and implementing continuous marketplace governance to prevent similar full-ecosystem takeovers.
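
One of those mitigations, auditing workflow environments for untrusted dependency execution, can be approximated with a small script. The sketch below is an assumption about what such an audit might look like, not Open VSX's actual tooling: it walks a node_modules tree and reports packages that declare npm lifecycle scripts, which is where install-time code execution hides.

```python
# Sketch under assumptions: flag npm dependencies that declare lifecycle
# scripts (preinstall/install/postinstall/prepare), since those run arbitrary
# code during `npm install` -- the execution primitive described above.
# This is an illustrative audit helper, not Open VSX's actual tooling.
import json
import sys
from pathlib import Path

LIFECYCLE_HOOKS = ("preinstall", "install", "postinstall", "prepare")

def audit_node_modules(root: Path) -> list[tuple[str, str]]:
    """Return (package name, hook name) pairs for lifecycle scripts under root."""
    findings = []
    patterns = ("node_modules/*/package.json", "node_modules/@*/*/package.json")
    for pattern in patterns:
        for manifest in root.rglob(pattern):
            try:
                pkg = json.loads(manifest.read_text(encoding="utf-8"))
            except (OSError, json.JSONDecodeError):
                continue
            scripts = pkg.get("scripts") or {}
            for hook in LIFECYCLE_HOOKS:
                if hook in scripts:
                    findings.append((pkg.get("name", str(manifest.parent)), hook))
    return findings

if __name__ == "__main__":
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for name, hook in audit_node_modules(target):
        print(f"{name}: runs code via '{hook}' at install time")
```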
Speakers

Oren Yomtov

Principal Security Researcher, Koi Security

Oren Yomtov is a Principal Security Researcher at Koi, where he focuses on advancing research in software and blockchain security. He brings extensive experience from his work at Fireblocks, contributing to research on digital asset security and blockchain infrastructure.

Previous...

Yuval Ronen

Security Researcher, Koi Security

Yuval Ronen leads the security research at Koi, focusing on vulnerability research, threat intelligence, and developing detection methods to strengthen defenses across modern software ecosystems. He brings over seven years of experience in both offensive and defensive cybersecurity...
Friday June 26, 2026 2:15pm - 3:00pm CEST
Hall K1 (Level -2)

3:30pm CEST

From Safety to Policy: Enforcing Organizational Rules in LLMs and AI Agents
Friday June 26, 2026 3:30pm - 4:15pm CEST
Organizations deploying GenAI systems quickly discover that safety controls do not automatically enforce organizational policies. Real environments operate under large, evolving sets of domain-specific, organization-specific, and external policies driven by legal requirements, industry regulations, and internal governance rules, and these policies change periodically. Enforcing them in production is not a one-time setup problem; it is a continuous governance and operations challenge.

Existing guardrail solutions are not designed to handle custom, large-scale, and continuously evolving organizational policies. When AI agent developers or AI security teams try to stretch these safety-oriented systems into general policy enforcement, their underlying design assumptions no longer hold: they assume a small, static policy space rather than a broad and heterogeneous one. Static rules such as regexes become unmaintainable and detect unreliably at scale, fine-tuned classifiers require constant retraining, and LLM-as-a-judge pipelines, even when carefully calibrated, are expensive to run, introduce non-trivial latency, and are difficult to audit.
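
A tiny example of that brittleness (the policy and patterns are invented for illustration): a regex guardrail that blocks one phrasing of a prohibited disclosure is trivially bypassed by a paraphrase, so the rule set has to keep growing to stay effective.

```python
# Illustrative only: a static regex "guardrail" for a made-up policy
# ("never disclose unreleased pricing") and two model outputs.
import re

BLOCKLIST = [
    re.compile(r"unreleased pricing", re.IGNORECASE),
    re.compile(r"price list.*not.*public", re.IGNORECASE),
]

def blocked(output: str) -> bool:
    return any(pattern.search(output) for pattern in BLOCKLIST)

print(blocked("Sure, here is our unreleased pricing for Q3."))       # True: caught
print(blocked("Happy to share the Q3 rates we haven't announced."))  # False: same violation, different wording
```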

This talk describes how we stress-tested existing compliance approaches, including static guardrails, fine-tuned detectors, and LLM-as-a-judge pipelines, and analyzed how they degrade under realistic policy complexity.
We present a reframing of the problem: instead of relying solely on output-level judgments, policy violations can also be detected directly in the model’s internal space with a training-free approach. We explain what this shift enables in practice, including continuous compliance monitoring, policy updates without retraining loops, and improved auditability. We also discuss the limitations of this advanced approach.
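
To make that reframing concrete, the sketch below shows one way activation-space checks are commonly prototyped: embed a few known violations with an off-the-shelf encoder, average their hidden-state representations into a direction, and score new outputs by cosine similarity, with no fine-tuning. The encoder, mean pooling, and threshold are illustrative assumptions and not the method presented in this talk.

```python
# Minimal sketch of a training-free, activation-space policy check. The model
# choice, mean pooling, and threshold are illustrative assumptions, not the
# approach presented in this talk.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"  # stand-in encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

def embed(texts: list[str]) -> torch.Tensor:
    """Mean-pooled, normalized hidden states for a batch of texts."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)        # mean-pool over tokens
    return torch.nn.functional.normalize(pooled, dim=-1)

# A few examples of text that violates a hypothetical internal policy.
violating_examples = [
    "Here is the customer's full card number and CVV as requested.",
    "Sure, I can quote our unreleased pricing to the prospect.",
]
violation_direction = embed(violating_examples).mean(0)

def violates_policy(candidate: str, threshold: float = 0.45) -> bool:
    """Flag outputs whose representation lies close to the violation direction."""
    score = float(embed([candidate]) @ violation_direction)
    return score >= threshold

print(violates_policy("The quarterly pricing sheet is confidential, so I can't share it."))
```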

We also address a deeper conceptual issue that emerged from our error analysis: in practice, the boundary between “policies” and “instructions” is often unclear, and treating instructions as if they were policies leads to confusing and brittle failure modes. Today, both alignment boundaries and performance or business objectives are commonly expressed using the same mechanism—rules or instructions—blurring fundamentally different concerns under a single notion of “policy.” This separation is critical: some instructions define organizational and alignment constraints, while others encode task goals and performance requirements. Conflating these concepts results in misaligned controls, as they require different enforcement strategies and, in many cases, different ownership and roles within the organization.

The goal of this talk is to provide AppSec and GRC teams with a clearer mental model for operating LLM policy compliance in production, a checklist of questions to ask about existing guardrail solutions, and a better understanding of what it actually takes to keep LLM systems compliant over time.
Speakers
avatar for Oren Rachmil

Oren Rachmil

Senior AI Researcher, Fujitsu Research of Europe

Oren Rachmil is a Senior AI Researcher at Fujitsu Research of Europe, working on the safety, evaluation, and security of large language model systems. His recent research focuses on analyzing gaps in open-source LLM vulnerability scanners, understanding evaluator reliability, and...
Friday June 26, 2026 3:30pm - 4:15pm CEST
Hall K1 (Level -2)
 