Type: Deployment and Maintenance
Thursday, June 25
 

10:30am CEST

Why Isn't the Fix in My Container? Tracking CVE Propagation Across 10,000 Projects
Thursday June 25, 2026 10:30am - 11:15am CEST
We analyzed CVE remediation patterns across 10,000 open source projects to uncover a critical problem: vulnerabilities fixed upstream often take weeks or months to reach downstream containers. This lag creates massive security exposure windows in Kubernetes environments.

In this talk, we'll present our findings showing how CVE fixes flow (or stall) across ecosystem layers, from upstream projects to package managers to base images to final containers. You'll see real metrics on remediation delays, and the compounding effect of layered dependencies.

But we won't stop at the problem. The second half focuses on practical solutions. From automated patch backporting to in-place image patching with tools like Copa. You'll learn how to build workflows that dramatically reduce MTTR, including dependency automation patterns and risk-based prioritization.
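The remediation-lag and risk-based-prioritization ideas above can be sketched in a few lines. The field names, weights, and the 30-day staleness factor below are illustrative assumptions, not the speakers' actual methodology:

```python
from datetime import date

def remediation_lag_days(upstream_fix: date, container_rebuild: date) -> int:
    """Days a CVE already fixed upstream remained unpatched in the shipped container."""
    return (container_rebuild - upstream_fix).days

def priority_score(cvss: float, internet_exposed: bool, lag_days: int) -> float:
    """Toy risk-based score: base severity, weighted by exposure and staleness."""
    exposure = 2.0 if internet_exposed else 1.0
    staleness = 1.0 + lag_days / 30.0  # each month of lag raises urgency
    return cvss * exposure * staleness

# Example: a fix merged upstream on Jan 10 that only reached the
# container image on Mar 11 sat exposed for 60 days.
lag = remediation_lag_days(date(2026, 1, 10), date(2026, 3, 11))
score = priority_score(cvss=9.8, internet_exposed=True, lag_days=lag)
```

Scoring like this lets a backlog of hundreds of findings be sorted by actual exposure rather than raw CVSS alone.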

Attendees will leave with both a data-driven understanding of the CVE remediation challenge and a practical playbook for fixing it.
Speakers
avatar for Lior Kaplan

Lior Kaplan

Open Source evangelist, Open Source Security expert, Kaplan Open Source
As a Linux sysadmin for many years, Kaplan has been focused on Open Source and security from various perspectives: upstream projects, the Linux distributions, and the DevOps/platform engineering teams who maintain the infrastructure.
Kaplan is a long-time Open Source community membe…
avatar for Mor Weinberger

Mor Weinberger

Software Architect, Echo

Mor is a Software Architect specializing in cloud-native security and software supply chain resilience. His work focuses on designing scalable systems to detect and mitigate emerging threats across modern cloud environments. Over the years, he has identified issues ranging from unsecured…
Thursday June 25, 2026 10:30am - 11:15am CEST
Hall K1 (Level -2)

11:30am CEST

Actionable Continuous SBOM Diffing
Thursday June 25, 2026 11:30am - 12:15pm CEST
SBOMs are at the forefront of modern strategies for supply chain security. However, traditional SBOM workflows leave two key problems unsolved: components that lack well-established identifiers, and the introduction of malware into the supply chain.

This presents a significant gap between the expectations of SBOM adoption and the real value it can deliver. This talk will explore the concept of applying continuous SBOM diffing as part of the CI process. Rather than analyzing an SBOM for each release as a standalone artifact, we can compute diffs and take actions based on whether something has changed from the previous component release.

This approach makes all SBOM components actionable, even those that otherwise seem meaningless. For example, if an individual file that is not part of any library appears in an SBOM, legacy approaches make it difficult to reason about that file. With continuous SBOM diffing, however, tracking changes in such components becomes meaningful and therefore actionable: if a new component file appears with an unknown origin, we can sanitize the build and conduct additional investigation into what happened.
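A minimal sketch of the diffing step described above, assuming a simplified SBOM shape with `name` and `hash` fields (real CycloneDX/SPDX documents use richer identifiers such as `bom-ref` and purl):

```python
def sbom_components(sbom: dict) -> dict:
    """Map component name -> content hash from a simplified SBOM document."""
    return {c["name"]: c.get("hash") for c in sbom.get("components", [])}

def diff_sboms(previous: dict, current: dict) -> dict:
    """Diff two consecutive SBOMs into added / removed / changed component sets."""
    prev, curr = sbom_components(previous), sbom_components(current)
    return {
        "added": sorted(set(curr) - set(prev)),
        "removed": sorted(set(prev) - set(curr)),
        "changed": sorted(n for n in set(prev) & set(curr) if prev[n] != curr[n]),
    }

old = {"components": [{"name": "libfoo", "hash": "aa"},
                      {"name": "helper.sh", "hash": "bb"}]}
new = {"components": [{"name": "libfoo", "hash": "aa"},
                      {"name": "helper.sh", "hash": "cc"},
                      {"name": "mystery.bin", "hash": "dd"}]}
delta = diff_sboms(old, new)
# a CI gate can now fail the build when delta["added"] contains
# a component of unknown origin, instead of ignoring it
```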

We will also demonstrate practical examples of how to achieve such actionable workflows using open-source tooling.
Speakers
avatar for Pavel Shukhman

Pavel Shukhman

CEO, Reliza

Pavel Shukhman is Co-Founder and CEO of Reliza, where he oversees the company's efforts in managing software and hardware releases, xBOMs, versioning and component identification. With over a decade of experience leading software teams, he has helped organizations implement DevOps…
Thursday June 25, 2026 11:30am - 12:15pm CEST
Hall K1 (Level -2)

1:15pm CEST

One IDE to Rule Them All - Securing Your Supply Chain’s Weakest Link
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Your API keys, business logic, database connections, and sometimes even customer data and user information might all be directly accessible from your IDE. This makes the IDE one of the top targets for threat actors trying to break in.

Because the IDE has direct access to so much data, your entire software supply chain is only as secure as a single extension, turning the IDE into the weakest link in the chain.

It takes only one evil extension, one vulnerability or one prompt, to compromise your entire organization. We will explore how each of these attack scenarios can turn a developer’s workspace into a gateway for threat actors to exfiltrate customer data before a single line of code is even written.

We’ll dive deep into the IDE’s architecture, starting with how IDE extensions are developed and their permission stack, then show how threat actors could manipulate extensions and IDE configurations to bypass security measures: exfiltrating valuable information from the developer’s IDE, performing lateral movement directly after infection, and staying persistent even after being removed.
It's not just about threat actors hacking your IDE - they will go after everything in the organization that’s connected to it, and they will try to stay there as long as possible.

We’ll take a look at how threat actors could leverage vulnerabilities in existing IDE extensions to execute remote code and exfiltrate information, transforming a developer's local machine into an under-the-radar backdoor into your organization. This includes our finding of multiple 0-day vulnerabilities in popular IDE extensions, and our research into weaponizing Chromium 1-day vulnerabilities on Cursor and Windsurf.

We’ll wrap up with best-practice recommendations for securing your IDE: avoiding evil extensions, adding company-wide policies for approved extensions, and showing security teams how to integrate IDE security into their organization at scale.
Speakers
avatar for Moshe Siman Tov Bustan

Moshe Siman Tov Bustan

Security Research Team Leader, OX Security

Moshe is a Security Research Team Lead at OX Security, a company specializing in software supply chain security, and has worked in the security industry for 13 years. His work spans cloud security research, container security, memory forensics, and an in-depth understanding of programming…
avatar for Nir Zadok

Nir Zadok

OX Security

Nir Zadok is a rocket scientist who got a bit bored, so he moved to cybersecurity. Since then, as a whitehat, he has managed to break dozens of mobile, web, and desktop applications. These days Nir is focused on software supply chain and innovative attack vector research via widely…
Thursday June 25, 2026 1:15pm - 2:00pm CEST
Hall K1 (Level -2)

2:15pm CEST

From 0 to SLSA Level 3: A Practitioner's Field Guide
Thursday June 25, 2026 2:15pm - 3:00pm CEST
SLSA (Supply-chain Levels for Software Artifacts) promises to secure your software supply chain—but implementing it at enterprise scale is harder than the spec suggests. This talk shares our journey to SLSA Level 3, including the architectural decisions, performance trade-offs, and customer escalations that shaped our approach.

You'll learn:
- Provenance attestation architecture for multi-tenant CI/CD pipelines
- How to integrate SLSA verification without breaking existing workflows
- Real metrics: what SLSA costs in CI minutes and what attacks it actually catches
- Common implementation pitfalls and how to avoid them
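To make the provenance-attestation bullet concrete, here is a minimal, unsigned policy check over a SLSA v1 provenance statement. The builder ID and digests are made-up examples, and a production verifier (e.g. slsa-verifier) would also validate the signing envelope rather than trusting the raw payload:

```python
import json

# Illustrative allowlist; real builder IDs come from your CI platform's docs.
TRUSTED_BUILDERS = {"https://example.com/trusted-ci-builder"}

def verify_provenance(statement_json: str, expected_sha256: str) -> bool:
    """Minimal policy check on a decoded SLSA v1 provenance statement.

    Checks only the predicate type, builder identity, and subject digest;
    signature verification of the envelope is deliberately out of scope here.
    """
    stmt = json.loads(statement_json)
    if stmt.get("predicateType") != "https://slsa.dev/provenance/v1":
        return False
    builder_id = stmt["predicate"]["runDetails"]["builder"]["id"]
    digests = {s["digest"]["sha256"] for s in stmt.get("subject", [])}
    return builder_id in TRUSTED_BUILDERS and expected_sha256 in digests

# Example statement a CI pipeline might attach to a build artifact.
statement = json.dumps({
    "predicateType": "https://slsa.dev/provenance/v1",
    "subject": [{"name": "app.tar.gz", "digest": {"sha256": "abc123"}}],
    "predicate": {"runDetails": {"builder": {"id": "https://example.com/trusted-ci-builder"}}},
})
```

Running this check at deploy time, rather than only at build time, is one way to add verification without touching existing build workflows.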

Whether you're just starting your SLSA journey or stuck at Level 2, walk away with battle-tested patterns that work at scale.
Speakers
avatar for Mark Mishaev

Mark Mishaev

Senior Engineering Manager, Software Supply Chain Security, GitLab

Senior Manager of Software Supply Chain Security at GitLab, leading 40+ engineers across Authentication, Authorization, Pipeline Security, and Compliance teams. He drives GitLab's SLSA implementation and security architecture for CI/CD pipelines serving millions of developers.
Wit…
Thursday June 25, 2026 2:15pm - 3:00pm CEST
Hall K1 (Level -2)

3:30pm CEST

Pragmatic least-privilege for cloud and Kubernetes: applying good advice to real systems
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Whichever public cloud you use, there are literally hundreds of assignable permissions — and while everyone quotes the ideal of “least privilege,” just when the deadline looms it becomes far too tempting to grant “just one more permission.” Before you know it, your developer teams and service accounts are swimming in high privileges.

In this session we’ll start from the basics of structured permission management, then go deeper — all the way to time-limited access, rule-based privileged-access workflows, and on-demand role elevation. We won’t rehash each cloud provider’s security guide; instead, we’ll deliver pragmatic, maintainable, and flexible guidelines that balance solid permission hygiene with the realities of tight deadlines.

This talk is targeted at security engineers, cloud engineers, or anyone looking for a starting point for organizing and structuring their permission approach.
Speakers
avatar for Mark Vinkovits

Mark Vinkovits

Chief Information Security Officer, XUND Solutions

Mark worked as a software, security, and privacy engineer over the past decade. Since his research in user-centered computing, he has been arguing that human behavior, beliefs, and motivations cannot be excluded from the design of any solution, including any SDLC that should be livable…
Thursday June 25, 2026 3:30pm - 4:15pm CEST
Hall K1 (Level -2)
 
Friday, June 26
 

10:30am CEST

When AI Attacks AI: Inside the Self-Propagating Botnet Built on Compromised AI Infrastructure
Friday June 26, 2026 10:30am - 11:15am CEST
ShadowRay did not disappear after disclosure.
Despite extensive public reporting and technical analysis, the campaign remains active and continues to expand in scale, with more than 230,000 exposed Ray endpoints and an order-of-magnitude increase in observed exploitation.

Enter a self-propagating botnet built from compromised machine-learning clusters, all running on Ray—the de facto execution layer of modern AI infrastructure, embedded across production training pipelines, inference services, and internal compute platforms.

This is ShadowRay 2.0.

The attackers weaponized Ray's orchestration features to spread autonomously across exposed servers, turning victims into both mining rigs and propagation nodes.

We'll walk through the concrete evidence that enabled the researchers to stop the attack in real time, uncovering billions of dollars' worth of compromised compute. This includes LLM-generated payloads evolving in real time, GPU cryptojacking, competitor-miner elimination scripts, how Ray's own APIs were weaponized for lateral movement, and more.

The talk also reveals the techniques the attackers used to evade detection, their use of CI/CD for malware distribution, and their multi-purpose capabilities beyond cryptojacking, including DDoS and data exfiltration. This is AI infrastructure turned against itself, at internet scale, with verifiable proof.
Speakers
avatar for Gal Elbaz

Gal Elbaz

Co-founder & CTO, Oligo Security

Co-founder & CTO at Oligo Security with 10+ years of experience in vulnerability research and practical hacking. He previously worked as a Security Researcher at Check Point and served in IDF Intelligence. In his free time, he enjoys playing CTFs. linkedin.com/in/gal-elb…
avatar for Avi Lumelsky

Avi Lumelsky

AI Security Researcher, Oligo Security

Avi has a relentless curiosity about business, AI, security, and the places where all three connect. An experienced software engineer and architect, Avi’s cybersecurity skills were first honed in elite Israeli intelligence units. His work focuses on privacy in the age of AI and…
Friday June 26, 2026 10:30am - 11:15am CEST
Hall K1 (Level -2)

11:30am CEST

Infrastructure Doesn’t Lie: Using Infrastructure Signals to Detect Shadow AI Built Applications
Friday June 26, 2026 11:30am - 12:15pm CEST
AI app builders now enable production apps to ship without repositories, CI/CD, or security review, often by non-traditional developers outside established engineering workflows. These Shadow AI apps bypass AppSec pipelines and governance, creating a growing blind spot in enterprise environments. This talk demonstrates how DNS, TLS, and hosting signals can detect shadow AI apps that existing controls miss.
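A toy version of the hosting-signal idea, assuming we already hold each hostname's CNAME chain from passive DNS. The builder-domain suffix list is illustrative, not a vetted detection feed:

```python
# Hypothetical suffixes of AI-app-builder hosting platforms. In practice
# this list would be curated threat-intel data, refreshed continuously.
BUILDER_HOST_SUFFIXES = (".lovable.app", ".replit.app", ".vercel.app")

def cname_hits_ai_builder(cname_chain: list[str]) -> bool:
    """True if any CNAME in the resolution chain points at a builder host,
    suggesting a corporate hostname fronts an app shipped outside normal
    engineering workflows."""
    return any(cname.rstrip(".").endswith(BUILDER_HOST_SUFFIXES)
               for cname in cname_chain)
```

Combined with TLS-certificate transparency logs and hosting ASN data, a signal like this can surface apps that never passed through a repository or CI/CD pipeline.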
Speakers
avatar for Balachandra Shanabhag

Balachandra Shanabhag

Product Security Lead, Cerebras

Bala works as a Staff Security Engineer for Cohesity. He has over 15 years of experience in various domains of cybersecurity. He joined Cohesity as Founding Product Security Engineer and helped bootstrap AppSec and other security initiatives. Before Cohesity, Bala worked at…
Friday June 26, 2026 11:30am - 12:15pm CEST
Hall K1 (Level -2)

2:15pm CEST

Marketplace Takeover: One Bug Away from Pwning 10 Million Developer Machines
Friday June 26, 2026 2:15pm - 3:00pm CEST
This is the story of a single CI bug with the potential to fully compromise more than 10 million workstations, with a complete takeover, for anyone using popular tools like Cursor and Windsurf (so every developer, really).

Learn about a critical flaw, shared by the team who first identified it, in open-vsx.org, the open-source marketplace powering nearly every VSCode fork, including Cursor, Windsurf, Gitpod, StackBlitz, and Google Cloud Shell Editor.

The vulnerability sat in the project's GitHub Actions workflow, which automatically builds and publishes extensions using a privileged service token. By triggering the workflow with a crafted dependency, an attacker could run arbitrary code during npm install, exfiltrate the marketplace's OVSX_PAT token, and use it to overwrite or republish any extension in the registry. From there, the blast radius is absolute and devastating.
Any developer using a VSCode fork that auto-updates extensions would receive malicious payloads without interaction — compromising local machines, CI/CD environments, and downstream software.

This session breaks down the exploit path, the disclosure timeline, and the architectural weaknesses that made it possible. It highlights the systemic risk of ungoverned extension ecosystems and how "app store" mechanics in developer tooling have quietly become high-value attack surfaces.

But don't panic. We'll wrap up with concrete mitigations: isolating build runners from publishing credentials, auditing workflow environments for untrusted dependency execution, and implementing continuous marketplace governance to prevent similar full-ecosystem takeovers.
Speakers
avatar for Oren Yomtov

Oren Yomtov

Principal Security Researcher, Koi Security

Oren Yomtov is a Principal Security Researcher at Koi, where he focuses on advancing research in software and blockchain security. He brings extensive experience from his work at Fireblocks, contributing to research on digital asset security and blockchain infrastructure.

Previous…
avatar for Yuval Ronen

Yuval Ronen

Security Researcher, Koi Security

Yuval Ronen leads the security research at Koi, focusing on vulnerability research, threat intelligence, and developing detection methods to strengthen defenses across modern software ecosystems. He brings over seven years of experience in both offensive and defensive cybersecurity…
Friday June 26, 2026 2:15pm - 3:00pm CEST
Hall K1 (Level -2)

3:30pm CEST

From Safety to Policy: Enforcing Organizational Rules in LLMs and AI Agents
Friday June 26, 2026 3:30pm - 4:15pm CEST
Organizations deploying GenAI systems quickly discover that safety controls do not automatically enforce organizational policies. Real environments operate under large, evolving sets of organization-specific and external policies across many domains, driven by legal requirements, industry regulations, and internal governance rules, and these change periodically. Enforcing them in production is not a one-time setup problem; it is a continuous governance and operations challenge.

Existing guardrail solutions are not designed to handle custom, large-scale, and continuously evolving organizational policies. When AI agent developers or AI security teams attempt to stretch these safety-oriented systems into general policy enforcement, their underlying design assumptions no longer hold because they assume a small, static policy space rather than a broad and heterogeneous one. Static rules such as regex become unmaintainable and produce unreliable detection at scale, fine-tuned classifiers require constant retraining, and LLM-as-a-judge pipelines, even when carefully calibrated, are expensive to run, introduce non-trivial latency and are difficult to audit.

This talk describes how we stress-tested existing compliance approaches, including static guardrails, fine-tuned detectors, and LLM-as-a-judge pipelines, and analyzed how they degrade under realistic policy complexity.
We present a reframing of the problem: instead of relying solely on output-level judgments, policy violations can also be detected directly in the model’s internal space with a training-free approach. We explain what this shift enables in practice, including continuous compliance monitoring, policy updates without retraining loops, and improved auditability. We also discuss the limitations of this advanced approach.

We also address a deeper conceptual issue that emerged from our error analysis: in practice, the boundary between “policies” and “instructions” is often unclear, and treating instructions as if they were policies leads to confusing and brittle failure modes. Today, both alignment boundaries and performance or business objectives are commonly expressed using the same mechanism—rules or instructions—blurring fundamentally different concerns under a single notion of “policy.” This separation is critical: some instructions define organizational and alignment constraints, while others encode task goals and performance requirements. Conflating these concepts results in misaligned controls, as they require different enforcement strategies and, in many cases, different ownership and roles within the organization.

The goal of this talk is to provide AppSec and GRC teams with a clearer mental model for operating LLM policy compliance in production, a checklist of questions to ask about existing guardrail solutions, and a better understanding of what it actually takes to keep LLM systems compliant over time.
Speakers
avatar for Oren Rachmil

Oren Rachmil

Senior AI Researcher, Fujitsu Research of Europe

Oren Rachmil is a Senior AI Researcher at Fujitsu Research of Europe, working on the safety, evaluation, and security of large language model systems. His recent research focuses on analyzing gaps in open-source LLM vulnerability scanners, understanding evaluator reliability, and…
Friday June 26, 2026 3:30pm - 4:15pm CEST
Hall K1 (Level -2)
 