Type: 2-Day Training
Tuesday, June 23
 

9:00am CEST

2-Day Training: Adam Shostack's Threat Modeling Intensive
Tuesday June 23, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Adam Shostack

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

This hands-on, interactive class focuses on learning to threat model by executing each of the steps. Students will start with a guided threat modeling exercise, and we'll then iterate and break down the skills they're learning in more depth. We'll progress through the Four Questions of Threat Modeling: what are we working on, what can go wrong, what are we going to do about it, and did we do a good job? The class is capped off with an end-to-end exercise that brings the skills together.
Speakers

Adam Shostack

Founder, Shostack & Associates
Adam Shostack is a leading expert on threat modeling. He has decades of experience delivering security. His experience ranges across the business world, from founding startups to nearly a decade at Microsoft. His accomplishments include: helped create the CVE; now an Emeritus member...
Tuesday June 23, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: AI SecureOps: Attacking & Defending AI Applications and Agents (Hybrid)
Tuesday June 23, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Abhinav Singh

You may attend this training course either in person or virtually.

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Can prompt injections lead to complete infrastructure takeovers? Could AI agents be exploited to compromise backend services? Can jailbreaks create false crisis alerts in security systems? In multi-agent systems, what if an attacker takes over an agent’s goals, turning other agents into coordinated threats? This immersive, CTF-styled training in AI and LLM security dives into these pressing questions. Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications & agentic systems to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for AI apps & agents, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.

By the end of this training, you will be able to:

- Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as instruction injection, agent control bypass, and remote code execution for infrastructure takeover, as well as chaining multiple agents for goal hijacking.
- Conduct AI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
- Execute and defend against adversarial attacks, including prompt injection, data poisoning, jailbreaks and agentic attacks.
- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
- Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
- Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
- Implement an incident response and risk management plan for enterprises developing or using GenAI services.
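One of the objectives above — developing an LLM security scanner for injections and jailbreaks — can be sketched as a minimal rule-based input filter. This is an illustrative assumption, not the course's actual tooling; real scanners layer classifiers, canary tokens, and allow-lists on top of patterns like these:

```python
import re

# Illustrative patterns only; production scanners combine many signals,
# not just regexes.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
]

def scan_prompt(user_input: str) -> list[str]:
    """Return the patterns that matched; empty list means clean."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(user_input)]

def is_suspicious(user_input: str) -> bool:
    return bool(scan_prompt(user_input))
```

A filter like this would typically run before the user input is ever concatenated into a model prompt, with flagged inputs routed to logging or a stricter secondary check.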
Speakers

Abhinav Singh

Cyber Security Research in AI, Cloud & Data, Midfield Security
Abhinav Singh is an esteemed cybersecurity leader and researcher with over a decade of experience working with global technology leaders, startups, financial institutions, and as an independent trainer and consultant. He is the author of the widely acclaimed "Metasploit Penetration...
Tuesday June 23, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: Repeatable, Scalable and Valuable Code Security Scanning
Tuesday June 23, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Josh Grossman

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

To learn more about this training, please visit the link here.

Suddenly anyone and everyone in your organization can use AI assistants to write code. Meanwhile, your actual developers are putting out 100x their previous output, with “varying” levels of quality. So how are you going to secure code at this scale?

This course is designed to be a deep dive into state-of-the-art techniques for validating code security within an organization’s codebase. The course has a strong emphasis on how AI-driven analysis can drive this forward whilst also clearly highlighting where standard, deterministic techniques (albeit incorporating AI acceleration) will be more effective.

During the course, you will learn how to combine these techniques, in a scalable and repeatable way, based on our experience doing just this with real organizations and real teams and with a focus on the current state of the art in this fast-moving area.

This course goes beyond the scope of standard application security knowledge and is designed to make you a specialist in this area. Having spent several years perfecting this process, we are excited to impart the lessons we have learnt!

The course is structured as follows:

* Overview – Setting out the basic details of what we will be talking about in terms of code scanning and SAST.
* Key techniques – Discussing the different techniques which can be used for this, including generic “off the shelf” SAST, deterministic custom scanning rules, and LLM-powered custom AI prompts.
* Technique comparison – Advantages and disadvantages of each technique based on our in-depth experience with each, and which technique you will want to use in different situations, to avoid wasting time trying to use a technique in an inappropriate use case.
* Organizational process – How to get these processes built into an organization’s existing software lifecycle.
* Generic SAST – Using “off the shelf” rules effectively to catch “low hanging fruit” and avoid reinventing the wheel.
* Custom SAST – Introducing custom rule languages (e.g., Semgrep, CodeQL), writing rules from scratch, and scaling analysis across a codebase.
* Basic AI Code Security Scanning – Overview of AI-based scanning, platforms, principles, and initial single-shot prompts.
* Complex AI Code Security Scanning – AI-driven techniques for code security, including using AI to review and triage findings and creating multi-stage rules that combine deterministic rules with AI analysis.
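The multi-stage idea in the last bullet can be sketched as a cheap deterministic pre-filter whose hits are passed to an AI triage step. Everything below is a hedged illustration, not the course's actual tooling: the single regex stands in for a real Semgrep/CodeQL rule, and the triage function is a stub standing in for an LLM call.

```python
import re

# Stage 1: deterministic rule (Semgrep-style in spirit) — flag
# dynamic code-execution sinks in Python source.
SINK = re.compile(r"\b(eval|exec)\s*\(")

def deterministic_findings(source: str) -> list[int]:
    """Return 1-based line numbers containing a dynamic-exec sink."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if SINK.search(line)]

def ai_triage(line: str) -> str:
    """Placeholder for an LLM call that classifies a finding.
    Toy heuristic: a literal string argument is likely benign,
    anything else needs human review."""
    return "likely-benign" if re.search(r"\(\s*['\"]", line) else "needs-review"

def scan(source: str) -> list[tuple[int, str]]:
    """Stage 2: triage only the lines the cheap rule flagged."""
    lines = source.splitlines()
    return [(n, ai_triage(lines[n - 1])) for n in deterministic_findings(source)]
```

The design point is the ordering: the deterministic stage keeps the expensive (AI) stage off the vast majority of lines, which is what makes the combined pipeline scale.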
Speakers

Josh Grossman

CTO, Bounce Security
Josh Grossman has worked for 15 years as a consultant in IT and application security and risk, as well as a software developer. This has given him an in-depth understanding of how to manage the balance between business needs, developer needs, and security needs, which goes into...
Tuesday June 23, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: The Mobile Playbook - A guide for iOS and Android App Security (Hybrid)
Tuesday June 23, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Sven Schleier

You may attend this training course in person or virtually.

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

This two-day, hands-on course is designed to teach penetration testers, developers, and engineers how to analyse Android and iOS applications for security vulnerabilities. The course covers the different phases of testing, including dynamic testing, static analysis, reverse engineering, and software composition analysis (SCA). We will also explore how you can use the Model Context Protocol (MCP) to automate some of these workflows and leverage its strengths.

The course is based on the OWASP Mobile Application Security Testing Guide (MASTG) and taught by one of the project co-leaders. This comprehensive, open-source mobile security testing book covers both iOS and Android, providing a methodology and detailed technical test cases to ensure completeness, and incorporating the latest attack techniques against mobile applications. This course provides hands-on experience with open-source tools and advanced methodologies, guiding you through real-world scenarios.

Detailed outline

On the first day, we will start with an introduction to the OWASP MASVS and MASTG projects, including the latest updates. Then, we will dive into the Android platform and its security architecture. Students will no longer be required to bring their own Android device; instead, each student will be provided with a cloud-based, virtualised Android device from Corellium.

Topics include:

- Intercepting network traffic of an Android App in various scenarios, including intercepting traffic that is not HTTP.
- Scanning for secrets in an APK.
- Reverse engineering a Kotlin app and identifying and exploiting a real-world deep link vulnerability through manual source code review.
- Static Scanning of decompiled Kotlin source code by using MCP workflows with semgrep and radare2, identifying vulnerabilities and eliminating false positives.
- Frida crash course to get started with dynamic instrumentation on Android apps by using MCP workflows.
- Using dynamic instrumentation with Frida to bypass client-side security controls such as root detection mechanisms.
- We will close day 1 with a Capture the Flag (CTF), attacking several apps, including a real-world app and overcoming its protection mechanisms.
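The "scanning for secrets in an APK" topic above can be approximated, independent of any particular tool, as a pattern sweep over the files inside the archive (an APK is a zip). This is a simplified sketch; real scanners ship hundreds of patterns plus entropy checks and provider-specific key validators:

```python
import re
import zipfile

# Two well-known key formats, for illustration only.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(rb"AKIA[0-9A-Z]{16}"),
    "google_api_key": re.compile(rb"AIza[0-9A-Za-z\-_]{35}"),
}

def scan_apk(path: str) -> list[tuple[str, str]]:
    """Scan every member of an APK for secret-looking strings.
    Returns (archive member, pattern name) pairs."""
    hits = []
    with zipfile.ZipFile(path) as apk:
        for name in apk.namelist():
            data = apk.read(name)
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(data):
                    hits.append((name, label))
    return hits
```

Scanning raw bytes rather than decoded text keeps the sweep working on binary members such as `classes.dex`, where hardcoded keys often hide.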

Day 2 focuses on iOS. We will begin the day by exploring the OWASP MASWE, creating an iOS test environment using Corellium, and diving into several topics, including:

- Introduction to iOS security fundamentals.
- Intercepting network traffic of an iOS app in various scenarios, including intercepting traffic from apps written in mobile app frameworks such as Google's Flutter.
- How to retrieve an IPA, execute static scanning of it, identify vulnerabilities, and eliminate false positives.
- Software Composition Analysis (SCA) for iOS by using SBOMs and scanning third-party libraries and SDKs in mobile package managers for known vulnerabilities and planning mitigation strategies.
- Frida crash course to get started with dynamic instrumentation for iOS applications, utilising MCP workflows.
- Testing methodology with a non-jailbroken (jailed) device by repackaging an IPA with the Frida gadget.
- Analysing the storage of an iOS app and understanding the various options for how (files, databases, logs, etc.) and where data can be stored.
- Using Frida to bypass runtime protections of iOS applications, such as anti-jailbreak mechanisms.

We'll wrap up the final day with a CTF and participants can win a prize!

Whether you are a beginner who wants to learn mobile app testing from the ground up, or an experienced pentester, developer, or engineer who wants to improve your existing skills and perform more advanced attack techniques, this training will help you achieve your goals.

The course consists of many different hands-on labs, developed by the instructor or using real-world apps that are part of bug bounty platforms.

Upon successfully completing this course, students will have a better understanding of how to test for vulnerabilities in mobile applications, how to recommend appropriate mitigation techniques to developers and how to perform consistent and efficient testing using MCP (Model Context Protocol) workflows.
Speakers

Sven Schleier

Co-Founder, Bai7 GmbH
Sven is a co-founder of Bai7 GmbH in Austria, which specializes in training and advisory. He has expertise in cloud security, offensive security engagements (penetration testing), and application security, notably in guiding software development teams across mobile and web applications...
Tuesday June 23, 2026 9:00am - 5:00pm CEST
 
Wednesday, June 24
 

9:00am CEST

2-Day Training: Adam Shostack's Threat Modeling Intensive
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Adam Shostack

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

This hands-on, interactive class focuses on learning to threat model by executing each of the steps. Students will start with a guided threat modeling exercise, and we'll then iterate and break down the skills they're learning in more depth. We'll progress through the Four Questions of Threat Modeling: what are we working on, what can go wrong, what are we going to do about it, and did we do a good job? The class is capped off with an end-to-end exercise that brings the skills together.
Speakers

Adam Shostack

Founder, Shostack & Associates
Adam Shostack is a leading expert on threat modeling. He has decades of experience delivering security. His experience ranges across the business world, from founding startups to nearly a decade at Microsoft. His accomplishments include: helped create the CVE; now an Emeritus member...
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: AI SecureOps: Attacking & Defending AI Applications and Agents (Hybrid)
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Abhinav Singh

You may attend this training course in person or virtually.

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Can prompt injections lead to complete infrastructure takeovers? Could AI agents be exploited to compromise backend services? Can jailbreaks create false crisis alerts in security systems? In multi-agent systems, what if an attacker takes over an agent’s goals, turning other agents into coordinated threats? This immersive, CTF-styled training in AI and LLM security dives into these pressing questions. Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications & agentic systems to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for AI apps & agents, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.

By the end of this training, you will be able to:

- Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as instruction injection, agent control bypass, and remote code execution for infrastructure takeover, as well as chaining multiple agents for goal hijacking.
- Conduct AI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
- Execute and defend against adversarial attacks, including prompt injection, data poisoning, jailbreaks and agentic attacks.
- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
- Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
- Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
- Implement an incident response and risk management plan for enterprises developing or using GenAI services.
Speakers

Abhinav Singh

Cyber Security Research in AI, Cloud & Data, Midfield Security
Abhinav Singh is an esteemed cybersecurity leader and researcher with over a decade of experience working with global technology leaders, startups, financial institutions, and as an independent trainer and consultant. He is the author of the widely acclaimed "Metasploit Penetration...
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: Repeatable, Scalable and Valuable Code Security Scanning
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Josh Grossman

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

To learn more about this training, please visit the link here.

Suddenly anyone and everyone in your organization can use AI assistants to write code. Meanwhile, your actual developers are putting out 100x their previous output, with “varying” levels of quality. So how are you going to secure code at this scale?

This course is designed to be a deep dive into state-of-the-art techniques for validating code security within an organization’s codebase. The course has a strong emphasis on how AI-driven analysis can drive this forward whilst also clearly highlighting where standard, deterministic techniques (albeit incorporating AI acceleration) will be more effective.

During the course, you will learn how to combine these techniques, in a scalable and repeatable way, based on our experience doing just this with real organizations and real teams and with a focus on the current state of the art in this fast-moving area.

This course goes beyond the scope of standard application security knowledge and is designed to make you a specialist in this area. Having spent several years perfecting this process, we are excited to impart the lessons we have learnt!

The course is structured as follows:

* Overview – Setting out the basic details of what we will be talking about in terms of code scanning and SAST.
* Key techniques – Discussing the different techniques which can be used for this, including generic “off the shelf” SAST, deterministic custom scanning rules, and LLM-powered custom AI prompts.
* Technique comparison – Advantages and disadvantages of each technique based on our in-depth experience with each, and which technique you will want to use in different situations, to avoid wasting time trying to use a technique in an inappropriate use case.
* Organizational process – How to get these processes built into an organization’s existing software lifecycle.
* Generic SAST – Using “off the shelf” rules effectively to catch “low hanging fruit” and avoid reinventing the wheel.
* Custom SAST – Introducing custom rule languages (e.g., Semgrep, CodeQL), writing rules from scratch, and scaling analysis across a codebase.
* Basic AI Code Security Scanning – Overview of AI-based scanning, platforms, principles, and initial single-shot prompts.
* Complex AI Code Security Scanning – AI-driven techniques for code security, including using AI to review and triage findings and creating multi-stage rules that combine deterministic rules with AI analysis.
Speakers

Josh Grossman

CTO, Bounce Security
Josh Grossman has worked for 15 years as a consultant in IT and application security and risk, as well as a software developer. This has given him an in-depth understanding of how to manage the balance between business needs, developer needs, and security needs, which goes into...
Wednesday June 24, 2026 9:00am - 5:00pm CEST

9:00am CEST

2-Day Training: The Mobile Playbook - A guide for iOS and Android App Security (Hybrid)
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Sven Schleier

You may attend this training course in person or virtually.

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

This two-day, hands-on course is designed to teach penetration testers, developers, and engineers how to analyse Android and iOS applications for security vulnerabilities. The course covers the different phases of testing, including dynamic testing, static analysis, reverse engineering, and software composition analysis (SCA). We will also explore how you can use the Model Context Protocol (MCP) to automate some of these workflows and leverage its strengths.

The course is based on the OWASP Mobile Application Security Testing Guide (MASTG) and taught by one of the project co-leaders. This comprehensive, open-source mobile security testing book covers both iOS and Android, providing a methodology and detailed technical test cases to ensure completeness, and incorporating the latest attack techniques against mobile applications. This course provides hands-on experience with open-source tools and advanced methodologies, guiding you through real-world scenarios.

Detailed outline

On the first day, we will start with an introduction to the OWASP MASVS and MASTG projects, including the latest updates. Then, we will dive into the Android platform and its security architecture. Students will no longer be required to bring their own Android device; instead, each student will be provided with a cloud-based, virtualised Android device from Corellium.

Topics include:

- Intercepting network traffic of an Android App in various scenarios, including intercepting traffic that is not HTTP.
- Scanning for secrets in an APK.
- Reverse engineering a Kotlin app and identifying and exploiting a real-world deep link vulnerability through manual source code review.
- Static Scanning of decompiled Kotlin source code by using MCP workflows with semgrep and radare2, identifying vulnerabilities and eliminating false positives.
- Frida crash course to get started with dynamic instrumentation on Android apps by using MCP workflows.
- Using dynamic instrumentation with Frida to bypass client-side security controls such as root detection mechanisms.
- We will close day 1 with a Capture the Flag (CTF), attacking several apps, including a real-world app and overcoming its protection mechanisms.

Day 2 focuses on iOS. We will begin the day by exploring the OWASP MASWE, creating an iOS test environment using Corellium, and diving into several topics, including:

- Introduction to iOS security fundamentals.
- Intercepting network traffic of an iOS app in various scenarios, including intercepting traffic from apps written in mobile app frameworks such as Google's Flutter.
- How to retrieve an IPA, execute static scanning of it, identify vulnerabilities, and eliminate false positives.
- Software Composition Analysis (SCA) for iOS by using SBOMs and scanning third-party libraries and SDKs in mobile package managers for known vulnerabilities and planning mitigation strategies.
- Frida crash course to get started with dynamic instrumentation for iOS applications, utilising MCP workflows.
- Testing methodology with a non-jailbroken (jailed) device by repackaging an IPA with the Frida gadget.
- Analysing the storage of an iOS app and understanding the various options for how (files, databases, logs, etc.) and where data can be stored.
- Using Frida to bypass runtime protections of iOS applications, such as anti-jailbreak mechanisms.

We'll wrap up the final day with a CTF and participants can win a prize!

Whether you are a beginner who wants to learn mobile app testing from the ground up, or an experienced pentester, developer, or engineer who wants to improve your existing skills and perform more advanced attack techniques, this training will help you achieve your goals.

The course consists of many different hands-on labs, developed by the instructor or using real-world apps that are part of bug bounty platforms.

Upon successfully completing this course, students will have a better understanding of how to test for vulnerabilities in mobile applications, how to recommend appropriate mitigation techniques to developers and how to perform consistent and efficient testing using MCP (Model Context Protocol) workflows.
Speakers

Sven Schleier

Co-Founder, Bai7 GmbH
Sven is a co-founder of Bai7 GmbH in Austria, which specializes in training and advisory. He has expertise in cloud security, offensive security engagements (penetration testing), and application security, notably in guiding software development teams across mobile and web applications...
Wednesday June 24, 2026 9:00am - 5:00pm CEST
 