Wednesday, June 24
 

9:00am CEST

1-Day Training: API Security: Hands-On Secure API Design & Hardening
Wednesday June 24, 2026 9:00am - 5:00pm CEST

1-Day Training: June 24, 2026
Level: Intermediate
Trainer: Tanya Janca


To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

APIs are the backbone of modern applications—but they also introduce unique security risks. In this hands-on training, participants will deep-dive into API security threats using a "Bad, Better, Best" approach.

• Review real-world insecure APIs and step through progressive security improvements
• Work hands-on with the OWASP DevSlop Pixi intentionally vulnerable API, the 42Crunch IDE plugin, and Semgrep to find, fix, and prevent API vulnerabilities
• Master the OWASP API Security Top Ten through guided code reviews and hands-on exercises
• Learn best practices for API security hardening, authentication, and monitoring

By the end of this session, participants will have the skills and tools to secure APIs with confidence using industry best practices.
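As a flavour of the "Bad, Better, Best" progression, here is a minimal illustrative sketch (not taken from the course materials; all names are hypothetical) of fixing a broken object level authorization flaw, the kind of issue covered by the OWASP API Security Top Ten:

```python
# Illustrative sketch only: the "Bad, Better" progression applied to
# Broken Object Level Authorization. All names here are hypothetical.

INVOICES = {
    "inv-1": {"owner": "alice", "total": 120},
    "inv-2": {"owner": "bob", "total": 9000},
}

def get_invoice_bad(user: str, invoice_id: str) -> dict:
    # Bad: trusts the client-supplied ID, so any authenticated user
    # can read any invoice simply by guessing IDs.
    return INVOICES[invoice_id]

def get_invoice_better(user: str, invoice_id: str) -> dict:
    # Better: object-level authorization. The record must belong to
    # the requesting user, or the API refuses with a generic error.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != user:
        raise PermissionError("not found")  # don't leak which case it was
    return invoice
```

The "Best" step in a real API would add centralized policy checks, rate limiting, and logging on top of this per-object check.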
Speakers

Tanya Janca

Security Trainer and Founder, She Hacks Purple & DevSec Station
Tanya Janca, known online as SheHacksPurple, is the best-selling author of Alice and Bob Learn Secure Coding and Alice and Bob Learn Application Security. She is the founder of DevSec Station, a modern learning platform and community built to help software developers master secure... Read More →

9:00am CEST

1-Day Training: How to build a Successful Security Champions Program
Wednesday June 24, 2026 9:00am - 5:00pm CEST
1-Day Training: June 24, 2026
Level: Intermediate
Trainer: Juliane Reimann & Marisa Fagan


To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Do you feel a disconnect between your cybersecurity efforts and engineering activities? If so, a Security Champions Program could bridge the gap. By involving engineers in security topics that align with their work, a Security Champions program not only enhances security awareness but also fosters a culture of security across your organization. However, creating such a program requires careful planning, innovative strategies, and a solid understanding of what drives individuals to champion security initiatives.

This training will equip you with practical tools and actionable insights to design and launch a successful Security Champions Program. You’ll explore key concepts, including how to:
- Develop a foundational understanding of what a Security Champions Program is
- Plan and navigate the phases of program development, from launch to long-term growth
- Engage and motivate diverse personality types within the organization
- Apply practical tools and a structured approach to establish a scalable and trackable Security Champions Program

Whether you’re a security engineer, architect, or manager, this training will provide you with the tools and frameworks to collaborate effectively with your engineering teams and establish a thriving Security Champions Program.

The session is highly interactive, featuring hands-on exercises and team-based activities to encourage collaboration and networking with fellow professionals. Join us to gain the confidence and strategies you need to kickstart your journey toward a more secure organization.
Speakers

Juliane Reimann

Founder and Security Community Expert, Full Circle Security
Juliane Reimann has worked as a cybersecurity consultant for large companies since 2019, with a focus on DevSecOps and community building. Her expertise includes building security communities of software developers and establishing developer-centric communication about secure software development... Read More →

Marisa Fagan

Managing Consultant, Katilyst
Marisa Fagan is a managing consultant at Katilyst and has 16 years of experience building security champion communities. She's dedicated her career to building security into the SDLC and empowering developers to own secure code. Marisa shares practical insights into what actually works... Read More →

9:00am CEST

1-Day Training: Master AI Security (Hybrid)
Wednesday June 24, 2026 9:00am - 5:00pm CEST
1-Day Training: June 24, 2026
Level: Intermediate
Trainer: Rob van der Veer

You may attend this training course either in person or virtually.

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

The record-breaking Master AI Security training is back! It broke the OWASP record both online and on-site.

Your trainer is Rob van der Veer, Chief AI Officer at Software Improvement Group, with 33 years of AI experience: founder of the OWASP AI Exchange, co-editor of the AI Act security standard, member of the ISO/IEC 27090 working group on AI security, co-founder of OpenCRE, and main author of ISO 5338 on AI engineering.

Master AI security is a unique opportunity to become proficient in the intricate and rapidly evolving field of AI security.

The disruption by AI presents a significant challenge, regardless of whether you are a security professional, a developer, AI engineer, or a red teamer. What are your responsibilities? What constitutes the new AI attack surface, and what threats emerge from it? What measures can you take to mitigate these emerging risks?

This one-day intensive training program will equip you with the knowledge to tackle these AI-related challenges effectively, enabling you to apply what you learn immediately. Starting with a pragmatic overview of AI, the course then delivers an exhaustive exploration of the distinctive vulnerabilities AI introduces, the possible attack vectors, and the most current strategies to counteract threats like prompt injection, data poisoning, model theft, evasion, and more. Through practical exercises, you will gain hands-on experience in enacting strong security measures, attacking AI systems, conducting threat modelling on AI, and targeted vulnerability assessments for AI applications.

By day's end, you will possess a thorough comprehension of the core principles and techniques critical to strengthening AI systems. You will have gained practical insights and the confidence to implement cutting-edge AI security measures.

A key resource that is used in the training is the OWASP AI Exchange - the flagship project located at owaspai.org - which forms the foundation of ISO standard 27090 and the security standard of the AI Act.

The training is designed for attendees of all levels, as the material is new, drawn from the cutting edge of research and standardization. No in-depth security or AI knowledge is required, although some experience with either AI or security is helpful.

Attendees will receive handout slides, and afterwards they can retrieve the unique Master AI Security certificate.

Some testimonials of previous runs:
  • Stephan Cohen – BNP Paribas: "This training has significantly enhanced my understanding of both the challenges and controls in securing AI. Looking forward to applying these insights in my work. Thank you Rob for this course."
  • Ramesh Krishnasaga – British Petroleum: "The training was enlightening. This experience went beyond just training—it provided a strategic roadmap for securing AI applications in practical scenarios."
  • Jedidiah Y – S&P Global: "A timely and essential training. The session was truly eye-opening! As a data scientist, I've always focused on building and optimizing models—accuracy, performance, and deployment. But this training completely shifted my perspective on the importance of security in AI systems."

Speakers

Rob van der Veer

Chief AI Officer, Software Improvement Group
Rob van der Veer is an AI pioneer with 33 years of AI experience, specializing in engineering, security and privacy. He is the lead author of the ISO/IEC 5338 standard on AI lifecycle, contributor to OWASP SAMM, co-founder of OWASP's digital bridge for security standards OpenCRE... Read More →

9:00am CEST

1-Day Training: Secure-by-Design AI Applications: Identifying, Testing, and Validating AI-Specific Threats Before Deployment
Wednesday June 24, 2026 9:00am - 5:00pm CEST
1-Day Training: June 24, 2026
Level: Intermediate
Trainer: Marco Morana

**Threat Modeling book (85 euro value) free to the first 10 registrants**

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

As organizations deploy LLMs, chatbots, RAG pipelines, and autonomous AI agents, new attack surfaces emerge that traditional application threat modeling cannot fully capture. This one-day course provides a practical, hands-on introduction to threat modeling AI applications, grounded in the OWASP AI Testing Guide, OWASP AI Exchange, NIST AI RMF, and Secure AI Framework (SAIF).

Participants learn how AI reshapes attack surfaces at the data, model, pipeline, and API layers, and how adversarial risks such as prompt injection, model theft, data poisoning, membership inference, and supply-chain compromise can be identified early and validated before deployment.

Through structured modeling exercises, ATLAS Navigator demos, AI SBOM analysis, attack-flow mapping, and secure-by-design patterns, learners translate AI threat models into actionable test cases aligned to OWASP AITG Test IDs and MITRE ATLAS. The course concludes with an end-to-end capstone where participants model and test a real-world LLM or RAG pipeline.

By the end of the workshop, participants will be able to identify, model, test, and validate AI-specific threats, embed AI testing into DevSecOps workflows, and operationalize AI threat modeling as a repeatable, testable practice for QA, security, and incident response.
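The "threat model into actionable test cases" step can be pictured with a small, purely hypothetical sketch; the IDs and field names below are invented placeholders, not real OWASP AITG test identifiers:

```python
# Hypothetical sketch of translating modeled AI threats into one-line
# actionable test cases. IDs, fields, and names are invented placeholders.
from dataclasses import dataclass

@dataclass
class Threat:
    component: str   # data, model, pipeline, or API layer
    name: str
    test_id: str     # placeholder mapping to a testing-guide entry

def to_test_cases(threats: list[Threat]) -> list[str]:
    """Render each modeled threat as a one-line actionable test case."""
    return [
        f"[{t.test_id}] Verify {t.component} layer resists {t.name}"
        for t in threats
    ]

threats = [
    Threat("API", "prompt injection", "TM-001"),
    Threat("data", "poisoning of the RAG corpus", "TM-002"),
]
```

In the workshop's terms, each rendered line would then be mapped to a concrete test in the DevSecOps pipeline.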
Speakers

Marco Morana

Field CISO- Head of Application & Product Security Architecture, Avocado Systems Inc.
Marco Morana is the Field CISO at Avocado Systems Inc., specializing in threat modeling automation and Zero Trust Architecture for financial services. With over 15 years of leadership experience, he has held senior security roles at JP Morgan Chase and Citi, securing financial applications... Read More →

9:00am CEST

2-Day Training: Adam Shostack's Threat Modeling Intensive
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Adam Shostack

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

This hands-on, interactive class will focus on learning to threat model by executing each of the steps. Students will start with a guided threat modeling exercise, and we'll then iterate and break down the skills they're learning in more depth. We'll progress through the Four Questions of Threat Modeling: what are we working on, what can go wrong, what are we going to do about it, and did we do a good job? This is capped off with an end-to-end exercise that brings the skills together.
Speakers

Adam Shostack

Founder, Shostack & Associates
Adam Shostack is a leading expert on threat modeling. He has decades of experience delivering security. His experience ranges across the business world from founding startups to nearly a decade at Microsoft. His accomplishments include:  Helped create the CVE. Now an Emeritus member... Read More →

9:00am CEST

2-Day Training: AI SecureOps: Attacking & Defending AI Applications and Agents (Hybrid)
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Abhinav Singh

You may attend this training course in person or virtually.

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Can prompt injections lead to complete infrastructure takeovers? Could AI agents be exploited to compromise backend services? Can jailbreaks create false crisis alerts in security systems? In multi-agent systems, what if an attacker takes over an agent’s goals, turning other agents into coordinated threats? This immersive, CTF-styled training in AI and LLM security dives into these pressing questions. Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications & agentic systems to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for AI apps & agents, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.

By the end of this training, you will be able to:

- Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as instruction injection, agent control bypass, remote code execution for infrastructure takeover as well as chaining multiple agents for goal hijacking.
- Conduct AI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
- Execute and defend against adversarial attacks, including prompt injection, data poisoning, jailbreaks and agentic attacks.
- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
- Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
- Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
- Implement an incident response and risk management plan for enterprises developing or using GenAI services.
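As a taste of the guardrail-building objective, here is a deliberately minimal, hypothetical sketch of a deterministic input pre-filter; pipelines of the kind described above layer model-based classifiers and output checks on top of such heuristics, which on their own are easy to evade:

```python
# Hypothetical minimal sketch of a deterministic input guardrail, the
# kind of first layer an LLM security pipeline might chain before a
# model call. Patterns and names here are illustrative only.
import re

SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now .* (unrestricted|jailbroken)",
    r"reveal (your )?system prompt",
]

def screen_prompt(user_input: str) -> tuple[bool, str]:
    """Return (allowed, reason). Deterministic pre-filter only."""
    lowered = user_input.lower()
    for pattern in SUSPECT_PATTERNS:
        if re.search(pattern, lowered):
            return False, f"matched injection heuristic: {pattern}"
    return True, "passed deterministic screen"
```

A blocked prompt would be logged and routed to the incident-response side of the pipeline rather than silently dropped.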
Speakers

Abhinav Singh

Cyber Security Research in AI, Cloud & Data, Midfield Security
Abhinav Singh is an esteemed cybersecurity leader and researcher with over a decade of experience working with global technology leaders, startups, financial institutions, and as an independent trainer and consultant. He is the author of the widely acclaimed "Metasploit Penetration... Read More →

9:00am CEST

2-Day Training: Repeatable, Scalable and Valuable Code Security Scanning
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Josh Grossman

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

To learn more about this training, please visit the link here.

Suddenly anyone and everyone in your organization can use AI assistants to write code. Meanwhile, your actual developers are putting out 100x their previous output, with “varying” levels of quality. So how are you going to secure code at this scale?

This course is designed to be a deep dive into state-of-the-art techniques for validating code security within an organization’s codebase. The course has a strong emphasis on how AI-driven analysis can drive this forward whilst also clearly highlighting where standard, deterministic techniques (albeit incorporating AI acceleration) will be more effective.

During the course, you will learn how to combine these techniques, in a scalable and repeatable way, based on our experience doing just this with real organizations and real teams and with a focus on the current state of the art in this fast-moving area.

This course goes beyond the scope of standard application security knowledge and is designed to make you a specialist in this area. Having spent several years perfecting this process, we are excited to impart the lessons we have learnt!

The course is structured as follows:

* Overview – Setting out the basic details of what we will be talking about in terms of code scanning and SAST.
* Key techniques – Discussing the different techniques which can be used for this, including generic “off the shelf” SAST, deterministic custom scanning rules, and LLM-powered custom AI prompts.
* Technique comparison – Advantages and disadvantages of each technique, based on our in-depth experience with each, and which technique you will want to use in different situations to avoid wasting time on an inappropriate use case.
* Organizational process – How to build these processes into an organization’s existing software lifecycle.
* Generic SAST – Using “off the shelf” rules effectively to catch low-hanging fruit and avoid reinventing the wheel.
* Custom SAST – Introducing custom rule languages (e.g., Semgrep, CodeQL), writing rules from scratch, and scaling analysis across a codebase.
* Basic AI Code Security Scanning – Overview of AI-based scanning: platforms, principles, and initial single-shot prompts.
* Complex AI Code Security Scanning – AI-driven techniques for code security, including using AI to review and triage findings and creating multi-stage rules that combine deterministic rules with AI analysis.
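The deterministic custom-rule idea can be sketched in a few lines; this stdlib-only toy is not Semgrep or CodeQL, but it shows the underlying principle those engines build on: matching on the parsed AST rather than on raw text, so formatting cannot hide a finding.

```python
# Toy deterministic scanning rule (illustrative only, not a real SAST
# engine): flag calls to dangerous builtins by walking Python's AST.
import ast

BANNED_CALLS = {"eval", "exec"}

def find_banned_calls(source: str) -> list[tuple[int, str]]:
    """Return (line, name) for every call to a banned builtin."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in BANNED_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings
```

Because the rule matches AST nodes, `y = eval( '2+2' )` and `y=eval("2+2")` produce the same finding, which is exactly the repeatability property the course outline is after.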
Speakers

Josh Grossman

CTO, Bounce Security
Josh Grossman has worked as a consultant in IT and Application Security and Risk for 15 years now, as well as a Software Developer. This has given him an in-depth understanding of how to manage the balance between business needs, developer needs and security needs which goes into... Read More →

9:00am CEST

2-Day Training: The Mobile Playbook - A guide for iOS and Android App Security (Hybrid)
Wednesday June 24, 2026 9:00am - 5:00pm CEST
2-Day Training: June 23-24, 2026
Level: Intermediate
Trainer: Sven Schleier

You may attend this training course in person or virtually.

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

This two-day, hands-on course is designed to teach penetration testers, developers, and engineers how to analyse Android and iOS applications for security vulnerabilities. The course covers the different phases of testing, including dynamic testing, static analysis, reverse engineering, and software composition analysis (SCA). We will also explore how you can use the Model Context Protocol (MCP) to automate some of these workflows and leverage its strengths.

The course is based on the OWASP Mobile Application Security Testing Guide (MASTG) and taught by one of the project co-leaders. This comprehensive, open-source mobile security testing book covers both iOS and Android, providing a methodology and detailed technical test cases that ensure completeness and utilize the latest attack techniques against mobile applications. This course provides hands-on experience with open-source tools and advanced methodologies, guiding you through real-world scenarios.

Detailed outline

On the first day, we will start with an introduction to the OWASP MASVS and MASTG projects, including the latest updates. Then, we will dive into the Android platform and its security architecture. Students will no longer be required to bring their own Android device; instead, each student will be provided with a cloud-based, virtualised Android device from Corellium.

Topics include:

- Intercepting network traffic of an Android App in various scenarios, including intercepting traffic that is not HTTP.
- Scanning for secrets in an APK.
- Reverse engineering a Kotlin app and identifying and exploiting a real-world deep link vulnerability through manual source code review.
- Static Scanning of decompiled Kotlin source code by using MCP workflows with semgrep and radare2, identifying vulnerabilities and eliminating false positives.
- Frida crash course to get started with dynamic instrumentation on Android apps by using MCP workflows.
- Use dynamic instrumentation with Frida to bypass client-side security controls such as root detection mechanisms.
- We will close day 1 with a Capture the Flag (CTF), attacking several apps, including a real-world app, and overcoming its protection mechanisms.

Day 2 focuses on iOS. We will begin the day by exploring the OWASP MASWE and creating an iOS test environment using Corellium and dive into several topics, including:

- Introduction into iOS Security fundamentals
- Intercepting network traffic of an iOS App in various scenarios, including intercepting traffic from apps written in mobile app frameworks such as Google's Flutter.
- How to retrieve an IPA, perform static scanning of it, identify vulnerabilities, and eliminate false positives.
- Software Composition Analysis (SCA) for iOS by using SBOMs and scanning third-party libraries and SDKs in mobile package managers for known vulnerabilities and planning mitigation strategies.
- Frida crash course to get started with dynamic instrumentation for iOS applications, utilising MCP workflows.
- Testing methodology with a non-jailbroken (jailed) device by repackaging an IPA with the Frida gadget.
- Analyse the storage of an iOS app and understand the various ways (files, databases, logs, etc.) and locations in which data can be stored.
- Using Frida to bypass runtime protection mechanisms in iOS applications, such as anti-jailbreak checks.

We'll wrap up the final day with a CTF and participants can win a prize!

Whether you are a beginner who wants to learn mobile app testing from the ground up, or an experienced pentester, developer, or engineer who wants to improve your existing skills and perform more advanced attack techniques, this training will help you achieve your goals.

The course consists of many different hands-on labs developed by the instructor or using real world apps that are part of bug bounty platforms.

Upon successfully completing this course, students will have a better understanding of how to test for vulnerabilities in mobile applications, how to recommend appropriate mitigation techniques to developers and how to perform consistent and efficient testing using MCP (Model Context Protocol) workflows.
Speakers

Sven Schleier

Co-Founder, Bai7 GmbH
Sven is a co-founder of Bai7 GmbH in Austria, which specialises in training and advisory services. He has expertise in cloud security, offensive security engagements (Penetration Testing) and Application Security, notably in guiding software development teams across Mobile and Web Applications... Read More →

9:00am CEST

3-Day Training: Full-Stack Pentesting Laboratory: 100% Hands-On + Lifetime LAB Access
Wednesday June 24, 2026 9:00am - 5:00pm CEST
3-Day Training: June 22-24, 2026
Level: Intermediate
Trainer: Dawid Czagan


To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Modern IT systems are increasingly complex, making full-stack expertise more essential than ever. That's why diving into full-stack pentesting is crucial—you will gain the skills needed to master modern attack vectors and implement effective defensive countermeasures.

For each attack, vulnerability and technique presented in this training, there is a lab exercise to help you develop your skills step by step. What's more, when the training is over, you can take the complete lab environment home to hack again at your own pace.

I have found security bugs in many companies, including Google, Yahoo, Mozilla, and Twitter, and in this training I'll share my experience with you.

Key Learning Objectives
After completing this training, you will have learned about:

- Hacking cloud applications
- API hacking tips & tricks
- Data exfiltration techniques
- OSINT asset discovery tools
- Tricky user impersonation
- Bypassing protection mechanisms
- CLI hacking scripts
- Interesting XSS attacks
- Server-side template injection
- Hacking with Google & GitHub search engines
- Automated SQL injection detection and exploitation
- File read & file upload attacks
- Password cracking in a smart way
- Hacking Git repos
- XML attacks
- NoSQL injection
- HTTP parameter pollution
- Web cache deception attack
- Hacking with wrappers
- Finding metadata with sensitive information
- Hijacking NTLM hashes
- Automated detection of JavaScript libraries with known vulnerabilities
- Extracting passwords
- Hacking Electron applications
- Establishing reverse shell connections
- RCE attacks
- XSS polyglot
- and more …
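To give a flavour of one listed topic, SQL injection, here is an illustrative stdlib-only sketch (not taken from the course labs) contrasting string-built SQL with parameterized queries:

```python
# Illustrative sketch only: why string-built SQL is exploitable and how
# parameterization stops it, using Python's stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_vulnerable(name: str):
    # Vulnerable: attacker-controlled input concatenated into SQL,
    # so the input can become SQL syntax.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # Safe: the driver binds the value; it can never become SQL syntax.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic tautology payload dumps every row from the vulnerable
# function, while the parameterized version returns nothing.
payload = "nobody' OR '1'='1"
```

Tools like sqlmap, covered in the bonus courses below, automate finding exactly this class of flaw.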

What Students Will Receive
Students will be handed a VMware image with a specially prepared lab environment to play with all the attacks, vulnerabilities and techniques presented in this training. When the training is over, students can take the complete lab environment home (after signing a non-disclosure agreement) to hack again at their own pace.

Special Bonus
The ticket price includes FREE access to my 6 online courses:

- Fuzzing with Burp Suite Intruder
- Exploiting Race Conditions with OWASP ZAP
- Case Studies of Award-Winning XSS Attacks: Part 1
- Case Studies of Award-Winning XSS Attacks: Part 2
- How Hackers Find SQL Injections in Minutes with Sqlmap
- Web Application Security Testing with Google Hacking

What Students Say About My Trainings
References are attached to my LinkedIn profile (https://www.linkedin.com/in/dawid-czagan-85ba3666/). They can also be found here: https://silesiasecuritylab.com/services/training/#opinions – training participants from companies such as Oracle, Adobe, ESET, ING, Red Hat, Trend Micro, Philips, government sector...
 
What Students Should Know
To get the most out of this training, intermediate knowledge of web application security is needed. Students should have experience using a proxy, such as Burp Suite Proxy or Zed Attack Proxy (ZAP), to analyze or modify traffic.

What Students Should Bring

Students will need a laptop with a 64-bit operating system, at least 8 GB RAM, 35 GB of free hard drive space, administrative access, the ability to turn off AV/firewall, and VMware Player/Fusion (64-bit version) installed. Prior to the training, make sure there are no problems with running x86_64 VMs.

Additional notes

This new 3-day training was sold out at top security conferences, e.g. DEF CON (Las Vegas) and Hack In Paris.

This is a 100% hands-on training: for each attack, vulnerability and technique presented in this training, there is a lab exercise to help students develop their skills step by step.

Speakers

Dawid Czagan

Founder and CEO, Silesia Security Lab
Dawid Czagan is an internationally recognized security researcher and trainer. He is listed among top hackers at HackerOne. Dawid Czagan has found security bugs in Apple, Google, Mozilla, Microsoft and many others.

Due to the severity of many bugs, he received numerous awards for his findings. Dawid Czagan shares his security experience in his hands-on trainings. He delivered trainings at key industry conferences such as DEF CON (Las Vegas), OWASP 2025 Global AppSec EU (Barcelona), Hack In The... Read More →

9:00am CEST

3-Day Training: AI Whiteboard Hacking aka Hands-on Threat Modeling Training
Wednesday June 24, 2026 9:00am - 5:00pm CEST
3-Day Training: June 22-24, 2026
Level: Beginner
Trainer: Sebastien Deleersnyder

To register, please purchase your training ticket here. Training and conference are two separate ticket purchases.

Download the complete training outline: AI Whiteboard Hacking Training Details

Testimonial: "After years evaluating security trainings at Black Hat, including Toreon's Whiteboard Hacking sessions, I can say this AI threat modeling course stands out. The hands-on approach and flow are exceptional - it's a must-attend."
- Daniel Cuthbert, Global Head of Cyber Security Research, Black Hat Review Board Member

In today's rapidly evolving AI landscape, security threats like prompt injection and data poisoning pose significant risks to AI systems. Our 3-day AI Whiteboard Hacking training equips you with practical skills to identify, assess, and mitigate AI-specific security threats using our proven DICE methodology. Through hands-on exercises and real-world scenarios, you'll learn to build secure AI systems while ensuring compliance with regulations like the EU AI Act.

The training concludes with an engaging red team/blue team wargame where you'll put theory into practice by attacking and defending a rogue AI research assistant. Upon completion, you'll earn the AI Threat Modeling Practitioner Certificate and gain access to a year-long subscription featuring quarterly masterclasses, expert Q&A sessions, and continuously updated resources.

Led by Sebastien Deleersnyder, co-founder and CTO of Toreon, and Black Hat trainer, this training combines technical expertise with practical insights gained from real-world projects across government, finance, healthcare, and technology sectors.

Quick Overview:
·       Target Audience: AI Engineers, Software Engineers, Solution Architects, Security Professionals
·       Prerequisites: Basic understanding of AI concepts (pre-training materials provided)
·       Certification: AI Threat Modeling Practitioner Certificate
·       Bonus: 1-year AI Threat Modeling Subscription included

Here is a lineup of hands-on exercises from the training that let you put AI security concepts into practice:

Day 1: Foundations & Methodology
·       "AI Security Headlines from the Future" - Explore potential security scenarios
·       "Diagramming the AI Assistant Infrastructure" - Map out real AI system components
Speakers

Sebastien Deleersnyder

Co-Founder and CEO, Toreon
Sebastien Deleersnyder, also known as Seba, is a highly accomplished individual in the field of cybersecurity. He is the CTO and co-founder of Toreon, as well as the COO and lead threat modeling trainer of Data Protection Institute. Seba holds a Master's degree in Software Engineering... Read More →
 