Thursday June 25, 2026 10:30am - 11:15am CEST
AI is tightening its grip on security operations, but when no one can explain what a system is doing, accountability breaks down and attackers gain the edge. Regulations like the EU AI Act now require AI systems to be transparent, yet most organizations lack a concrete way to measure what “transparent” actually means. The AI Explainability Scorecard fills that gap by providing a fast, practical way to assess whether an AI system is traceable and defensible, scoring it on faithfulness, comprehensibility, consistency, accessibility, and operational clarity, including for LLM-based systems. The takeaway is clear: if you cannot explain the results of your AI, it is running your business, not your people.
Speakers

Michael Novack

Solution Architect, Aiceberg

Michael is a product-minded security architect who loves turning tangled AI risks into clear, practical solutions. As Solution Architect at Aiceberg, he helps enterprises bake AI explainability and real-time monitoring straight into their systems, transforming real customer insights...
Hall D (Level -2)
