Red Teaming GenAI: Securing Systems from the Inside Out

About

In today’s AI-driven world, traditional cybersecurity isn’t enough. Generative AI systems can be exploited in new and unexpected ways—and that’s where AI Red Teaming comes in. Think of it as offensive security for your models, probing them before real attackers do.

In this hands-on session, we’ll unpack how red teaming works for GenAI: from simulating real-world attacks and prompt injection to uncovering hidden, risky capabilities. You’ll learn practical methodologies, including adversarial simulation, targeted testing, and capability evaluation, as well as how to operationalize them at scale.

We’ll also explore frameworks like the MITRE ATLAS matrix, compliance alignment with the NIST AI RMF and the EU AI Act, and must-know tools like Garak, PyRIT, and ART (Adversarial Robustness Toolbox).

By the end, you’ll walk away with a practical playbook to proactively harden your AI systems, detect emerging threats, and build secure, responsible GenAI applications before adversaries find the gaps.

Key Takeaways:

  • Learn how AI red teaming stress-tests generative systems using real-world adversarial attacks and threat simulations.
  • Discover structured red teaming methods, from capability probing to targeted adversarial testing, mapped to frameworks like MITRE ATLAS.
  • Explore essential open-source tools like PyRIT, Garak, and ART that automate red teaming workflows at scale.
  • Understand how red teaming fortifies GenAI systems against threats like prompt injection and model misuse, while aligning with AI regulations.

Speaker
