Deploying GenAI Safely: Strategies for Trustworthy LLMs

About

This talk will explore the critical aspects of securing GenAI applications, beginning with the unique security challenges they introduce. We will examine key vulnerabilities in depth, including manipulative prompt injection attacks, jailbreaks designed to bypass safety controls, risks related to sensitive data leakage, the generation of inaccurate hallucinations, and the dangers of improper model output handling. The agenda focuses on providing actionable insights through effective mitigation strategies, methods for early vulnerability identification, and adherence to proven best practices, ultimately aiming to equip attendees with the knowledge to build secure, resilient, and trustworthy LLM-powered systems while minimizing deployment risks.

Key Takeaways:

  • Understand the unique security vulnerabilities GenAI applications face, from prompt injections to data leaks.
  • Explore actionable mitigation strategies to protect LLM-powered systems against emerging threats.
  • Learn best practices for early identification and handling of GenAI vulnerabilities.
  • Gain insights into building secure, resilient, and trustworthy GenAI applications for real-world deployment.

Speaker

video thumbnail
Book Tickets
Download Brochure

Download agenda