Prompt Injection & Adversarial Attacks on AI Agents

Hack Session

About the session

AI agents that use tools, APIs, and external data sources introduce a new attack surface: prompt injection and adversarial manipulation.
 
Unlike traditional software vulnerabilities, AI agent attacks exploit reasoning chains, tool access permissions, and hidden context — often without triggering traditional security systems.
 
This session dives deep into:
 
- Prompt injection mechanics
- Tool hijacking
- Data exfiltration via LLM reasoning
- Jailbreak chains in autonomous agents
- Defensive architecture for secure agent systems
 
We will explore how to design secure and reliable agentic AI systems that are resilient in adversarial environments.

Speaker

Download Brochure