Gauri Kholkar

Machine Learning Engineer

Pure Storage

Gauri is a seasoned Applied AI/ML Engineer with 8 years of experience, currently developing cutting-edge LLM applications for next-generation storage solutions within Pure Storage's Office of the CTO. Previously at Microsoft, she engineered responsible AI models and data pipelines for Bing, impacting over 100 million users. Her research in content moderation and multilingual model finetuning has been recognized at top AI conferences, including AAAI 2023 and COLING 2025, where her paper "Socio-Culturally Aware Evaluation Framework for LLM-Based Content Moderation" was accepted. She also serves as a reviewer for top-tier venues such as ICLR and ACL 2025. Gauri holds a Computer Science degree from BITS Pilani.

This talk will explore the critical aspects of securing GenAI applications, beginning with the unique security challenges they introduce. We will examine key vulnerabilities in depth, including manipulative prompt injection attacks, jailbreaks designed to bypass safety controls, sensitive data leakage, hallucinated (factually incorrect) outputs, and improper handling of model output. The agenda focuses on actionable insights: effective mitigation strategies, methods for identifying vulnerabilities early, and proven best practices. Attendees will leave equipped to build secure, resilient, and trustworthy LLM-powered systems while minimizing deployment risks.
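As a small taste of the mitigation side of the talk, here is a minimal sketch of layered input and output guardrails around an LLM call. The deny-list patterns and the screen_input / handle_output helpers are illustrative assumptions, not material from the talk; production systems typically pair heuristics like these with classifier-based detection.

import html
import re

# Illustrative deny-list of common injection phrasings; a real deployment
# would pair heuristics like these with a trained injection classifier.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.IGNORECASE),
]

def screen_input(user_text: str) -> str:
    """Refuse inputs that match known prompt-injection phrasings."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("Potential prompt injection detected.")
    return user_text

def handle_output(model_text: str) -> str:
    """Treat model output as untrusted: escape it before rendering as HTML
    so a manipulated response cannot inject markup downstream."""
    return html.escape(model_text)

if __name__ == "__main__":
    prompt = screen_input("Summarize this quarter's storage telemetry.")
    raw_response = "<b>Usage is nominal.</b>"  # stand-in for an LLM response
    print(handle_output(raw_response))

The output-handling step targets the improper-output-handling risk directly: everything the model produces is escaped before it can reach a browser, shell, or downstream tool.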

Managing and scaling ML workloads has never been more challenging than it is today. Data scientists want to collaborate while building, training, and iterating on thousands of AI experiments. ML engineers, on the other hand, need distributed training, artifact management, and automated deployment for high performance.
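As a concrete illustration of the experiment-tracking half of this problem, here is a minimal sketch using MLflow's tracking API. The train_model stub and the sweep values are hypothetical, but start_run, log_param, and log_metric are the library's real entry points for keeping many runs comparable.

import mlflow

def train_model(lr: float) -> float:
    """Stand-in for a real training loop; returns a fake validation loss."""
    return abs(3e-4 - lr)

# Log each run of a small hyperparameter sweep so results stay
# comparable and reproducible across the team.
for lr in (1e-4, 3e-4, 1e-3):
    with mlflow.start_run():
        mlflow.log_param("learning_rate", lr)
        mlflow.log_metric("val_loss", train_model(lr))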
