Daksh Varshneya

Senior Product Manager

Rasa

With over 6 years of experience in the conversational AI field, Daksh Varshneya currently leads the machine learning product vertical at Rasa. They began their career as a machine learning researcher, making significant contributions to open-source projects including TensorFlow, scikit-learn, and Rasa OSS. Holding a Master's degree in Computer Science from IIIT Bangalore, Daksh now focuses on helping Fortune 500 enterprises implement LLM-based conversational AI solutions at scale, enabling billions of end-user conversations annually. Their expertise bridges the gap between cutting-edge AI research and practical enterprise implementation.

Voice-based GenAI assistants promise a new era of intuitive interaction—but making them fast and reliable is still a major challenge. This session cuts through the hype to explore what it really takes to build high-performance conversational agents that users can trust.

We’ll start by comparing popular LLMs in real-world agentic scenarios, analyzing where they shine—and where they stumble—especially when balancing accuracy against response speed. Then we’ll introduce CALM: a structured framework for designing responsive, trustworthy AI agents, built with latency, precision, and user trust in mind.

You’ll also learn a semi-automated fine-tuning workflow that combines data augmentation and model distillation—empowering smaller models like Llama 3 8B to rival GPT-4o in accuracy, at 3x the speed.
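The abstract doesn't spell out the workflow, but the data-preparation half of such a distillation pipeline can be sketched roughly as follows: a large teacher model (e.g. GPT-4o) labels user utterances with target command sequences, and simple template-based augmentation multiplies those examples into a fine-tuning dataset for the smaller student model. The slot names, command syntax, and values below are illustrative assumptions, not Rasa's actual pipeline:

```python
import json

# Hypothetical teacher-labeled template: pairs a user utterance with the
# command sequence a "teacher" model produced for it. Syntax is illustrative.
TEACHER_LABELED = [
    {
        "utterance": "I want to send {amount} to {name}",
        "commands": "StartFlow(transfer_money) SetSlot(amount, {amount}) SetSlot(recipient, {name})",
    },
]

# Simple slot-filling augmentation: expand each template with concrete
# values so the student sees varied surface forms of the same behavior.
SLOT_VALUES = {
    "amount": ["$50", "200 euros"],
    "name": ["Alice", "Bob"],
}

def augment(example):
    """Yield a prompt/completion pair for every slot-value combination."""
    for amount in SLOT_VALUES["amount"]:
        for name in SLOT_VALUES["name"]:
            yield {
                "prompt": example["utterance"].format(amount=amount, name=name),
                "completion": example["commands"].format(amount=amount, name=name),
            }

def build_dataset(examples, path):
    """Write the augmented fine-tuning dataset as JSONL; return record count."""
    count = 0
    with open(path, "w") as f:
        for example in examples:
            for record in augment(example):
                f.write(json.dumps(record) + "\n")
                count += 1
    return count
```

The resulting JSONL file would then feed a standard supervised fine-tuning run of the student model; the session's 3x speed figure comes from serving the smaller model, not from this preprocessing step.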

The session wraps with a live demo and full access to code and slides. Whether you’re building voice agents or scaling assistant infrastructure, this session is packed with practical insights you can apply today.


Managing and scaling ML workloads has never been a bigger challenge. Data scientists want to collaborate while building, training, and iterating on thousands of AI experiments; ML engineers, on the other hand, need distributed training, artifact management, and automated deployment for high performance
