Daksh Varshneya

Senior Product Manager

Rasa

With over six years of experience in conversational AI, Daksh Varshneya leads the machine learning product vertical at Rasa. They began their career as a machine learning researcher, contributing to open-source projects including TensorFlow, scikit-learn, and Rasa OSS. Daksh holds a Master's degree in Computer Science from IIIT Bangalore and now focuses on helping Fortune 500 enterprises implement LLM-based conversational AI at scale, enabling billions of end-user conversations annually. Their expertise bridges cutting-edge AI research and practical enterprise implementation.

Traditional function calling approaches for LLM-powered conversational agents suffer from unpredictable execution, poor debugging capabilities, and the "prompt and pray" problem that kills development velocity. This talk introduces Process Calling - a superior paradigm where LLMs invoke stateful, multi-step processes rather than atomic tools.

Unlike function calling's stateless operations, Process Calling enables agents to maintain context and execute deterministic business logic across conversation flows. Using real production examples, we'll show how Rasa's CALM framework implements Process Calling to build conversational agents that actually work in customer-facing scenarios. Come learn why the future of customer-facing agent development isn't about better prompts; it's about better abstractions.
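To make the contrast concrete, here is a minimal Python sketch of the two paradigms. It is illustrative only, not the CALM API: `check_balance`, `TransferFundsProcess`, `next_prompt`, and `fill` are invented names, and the step logic is a simplification of what a real flow engine does. The point it demonstrates is the core one: the process owns its state and step order deterministically, while the LLM merely starts the process and fills in values extracted from the conversation.

```python
from dataclasses import dataclass, field
from typing import Optional

# --- Function calling: each tool call is stateless and atomic ---
def check_balance(account_id: str) -> float:
    """Stateless tool: no memory of prior turns; the LLM must re-supply
    every argument and re-decide the next step on every call."""
    return 1250.0  # stubbed lookup


# --- Process calling: the LLM invokes a stateful, multi-step process ---
@dataclass
class TransferFundsProcess:
    """Hypothetical sketch of deterministic business logic that owns its
    own state across turns. The LLM only starts the process and fills
    slots; the step order is fixed in code, not in a prompt."""
    slots: dict = field(default_factory=dict)
    step_idx: int = 0
    steps = ("collect_recipient", "collect_amount", "confirm", "execute")

    def next_prompt(self) -> Optional[str]:
        """Return what the agent still needs, or None when finished."""
        while self.step_idx < len(self.steps):
            step = self.steps[self.step_idx]
            if step == "collect_recipient" and "recipient" not in self.slots:
                return "Who should receive the transfer?"
            if step == "collect_amount" and "amount" not in self.slots:
                return "How much would you like to send?"
            if step == "confirm" and not self.slots.get("confirmed"):
                return (f"Send {self.slots['amount']} to "
                        f"{self.slots['recipient']}? (yes/no)")
            if step == "execute":
                # Deterministic side effect, reached only after confirmation.
                print(f"Executing transfer: {self.slots}")
            self.step_idx += 1
        return None

    def fill(self, **values) -> None:
        """Slot values extracted from user messages by the LLM."""
        self.slots.update(values)


# Simulated conversation: the process, not the prompt, drives control flow.
proc = TransferFundsProcess()
print(proc.next_prompt())   # -> asks for recipient
proc.fill(recipient="Alice")
print(proc.next_prompt())   # -> asks for amount
proc.fill(amount="50 EUR")
print(proc.next_prompt())   # -> asks for confirmation
proc.fill(confirmed=True)
proc.next_prompt()          # -> executes deterministically
```

Because the step sequence is ordinary code rather than prompt text, it can be unit-tested, debugged, and versioned like any other business logic, which is precisely what the "prompt and pray" approach lacks.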

Managing and scaling ML workloads has never been a bigger challenge. Data scientists are looking for collaboration while building, training, and iterating on thousands of AI experiments. On the flip side, ML engineers are looking for distributed training, artifact management, and automated deployment for high performance.
