Dr. Kiran R

Vice President of Engineering

Oracle

Dr. Kiran R is the Vice President of Engineering at Oracle, where he drives the GenAI-first product suite for the Health group on Oracle Cloud Infrastructure (OCI). In his prior role as Partner Director & General Manager at Microsoft, he led Copilot engineering as well as Applied ML & ML Engineering in Microsoft Cloud Data Sciences on Azure. He has experience driving ML projects from concept through completion to production, building on-premises and cloud MLOps platforms, and conceptualizing and scaling extensible ML services. He has a track record of driving impact by incorporating ML into products and solutions. He was previously Senior Director of ML at VMware.

Kiran has 40+ filed and granted US patents. He is a Kaggle Competitions Grandmaster (one of roughly 100 worldwide), with a highest worldwide rank of 7, and a prize winner in the prestigious KDD Cup data mining competition. He is a recipient of the CTO award at VMware and received the Innovator of the Year award in person from Michael Dell.

Generative AI is driving the biggest platform shift since the advent of the internet, transforming every industry by reshaping customer service, software development, marketing, HR, and beyond. However, many organizations face a gap between GenAI’s promise and its actual performance. Unlike traditional ML, GenAI systems are harder to evaluate due to their subjective, multimodal, and human-in-the-loop nature. This session explores the critical need for robust GenAI evaluation frameworks across technical aspects (like prompt evaluation, red teaming, and reproducibility), observability (including production logging and cost monitoring), and business metrics (such as ROI, service improvements, and responsible AI measures).

We’ll contrast GenAI and traditional ML evaluation methods and introduce a holistic framework that includes ground truth creation via gold/silver datasets. Through real-world case studies in Enterprise and HealthTech—including recommender systems, auto form filling, de-identification, and structured note generation—we’ll show how to evaluate GenAI systems effectively both pre- and post-production. The session will highlight key tools and techniques that enhance GenAI evaluation usability, especially for complex tasks like summarization and compliance.

Managing and scaling ML workloads has never been more challenging. Data scientists want to collaborate, build, train, and iterate on thousands of AI experiments. On the flip side, ML engineers need distributed training, artifact management, and automated deployment for high performance.
