Kuldeep Jiwani

VP, Head of AI Solutions

ConcertAI

Kuldeep is currently the Head of AI Solutions at ConcertAI, where he leads the development of LLM, SLM, and Generative AI-based solutions focused on analyzing patient clinical notes for oncology researchers. With over two decades of experience in AI/ML research and high-performance computing architectures, he has successfully built numerous innovative, real-world AI products. Kuldeep has an active research background, with multiple publications in reputed international journals and granted U.S. patents.

 
Prior to joining ConcertAI, he served as Head of the Data Science division at HiLabs, where he led the development of six successful products within two years. Throughout his career, he has led global data science teams and designed large-scale Big Data solutions across the healthcare, telecommunications, and financial sectors. Kuldeep has also been an entrepreneur, contributing to technology startups, including one that was successfully acquired by Oracle.

Large Language Models (LLMs) are redefining NLP with their remarkable reasoning capabilities, but they still hallucinate, making up facts that can derail decision-critical tasks like clinical trial matching or medical entity extraction. In this session, we’ll explore how understanding and quantifying uncertainty can help tackle this reliability gap.

We’ll demystify uncertainty vs. confidence, break down aleatoric vs. epistemic uncertainty, and walk through estimation techniques for white-box (e.g., LLaMA), grey-box (e.g., GPT-3), and black-box (e.g., GPT-4) models. Expect hands-on demonstrations using open-source LLMs and tools, with a reality check on why softmax scores alone can be misleading.
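To give a taste of that softmax reality check, here is a minimal sketch in plain Python (toy, hand-picked logits; not material from the talk itself): two next-token distributions can assign the same top-token softmax score while carrying very different overall uncertainty, which is why looking only at the top probability can mislead.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def max_confidence(logits):
    # The "softmax score": probability assigned to the top token.
    return max(softmax(logits))

def predictive_entropy(logits):
    # Entropy (in nats) of the whole distribution -- a fuller
    # picture of uncertainty than the top probability alone.
    return -sum(p * math.log(p) for p in softmax(logits) if p > 0)

# Two toy next-token distributions with the SAME top-token score...
narrow = [math.log(0.5)] * 2                      # rest of the mass on 1 token
broad  = [math.log(0.5)] + [math.log(0.05)] * 10  # rest spread over 10 tokens

# ...yet very different overall uncertainty:
# max_confidence(narrow) == max_confidence(broad) == 0.5
# predictive_entropy(narrow) ≈ 0.69, predictive_entropy(broad) ≈ 1.84
```

A confidence readout based on the top softmax score treats these two cases identically; an entropy-based readout separates them.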

We’ll also shine a spotlight on Small Language Models (SLMs): why they’re not just cheaper, but potentially more predictable and controllable, offering a compelling alternative for hallucination-sensitive use cases.

Whether you're deploying LLMs in production or experimenting with SLMs, this talk will equip you with tools to make your models more trustworthy.


Managing and scaling ML workloads has never been a bigger challenge. Data scientists are looking for collaboration while building, training, and iterating on thousands of AI experiments. On the flip side, ML engineers are looking for distributed training, artifact management, and automated deployment for high performance
