Shubhradeep Nandi

Chief Data Scientist

Government of Andhra Pradesh

Shubhradeep Nandi is a GenAI researcher and entrepreneur with over 16 years of professional experience, including more than a decade dedicated to Artificial Intelligence and Machine Learning. He is widely recognized for his pioneering contributions to applied Large Language Models (LLMs); his research on LLM applications in Climate Science earned the prestigious 'Highly Commendable Work' recognition from IIM Bangalore. Named among India's Top 7 GenAI Scientists, Shubhradeep has been lauded for his impactful GenAI innovations in Financial Fraud Management, most notably developing a government-backed AI system to detect Non-Genuine Taxpayers. He architected the first Data Analytics Unit for Government and built it into a model of success. He is also an Innovator in Residence at a global venture fund and the founder of both a pioneering social payments startup and a deep-tech compliance platform. In addition to his research and ventures, Shubhradeep is a passionate mentor and advisor to emerging AI SaaS startups through leading VC platforms.

In the rapidly evolving AI ecosystem, large language models (LLMs) and autonomous agents have become central to decision-making systems, from fraud detection and credit scoring to welfare distribution. However, these systems operate on probabilities and confidence scores, not absolutes. That poses a critical challenge: how do we ensure fairness, accountability, and trust when AI decisions are inherently uncertain?

This talk offers a deep dive into aligning Responsible AI principles with the probabilistic nature of modern AI systems. We explore how to architect systems that not only predict, but also explain, justify, and remain auditable, drawing from real-world implementations in financial oversight.

We will show the following:

  • A credit risk assessment agent that explains its eligibility score and confidence band using real DIA data.
  • A welfare benefit approval system where LLM outputs come with rationale visualizations for auditors and beneficiaries.
  • A fraud detection tool that flags risky transactions and lets auditors explore the model's reasoning trail before acting.
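The pattern shared by all three demos is a decision object that carries its score, an uncertainty band, and a human-readable reasoning trail. A minimal sketch of that idea, using purely hypothetical signals and function names (this is an illustration of the pattern, not the speaker's actual system):

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """A probabilistic decision that stays auditable."""
    score: float                 # estimated probability the transaction is fraudulent
    confidence_band: tuple       # (low, high) interval around the score
    reasoning: list = field(default_factory=list)  # trail an auditor can inspect

def flag_transaction(amount: float, avg_amount: float, new_payee: bool,
                     threshold: float = 0.7) -> Decision:
    """Toy scorer: combines simple risk signals and records each
    contribution so the final score can be explained, not just reported."""
    score, trail = 0.1, ["base rate: +0.10"]
    if amount > 3 * avg_amount:
        score += 0.4
        trail.append(f"amount {amount} exceeds 3x average {avg_amount}: +0.40")
    if new_payee:
        score += 0.25
        trail.append("first payment to this payee: +0.25")
    band = (max(0.0, score - 0.1), min(1.0, score + 0.1))  # crude uncertainty band
    trail.append(f"final score {score:.2f}, band {band}, flag threshold {threshold}")
    return Decision(score=score, confidence_band=band, reasoning=trail)

decision = flag_transaction(amount=9000, avg_amount=1500, new_payee=True)
print(decision.score)      # 0.75 -- above threshold, so flagged for review
print(decision.reasoning)  # full trail for the auditor
```

In a real deployment the score would come from a trained model and the band from calibrated uncertainty estimates; the point is that the reasoning trail travels with the decision, so auditors can explore it before acting.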

This session will empower AI Engineers, Data Scientists, Business Leaders, Auditors, and policymakers alike to navigate probabilistic AI outcomes without compromising on transparency, ethics, or stakeholder trust.


Managing and scaling ML workloads has never been a bigger challenge. Data scientists need to collaborate, build, train, and iterate across thousands of AI experiments. On the flip side, ML engineers need distributed training, artifact management, and automated deployment for high performance.
