Pratyush Kumar

Co-Founder

Sarvam AI

Dr. Pratyush Kumar is the Co-founder of Sarvam and a leading voice in India’s AI ecosystem. A two-time founder, he previously built AI4Bharat and OneFourth Labs, both instrumental in advancing open-source AI for Indian languages. Prior to founding Sarvam, Dr. Kumar was a researcher at Microsoft Research and IBM, where he worked on cutting-edge problems in machine learning and natural language processing. He has published over 89 research papers at top-tier conferences and journals, contributing to both academic and applied advances in the field. Dr. Kumar holds degrees from IIT Bombay and ETH Zurich and continues to build AI that reaches every corner of the country.

As AI becomes a cornerstone of global influence, India must chart its own path, not to isolate itself, but to secure strategic autonomy. This session explores why developing a Sovereign AI Ecosystem is critical for addressing India’s unique socio-economic and linguistic diversity, while ensuring our voice shapes the global AI discourse.

We'll discuss the urgent need for domestic investment in compute and storage infrastructure, so that foundational model development can remain within national borders, delivering resilience, control, and security at scale.

Equally vital is nurturing an AI innovation ecosystem where Indian developers, startups, and researchers build solutions rooted in local relevance with global potential.

Finally, we’ll spotlight the importance of hands-on GenAI education to cultivate a deep talent pipeline and fuel long-term innovation. Join us to understand how India can lead responsibly in the AI era, with strength, inclusivity, and sovereignty at its core.

Managing and scaling ML workloads has never been a bigger challenge. Data scientists are looking to collaborate, build, train, and iterate on thousands of AI experiments. On the flip side, ML engineers are looking for distributed training, artifact management, and automated deployment for high performance.
