Nitin Agarwal

Principal Data Scientist

About

Nitin Agarwal is a generative AI leader with deep expertise in Large Language Models (LLMs), Natural Language Processing, Machine Learning, and intelligent automation. He is passionate about turning cutting-edge AI capabilities into practical, production-grade systems that drive measurable business value.

He has extensive experience developing end-to-end AI platforms, LLM-powered applications, and intelligent systems that enable decision intelligence, knowledge discovery, and next-generation digital experiences. He is skilled at translating complex AI concepts into scalable enterprise solutions that enhance productivity, improve customer engagement, and unlock new growth opportunities.

He is known for leading high-impact AI initiatives, guiding cross-functional teams, and fostering a culture of innovation, experimentation, and continuous learning. He maintains a strong focus on bridging business strategy with emerging AI technologies to deliver responsible, scalable, and impactful AI solutions.

He is an active contributor to the AI ecosystem as a mentor, speaker, and thought leader, with a strong interest in advancing the practical adoption of Generative AI and shaping the future of intelligent systems.

He is driven by a mission to push the boundaries of AI, building technologies that empower people, transform enterprises, and redefine what intelligent systems can achieve.

Session Overview

This session provides a comprehensive introduction to Small Language Models (SLMs), covering what they are, why they matter, and how they fit within modern Generative AI systems alongside Large Language Models (LLMs). It will explore key trade-offs across size, cost, latency, accuracy, and sustainability, along with the core architectural principles behind lightweight transformer-based models.

The session will also cover essential techniques such as knowledge distillation, parameter-efficient fine-tuning, and model optimization approaches including quantization and pruning.
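To give a flavor of one of these optimization techniques before the session, the core idea of quantization can be sketched in a few lines of plain Python: weights are mapped to 8-bit integers with a single scale factor, trading a small amount of precision for a large reduction in memory. This is a simplified illustration for intuition only, not any specific framework's API; the function names are placeholders.

```python
def quantize_int8(weights):
    """Map float weights to int8 values using one symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.88]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered weight lies within half a quantization step of the original.
assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

In a real model the same idea is applied per tensor (or per channel) to billions of parameters, which is where the memory and latency savings discussed in the session come from.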

Building on these foundations, the workshop will transition into real-world implementation. Through progressive, hands-on exercises, participants will design, build, evaluate, and iteratively improve a multi-agent system, starting with baseline SLMs and enhancing performance using fine-tuned models and memory-augmented approaches.

In addition, the session will highlight practical scenarios where SLMs are most effective, including edge and on-device AI, high-volume low-cost workloads, real-time systems, domain-specific applications, privacy-sensitive use cases, and multi-agent systems for cost-efficient, production-grade AI deployments.
