Sanathraj Narayan

Data Science Manager

About

Sanath brings over a decade of experience in AI/ML, data science, and analytics, with a strong track record of building and deploying machine learning solutions at scale. Before joining Lam, he was a Senior Data Scientist at Ericsson, where he led model development and implementation for network rollout and forecasting use cases. He has also worked at Mindtree and KPMG, focusing on predictive analytics, scalable ML models, and enterprise AI solutions. Sanath is passionate about industrializing AI/ML models and driving real-world impact, and has been an active speaker at AI/ML conferences such as Cypher and DataHack Summit, sharing insights on LangChain and LLM-based applications.

This session provides a hands-on, engineering-focused comparison of Large Language Models (LLMs) and Small Language Models (SLMs) in real-world applications. Participants will see both model classes implemented side by side across two key paradigms: Retrieval-Augmented Generation (RAG) and agentic workflows. We start by building a RAG pipeline with an LLM, then replicate it with an SLM, comparing the two on quality, latency, cost, and consistency. The session then extends to a simple multi-agent (planner–executor) workflow, evaluating both approaches on reasoning, tool usage, and robustness, along with the impact of optimizations such as prompt design, fine-tuning, and memory. By the end, participants will have a practical framework for choosing between LLMs and SLMs based on use case, constraints, and scale.
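The side-by-side comparison described above can be sketched as a single RAG pipeline run twice with different models. This is a minimal illustration, not the session's actual code: the corpus, retrieval heuristic, and model names (`llm-large`, `slm-small`) are all hypothetical, and `generate` is a stub where a real LLM or SLM client would be called.

```python
import time

# Toy corpus; a real pipeline would use embeddings and a vector store.
DOCS = [
    "RAG retrieves relevant documents before generation.",
    "SLMs trade some accuracy for lower latency and cost.",
    "Planner-executor agents split reasoning from tool use.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def generate(model: str, prompt: str) -> str:
    """Stand-in for a model call; swap in a real LLM/SLM client here."""
    return f"[{model}] answer grounded in: {prompt[:60]}..."

def rag_answer(model: str, query: str) -> dict:
    """Identical retrieve-then-generate pipeline for any model."""
    context = " ".join(retrieve(query))
    start = time.perf_counter()
    answer = generate(model, f"Context: {context}\nQuestion: {query}")
    latency = time.perf_counter() - start
    return {"model": model, "answer": answer, "latency_s": latency}

# Run the same pipeline with both model classes and compare the results.
for model in ("llm-large", "slm-small"):
    print(rag_answer(model, "Why use RAG with an SLM?"))
```

Because only the `model` argument changes, quality, latency, and cost differences can be attributed to the model rather than the pipeline, which is the point of the side-by-side design.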

This session provides a comprehensive introduction to Small Language Models (SLMs), covering what they are, why they matter, and how they fit within modern Generative AI systems alongside Large Language Models (LLMs). It will explore key trade-offs across size, cost, latency, accuracy, and sustainability, along with the core architectural principles behind lightweight transformer-based models.

The session will also cover essential techniques such as knowledge distillation, parameter-efficient fine-tuning, and model optimization approaches including quantization and pruning.

Building on these foundations, the workshop will transition into real-world implementation. Through progressive, hands-on exercises, participants will design, build, evaluate, and iteratively improve a multi-agent system—starting with baseline SLMs and enhancing performance using fine-tuned models and memory-augmented approaches.
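The baseline multi-agent structure used in those exercises can be reduced to a planner-executor loop. The sketch below stubs out both roles so only the shape of the workflow is visible; in the workshop, the `planner` and the tools would be backed by actual SLM calls, and every name here (`planner`, `executor`, `TOOLS`, `run_agent`) is a hypothetical illustration, not the workshop's API.

```python
def planner(goal: str) -> list[str]:
    """Break a goal into tool-level steps (stub for a planning model)."""
    return [f"search: {goal}", f"summarize: {goal}"]

# Tool registry mapping step names to callables; real agents would wrap
# APIs, retrievers, or code interpreters here.
TOOLS = {
    "search": lambda arg: f"3 documents found for '{arg}'",
    "summarize": lambda arg: f"summary of results for '{arg}'",
}

def executor(step: str) -> str:
    """Dispatch one planned step to the matching tool."""
    tool, _, arg = step.partition(": ")
    return TOOLS[tool](arg)

def run_agent(goal: str) -> list[str]:
    """Plan once, execute each step, and return the execution trace."""
    return [executor(step) for step in planner(goal)]

print(run_agent("compare LLM and SLM latency"))
```

Keeping the plan, the tool calls, and the trace as explicit data is what makes the later improvements in the workshop (fine-tuned planners, memory-augmented context) easy to slot in and evaluate.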

In addition, the session will highlight practical scenarios where SLMs are most effective, including edge and on-device AI, high-volume low-cost workloads, real-time systems, domain-specific applications, privacy-sensitive use cases, and multi-agent systems for cost-efficient, production-grade AI deployments.
