Nitin Agarwal

Principal Data Scientist

About

Nitin Agarwal is a generative AI leader with deep expertise in Large Language Models (LLMs), Natural Language Processing, Machine Learning, and intelligent automation. Passionate about turning cutting-edge AI capabilities into practical, production-grade systems that drive measurable business value. 

Extensive experience developing end-to-end AI platforms, LLM-powered applications, and intelligent systems that enable decision intelligence, knowledge discovery, and next-generation digital experiences. Skilled at translating complex AI concepts into scalable enterprise solutions that enhance productivity, improve customer engagement, and unlock new growth opportunities. 

Known for leading high-impact AI initiatives, guiding cross-functional teams, and fostering a culture of innovation, experimentation, and continuous learning. Strong focus on bridging business strategy with emerging AI technologies to deliver responsible, scalable, and impactful AI solutions. 

Active contributor to the AI ecosystem as a mentor, speaker, and thought leader, with a strong interest in advancing the practical adoption of Generative AI and shaping the future of intelligent systems. 

Driven by a mission to push the boundaries of AI: building technologies that empower people, transform enterprises, and redefine what intelligent systems can achieve.

AI agents that use tools, APIs, and external data sources introduce a new attack surface: prompt injection and adversarial manipulation.
 
Unlike traditional software vulnerabilities, AI agent attacks exploit reasoning chains, tool access permissions, and hidden context, often without triggering conventional security controls.
 
This session dives deep into:
 
- Prompt injection mechanics
- Tool hijacking
- Data exfiltration via LLM reasoning
- Jailbreak chains in autonomous agents
- Defensive architecture for secure agent systems
 
We will explore how to design secure and reliable agentic AI systems that are resilient in adversarial environments.
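Two of the simplest defensive layers named above, input screening and tool allow-listing, can be sketched in miniature. The code below is a toy illustration: the patterns, the `screen_input` and `call_tool` helpers, and the deny-list itself are all hypothetical, not a production defense.

```python
import re

# Hypothetical deny-list screen for tool-bound agent inputs.
# Real defenses layer input screening with tool permission scoping,
# output filtering, and monitoring; this sketch shows only the
# first two layers.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def screen_input(text: str) -> bool:
    """Return True if the text matches a known injection pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def call_tool(tool_name: str, user_text: str, allowed_tools: set) -> str:
    """Gate tool access: screen the input, then check an allow-list."""
    if screen_input(user_text):
        return "blocked: possible prompt injection"
    if tool_name not in allowed_tools:
        return "blocked: tool '%s' not permitted" % tool_name
    return "ok: dispatching '%s'" % tool_name
```

Pattern matching alone is easy to evade, which is why the session treats it only as an outer layer around permission scoping and architectural isolation.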

This full-day, hands-on workshop equips participants to design, build, and optimize real-world AI systems powered by Small Language Models (SLMs). Unlike traditional LLM-heavy approaches, this workshop focuses on cost-efficient, production-aware architectures that run entirely within Google Colab’s free tier—making advanced AI engineering accessible without expensive compute infrastructure. 

Participants will progress through six tightly integrated modules, building toward a complete multi-agent, RAG-enabled AI system that solves a real-world problem. Every module includes end-to-end hands-on demos with pre-configured notebooks that participants take home after the session. 

Key Learning Outcomes 

By the end of this workshop, participants will be able to: 

  • Deploy and run inference with SLMs (Phi-3 Mini, Gemma 2B, TinyLlama) within Google Colab free-tier limits 
  • Apply quantization techniques and parameter-efficient fine-tuning with QLoRA for domain-specific tasks 
  • Build a lightweight RAG pipeline with vector search, connecting fine-tuned SLMs to external knowledge bases 
  • Design and orchestrate multi-agent workflows using role-specialized SLMs with shared state and minimal memory overhead 
  • Architect an end-to-end Agentic RAG system combining retrieval, reasoning, and generation under constrained compute 
  • Simulate edge deployment using llama.cpp with GGUF models, profiling latency and optimizing for CPU-first execution 
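As a taste of the RAG module, the retrieval step can be sketched in plain Python. The vectors, texts, and `retrieve` helper below are illustrative stand-ins for the embedding model and vector store used in the workshop notebooks:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, kb, top_k=2):
    """Rank knowledge-base chunks by similarity; return top_k (score, text)."""
    scored = [(cosine(query_vec, vec), text) for vec, text in kb]
    return sorted(scored, reverse=True)[:top_k]

# Hand-made toy "embeddings"; a real pipeline would compute these
# with a sentence-embedding model and store them in a vector index.
kb = [
    ([1.0, 0.0, 0.1], "Quantization shrinks model weights to 4-bit."),
    ([0.0, 1.0, 0.2], "QLoRA fine-tunes adapters on quantized bases."),
    ([0.1, 0.1, 1.0], "GGUF models run CPU-first via llama.cpp."),
]
```

Swapping the toy vectors for real embeddings and the list scan for an approximate-nearest-neighbor index is exactly the production-aware step the workshop walks through under Colab's free-tier limits.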