Dipanjan Sarkar is currently the Head of Artificial Intelligence & Community at Analytics Vidhya. He is also a published author and consultant with a decade of expertise in Machine Learning, Deep Learning, Generative AI, Computer Vision, and Natural Language Processing. His leadership spans Fortune 100 enterprises to startups, crafting end-to-end AI products and pioneering Generative AI upskilling programs. A seasoned advisor, he works with a diverse clientele, from engineers and architects to C-suite executives and PhDs, across Advanced Analytics and AI Strategy & Development. His recognitions include "Top 10 Data Scientists in India, 2020," "40 under 40 Data Scientists, 2021," "Google Developer Expert in Machine Learning, 2019," "Top 50 AI Thought Leaders, Global AI Hub, 2022," and the Google Champion Innovator title in Cloud AI/ML, 2022, alongside global accolades including Top 100 Influential AI Voices on LinkedIn.
New to the world of Agentic AI and want to quickly get proficient in the key aspects of learning, building, deploying and monitoring Agentic AI Systems? This is the workshop for you! You will get comprehensive coverage of both the breadth and the depth of the vast world of Agentic AI Systems. Over the course of six modules, you will spend the entire day focusing on the following key areas:
- Learn essential concepts of Generative AI, Agentic AI and Agentic RAG Systems
- Deep dive into industry-standard design patterns for architecting Agentic AI Systems: Tool-Use, Reflection, Planning, Multi-Agent (a minimal tool-use sketch follows this list)
- Leverage industry-standard frameworks including LangChain, LangGraph and CrewAI to build simple and advanced Agentic AI & Agentic RAG Systems
- Learn basics of how to deploy Agentic AI Systems as APIs as well as debug and monitor them
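To give a flavour of the tool-use pattern before the session, here is a minimal sketch in plain LangChain: a stubbed tool is bound to the model, the model emits a tool call, the tool is executed, and the result is fed back for the final answer. It assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set; the weather tool and model choice are illustrative, not the exact workshop code.

```python
# Minimal tool-use pattern sketch (assumptions: langchain-openai installed,
# OPENAI_API_KEY set; get_weather is a stub, not a real weather API).
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    return f"It is currently 24°C and sunny in {city}."

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0).bind_tools([get_weather])

messages = [HumanMessage("What's the weather in Bengaluru right now?")]
ai_msg = llm.invoke(messages)          # the model decides to call get_weather
messages.append(ai_msg)

for call in ai_msg.tool_calls:         # execute each requested tool call
    result = get_weather.invoke(call["args"])
    messages.append(ToolMessage(content=result, tool_call_id=call["id"]))

print(llm.invoke(messages).content)    # the model turns the tool output into an answer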
We aim to keep the discussions as framework- and tool-agnostic as possible. That said, since roughly 90% of the workshop is hands-on, we will use LangChain and LangGraph (currently the leading frameworks in the industry) for most of the agent-building demos, along with a bit of CrewAI. While the focus of the workshop is on building Agentic AI Systems, we will also show how to build a basic web service or API on top of an agent using FastAPI, and how to deploy and monitor it using frameworks like LangFuse or Arize AI Phoenix.
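As a rough preview of that deployment step, the sketch below wraps a prebuilt LangGraph ReAct-style agent in a small FastAPI app. The `/ask` route, request schema, and model choice are assumptions for illustration only; monitoring with LangFuse or Arize Phoenix would be layered on top of something like this during the session.

```python
# A minimal sketch, assuming langgraph, langchain-openai, fastapi and uvicorn are
# installed and OPENAI_API_KEY is set; route name and schema are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def get_weather(city: str) -> str:
    """Stubbed weather lookup so the agent has at least one tool to call."""
    return f"It is currently 24°C and sunny in {city}."

agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools=[get_weather])

app = FastAPI(title="Agent API")

class Query(BaseModel):
    question: str

@app.post("/ask")
def ask(query: Query) -> dict:
    """Forward the user's question to the agent and return its final answer."""
    result = agent.invoke({"messages": [("user", query.question)]})
    return {"answer": result["messages"][-1].content}

# Run locally with: uvicorn main:app --reload   (assuming this file is saved as main.py)
```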
Important Note: You may need to register for some platforms such as Tavily and WeatherAPI for the workshop (no billing needed); we will send the instructions ahead of time. These registrations are essential for running the hands-on code demos live along with the instructor during the session.
Additional Points
- Prerequisites: Solid understanding of Python, NLP and Generative AI will be useful
- Content Provided: Slides, complete code notebooks, datasets
- Infrastructure: Most of the hands-on demos will run on Google Colab; for the deployment and monitoring section we will provide the cloud infrastructure (either Colab or Runpod.io).
*Note: These are tentative details and are subject to change.
Everyone is building AI agents, but how do you design Agentic AI Systems that are truly reliable in the real world?
Agentic AI systems can plan tasks, use tools, reflect on results, and even collaborate with other agents. But building them at scale brings challenges:
- Choosing the right agent architecture
- Handling memory & context efficiently
- Reducing latency
- Monitoring and evaluating agents effectively
This session draws from my personal experience building and deploying Agentic AI systems over the past year. We’ll focus on three pillars: Architecting, Optimizing, and Observability for Agentic AI Systems.
Agenda
1. Introduction
- What are AI Agents?
- Common challenges in building Agentic AI Systems
- AI Agents vs. AI Workflows
2. Architecting Effective Agentic AI Systems
- Popular Tools & Frameworks - key players and my recommendations
- Why LangGraph Matters - benefits for building AI agents
- Agent Design Patterns - tool use, planning, reflection, multi-agent (with practical recommendations)
- Single-Agent vs. Multi-Agent Systems - real-world hands-on example & recommendations
3. Optimizing Agentic AI Systems
- Context Engineering - what it is and popular approaches
- Agentic RAG - integrating RAG with agents
- Router Agentic RAG - real-world hands-on example
- MCP & A2A - separating hype from value
- Proven Multi-Server MCP Architecture - real-world hands-on example
- Memory Management - long-term vs. short-term
- Tools & Frameworks for Memory - key players
- Memory Context Engineering - hands-on examples for Agentic AI
4. Observability (Monitoring & Evaluation) for Agentic AI Systems
- Agent Observability - what it is and why it matters
- Observability Tools & Frameworks - key players
- Monitoring Metrics - token usage, latency, cost, tool calls, errors, etc.
- Evaluation Metrics - goal accuracy, reasoning quality, trajectory accuracy, etc.
- Hands-On Monitoring - tracing and dashboarding agent behavior
- Hands-On Evaluation - building datasets and running evaluations with metrics
Throughout the Session
- Best practices and caveats for real-world readiness
- Hands-on code demos using LangGraph, FastMCP, LangMem, and LangSmith
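As a small taste of the observability demos, the sketch below enables LangSmith tracing for a LangGraph agent purely through environment variables, so every LLM call and tool call shows up as a trace with latency, token usage and errors. The project name and toy tool are hypothetical, and it assumes a LangSmith API key (plus `OPENAI_API_KEY`) is available.

```python
# Minimal LangSmith tracing sketch (assumptions: langsmith account + API key,
# langgraph and langchain-openai installed, OPENAI_API_KEY set).
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"               # turn on LangSmith tracing
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "agentic-ai-workshop"    # hypothetical project name

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def add(a: int, b: int) -> int:
    """Toy tool so the trace includes a tool call."""
    return a + b

agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools=[add])
agent.invoke({"messages": [("user", "What is 21 + 21?")]})
# Every LLM call and tool call now appears as a trace in the LangSmith UI,
# ready to be inspected, dashboarded, and used as the basis for evaluations.
```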
Get ready for a high-stakes AI face-off as three leading multi-agent frameworks - AutoGen, CrewAI, and LangGraph - go head-to-head solving the same real-world AI problem: building a Multi-Agent Helpdesk AI Assistant.
Watch top Agentic AI practitioners demonstrate how each framework tackles this challenge: from structuring agent teams to orchestrating decisions across multiple steps. This unique session combines live hands-on demos and a panel discussion. You’ll walk away with a clear view of what each framework does best, where they struggle, and how to pick the right one for your next Agentic AI project.
Managing and scaling ML workloads has never been a bigger challenge than it is today. Data scientists need to collaborate, build, train, and iterate on thousands of AI experiments, while ML engineers need distributed training, artifact management, and automated deployment for high performance.