From Zero to Agentic AI: Design, Build, and Deploy with LangGraph and Python

About the Workshop

In this hands-on workshop, you will build an Agentic AI application from scratch and take it all the way to production. Starting from a simple LLM-based workflow, you will progressively introduce tool usage, ReAct-style reasoning, and LangGraph orchestration to create a system capable of making decisions, retrieving information, and taking actions.

You will connect the application to external data sources using RAG, structure and control its behavior through context engineering, and extend its capabilities with MCP and multi-agent patterns. Finally, you will package everything into a Streamlit interface and deploy it to a real cloud service.

By the end of the workshop, you will have a fully working AI knowledge assistant and a clear understanding of how to design and ship production-ready agentic systems in Python.

*Note: These details are tentative and subject to change.

Prerequisites

  • Basic Python knowledge (functions, classes, virtual environments)

  • An OpenAI API key for LLM access

  • A Tavily API key (used for search/tool integration)

  • A Render account (free tier is sufficient) for deployment

  • Access to Google Colab

  • A laptop with an internet connection and the ability to run Python locally and install packages

Workshop Modules

Module 1: Deterministic, LLM-Based, and Agentic Systems

  • Deterministic systems vs LLM-based systems vs agentic systems
  • Strengths and limitations of each approach
  • When agents are necessary (dynamic decision-making, tool selection, iterative reasoning)
  • When agents are overkill
  • Real-world use cases in production systems

Module 2: LLM Foundations and Tool Calling

  • How LLMs work in applications (prompt → response)
  • Limitations of plain prompting (no memory, no actions, no control flow)
  • Introduction to tool calling
  • From single-step responses to multi-step reasoning
  • Concept of an “agent” as a decision-making loop
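
The tool-calling idea can be sketched in a few lines. Here `fake_llm` and `get_weather` are placeholder names standing in for a real model and a real API; the point is only the shape of the loop: the model emits a structured call, the application dispatches it to a Python function.

```python
import json

def get_weather(city: str) -> str:
    # Stand-in for a real weather API call.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM response containing a structured tool call.
    return json.dumps({"tool": "get_weather", "args": {"city": "Berlin"}})

def run_one_step(user_msg: str) -> str:
    call = json.loads(fake_llm(user_msg))
    tool = TOOLS[call["tool"]]       # look up the requested tool
    return tool(**call["args"])      # execute with the model's arguments
```

Multi-step reasoning emerges when the tool's result is fed back to the model for another round instead of being returned directly.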

Module 3: ReAct Agents

  • Reasoning + Acting paradigm
  • Thought → Action → Observation loop
  • How agents decide which tool to use
  • Implementing a simple ReAct agent in Python
  • Failure modes (looping, hallucinated actions, tool misuse)
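
A minimal sketch of the Thought → Action → Observation loop, with a scripted stand-in for the model and a step cap guarding against the looping failure mode (all names here are illustrative):

```python
def lookup(query: str) -> str:
    # Stand-in for a real search tool.
    return "LangGraph is a library for building stateful agent workflows."

ACTIONS = {"lookup": lookup}

# Scripted model turns standing in for real LLM output.
SCRIPT = [
    {"thought": "I should search for this.", "action": ("lookup", "LangGraph")},
    {"thought": "I have enough to answer.",
     "final": "LangGraph builds stateful agent workflows."},
]

def react_agent(question: str, max_steps: int = 5) -> str:
    observations = []
    for step in range(max_steps):          # cap iterations to avoid infinite loops
        turn = SCRIPT[min(step, len(SCRIPT) - 1)]
        if "final" in turn:
            return turn["final"]
        name, arg = turn["action"]
        observations.append(ACTIONS[name](arg))  # Observation fed back next turn
    return "Stopped: step limit reached"   # guard against the looping failure mode
```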

Module 4: Orchestration with LangGraph

  • Why graphs instead of linear chains
  • Core concepts: state, nodes, edges
  • Designing workflows as state machines
  • Conditional routing and branching logic
  • Loops, retries, and termination conditions
  • Building a first LangGraph agent step by step
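
The core graph concepts can be illustrated in plain Python before touching the library; LangGraph expresses the same ideas (state, nodes, conditional edges, termination) through its StateGraph API. The node and routing functions below are made up for illustration:

```python
def draft(state: dict) -> dict:
    # A node: reads state, does work, returns updated state.
    state["answer"] = state["question"].upper()
    state["attempts"] += 1
    return state

def route(state: dict) -> str:
    # Conditional edge: retry until the check passes or the limit is hit.
    if len(state["answer"]) > 3 or state["attempts"] >= 3:
        return "END"
    return "draft"

NODES = {"draft": draft}

def run_graph(state: dict, entry: str = "draft") -> dict:
    node = entry
    while node != "END":
        state = NODES[node](state)   # execute the current node
        node = route(state)          # routing function picks the next edge
    return state

result = run_graph({"question": "why graphs", "answer": "", "attempts": 0})
```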

Module 5: Tools and Integrations

  • What tools are in agentic systems
  • Designing reliable tools (clear inputs/outputs, validation)
  • Integrating Python functions as tools
  • Connecting external APIs (e.g. Slack, GitHub)
  • Error handling and fallback strategies
  • Structured outputs and tool schemas
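
A sketch of a defensively written tool. `search_web` and `call_search_api` are hypothetical names; the simulated outage exists only to show the fallback path. Returning a structured error lets the agent reason about the failure instead of crashing the loop:

```python
def search_web(query: str) -> dict:
    # Validate inputs before doing any work; agents pass bad args surprisingly often.
    if not isinstance(query, str) or not query.strip():
        return {"ok": False, "error": "query must be a non-empty string"}
    try:
        results = call_search_api(query)      # hypothetical external call
    except Exception as exc:
        # Fallback: a structured error the agent can inspect and recover from.
        return {"ok": False, "error": str(exc)}
    return {"ok": True, "results": results}

def call_search_api(query: str) -> list:
    raise TimeoutError("search backend unavailable")  # simulate an outage
```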

Module 6: Retrieval-Augmented Generation (RAG)

  • Why RAG is needed (grounding, reducing hallucinations)
  • Indexing internal knowledge (documents, text, code)
  • Embeddings and vector search basics
  • Retrieval pipeline: query → retrieve → rank → inject
  • Integrating RAG into an agent workflow
  • Trade-offs: latency, relevance, chunking strategies
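
The query → retrieve → rank → inject pipeline in miniature, using word overlap as a stand-in for embedding similarity (a real system would use an embedding model and a vector store):

```python
DOCS = [
    "Agents use tools to take actions.",
    "RAG grounds answers in retrieved documents.",
    "Streamlit builds simple web UIs in Python.",
]

def score(query: str, doc: str) -> int:
    # Toy relevance score: shared words between query and document.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 1) -> list:
    # Rank all documents by score and keep the top k.
    ranked = sorted(DOCS, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Inject the retrieved context ahead of the question.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```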

Module 7: Context Engineering

  • What “context” means in LLM applications
  • Types of context:
    • system instructions
    • user input
    • conversation history
    • retrieved documents
    • tool outputs
  • Controlling what goes into the prompt
  • Avoiding context overload and noise
  • Passing structured state through LangGraph
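
One way to control what goes into the prompt is a fixed priority order plus a size budget, so retrieved documents cannot crowd out instructions. The `assemble_context` helper below is an illustrative sketch (character-based budgeting stands in for token counting), not a library API:

```python
def assemble_context(system: str, docs: list, history: list,
                     user_input: str, budget: int = 400) -> str:
    selected = []
    remaining = budget - len(system) - len(user_input)  # always keep these two
    # Priority order: retrieved docs first, then the most recent history turns.
    for block in docs + history[-3:]:
        if len(block) <= remaining:
            selected.append(block)
            remaining -= len(block)    # anything that won't fit is dropped
    return "\n".join([system, *selected, user_input])
```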

Module 8: Model Context Protocol (MCP)

  • Motivation: standardizing tool and context integration
  • How MCP abstracts external systems
  • Connecting MCP servers/tools to the agent
  • Benefits for scalability and modularity
  • Example integration in the application

Module 9: Multi-Agent Systems

  • When a single agent is not enough
  • Splitting responsibilities across agents
  • Common patterns:
    • planner / executor
    • retriever / analyzer
    • generator / reviewer
  • Coordination and communication between agents
  • Trade-offs: complexity vs clarity
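
The planner/executor split in miniature; both "agents" here are stubs standing in for LLM-backed components, and the step format is invented for the example:

```python
def planner(goal: str) -> list:
    # Agent 1 (stub): decompose the goal into a list of steps.
    return [f"research: {goal}", f"summarize: {goal}"]

def executor(step: str) -> str:
    # Agent 2 (stub): carry out a single step from the plan.
    action, _, topic = step.partition(": ")
    return f"{action} done for {topic}"

def run(goal: str) -> list:
    plan = planner(goal)                  # planner decides the steps
    return [executor(s) for s in plan]    # executor handles each one
```

The value of the split is that each agent gets a narrow prompt and a narrow job, which is easier to test and debug than one agent doing everything.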

Module 10: Building the User Interface with Streamlit

  • Turning backend logic into a usable application
  • Building a chat-based UI
  • Handling user input and responses
  • Managing session state
  • Displaying structured outputs and tool results
  • Basic UX considerations for AI apps
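
Session state in a chat app boils down to an append-only message list. This dict-based sketch shows the pattern in plain Python; in the Streamlit app, `st.session_state` plays the role of `session`, and the echo reply stands in for the agent call:

```python
session = {"messages": []}   # per-user conversation state

def handle_user_input(text: str) -> str:
    session["messages"].append({"role": "user", "content": text})
    reply = f"Echo: {text}"                 # stand-in for invoking the agent
    session["messages"].append({"role": "assistant", "content": reply})
    return reply

handle_user_input("hello")
```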

Module 11: Deployment and Operations

  • Preparing the app for production
  • Managing environment variables and API keys
  • Packaging dependencies
  • Deploying to a cloud service (e.g. Render)
  • Running and testing the live application
  • Basic monitoring and limitations
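
A common fail-fast pattern for secrets: verify required environment variables at startup instead of crashing mid-request. The variable names below match this workshop's prerequisites but are otherwise just examples:

```python
import os

REQUIRED = ["OPENAI_API_KEY", "TAVILY_API_KEY"]

def check_env() -> list:
    # Return the names of any missing variables so startup can report them all.
    return [name for name in REQUIRED if not os.environ.get(name)]

missing = check_env()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```

On a platform like Render, these values are set in the service's environment settings rather than committed to the repository.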

Instructor

Workshop Details