What is LangGraph?

Sahitya Arya Last Updated : 04 Sep, 2024
9 min read

Introduction

Artificial intelligence (AI) is a rapidly developing field. Language models have advanced to the point where AI agents can perform complex tasks and make nuanced decisions. However, as these agents' capabilities have grown, the infrastructure that supports them has struggled to keep up. Enter LangGraph, a library that aims to transform how AI agents are built and run.

Overview

  • LangGraph is a library built on top of Langchain that is designed to facilitate the creation of cyclic graphs for large language model (LLM)-based AI agents.
  • It views agent workflows as cyclic graph topologies, allowing for more variable and nuanced agent behaviors than linear execution models.
  • LangGraph uses key elements such as nodes (representing functions or Langchain runnable items), edges (defining execution and data flow), and stateful graphs (managing persistent data across execution cycles).
  • The library supports multi-agent coordination, allowing each agent to have its own prompt, LLM, tools, and custom code within a single graph structure.
  • LangGraph introduces a chat agent executor that represents the agent state as a list of messages, which is particularly useful for newer, chat-based models.

The Pre-LangGraph Era

Before LangGraph, the agent executor class in the Langchain framework was the main tool for building and executing AI agents. This class relied on a straightforward but powerful idea: it ran an agent in a loop, asking it to make decisions, carry them out, and log observations. This technique had its uses, but its adaptability and customization possibilities were intrinsically restricted.

Although functional, the agent executor class limited developers’ ability to design more dynamic and flexible agent runtimes by imposing a particular pattern of tool calling and error handling. As AI agents became more sophisticated, the need for a more adaptable architecture emerged.

What is LangGraph?

In response to these constraints, LangGraph presents a novel paradigm for agent building and runtime construction. Large Language Models (LLMs) are the foundation for designing sophisticated AI agents, and LangGraph, built on top of Langchain, is intended to make the process of creating cyclic graphs easier.

At its foundation, LangGraph views agent workflows as cyclic graph topologies. This approach enables more variable and nuanced agent behaviors, surpassing the linear execution model of its predecessors. By drawing on graph theory, LangGraph opens new avenues for developing intricate, networked agent systems.

Why use LangGraph?

  • Flexibility: As AI agents evolved, developers required more control over the agent runtime to enable personalized action plans and decision-making procedures.
  • The Cyclical Nature of AI Reasoning: Many intricate LLM applications depend on cyclical execution when employing strategies like chain-of-thought reasoning. LangGraph offers a natural framework for modeling these cyclical processes.
  • Multi-Agent Systems: As multi-agent workflows became more common, there was an increasing demand for a system that could efficiently manage and coordinate several autonomous agents.

  • State Management: As agents became more sophisticated, tracking and updating state data during execution became necessary. LangGraph's stateful graph methodology satisfies this need.

How Does LangGraph Work?

The functionality of LangGraph is based on several essential elements:

  • Nodes: These are functions or Langchain runnable items, like the agent’s tools.
  • Edges: Paths that define the direction of execution and data flow within the agent system, connecting nodes.
  • Stateful Graphs: LangGraph allows for persistent data across execution cycles by managing and updating state objects as data flows through the nodes.
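These three elements can be sketched in plain Python before touching the library. The snippet below is a minimal, hypothetical illustration of the node/edge/state idea, not LangGraph's actual API: nodes are functions that update a shared state dict, edges map each node to the next, and execution loops, possibly cyclically, until a route returns END.

```python
# Minimal, hypothetical sketch of the node/edge/state idea (not LangGraph's API)
END = "__end__"

def draft(state):      # node: produce a first answer
    state["answer"] = "draft"
    return state

def review(state):     # node: revise the answer, demonstrating a cycle
    state["revisions"] = state.get("revisions", 0) + 1
    state["answer"] = f"revision {state['revisions']}"
    return state

def route(state):      # conditional edge: loop back or finish
    return "review" if state["revisions"] < 2 else END

nodes = {"draft": draft, "review": review}
state, current = {}, "draft"
while current != END:
    state = nodes[current](state)
    # static edge draft -> review, conditional edge out of review
    current = "review" if current == "draft" else route(state)

print(state["answer"])  # → "revision 2"
```

The point of the cycle is that the review node runs repeatedly, with state persisting across passes — exactly what a linear pipeline cannot express.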

The following diagram illustrates how this works:

LangGraph Working

As shown in the image, nodes such as the LLM and tools are represented by circles or rhombuses, and the flow of information between nodes is represented by arrows.

The popular NetworkX library served as the model for the library’s interface, which makes it user-friendly for developers with prior experience with graph-based programming.

LangGraph’s approach to agent runtime differs significantly from that of its forerunners. Instead of a basic loop, it enables the construction of intricate, networked systems of nodes and edges. With this structure, developers can design more complex decision-making procedures and action sequences.

Now, let us build an agent using LangGraph to understand these ideas better. First we will implement tool calling, then use a pre-built agent, and finally build an agent ourselves in LangGraph.

Tool Calling in LangGraph

Prerequisites

Create an OpenAI API key to access the LLMs and a Weather API key (from weatherapi.com) to access weather information. Store these keys in a ‘.env’ file:
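The ‘.env’ file itself is not shown in the original; a minimal sketch might look like the following (placeholder values, not real keys; the Tavily key is an assumption here, needed because the web-search tool defined below uses Tavily):

```
OPENAI_API_KEY=<your-openai-api-key>
WEATHER_API_KEY=<your-weatherapi-key>
TAVILY_API_KEY=<your-tavily-api-key>
```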

Load and import the keys as follows:

import os
from dotenv import load_dotenv

load_dotenv('.env')
WEATHER_API_KEY = os.environ['WEATHER_API_KEY']

# Import the required libraries and methods
import json
import requests
import rich
from typing import List, Literal
from IPython.display import Image, display

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

Define Tools

We will define two tools: one to get weather information when the query is specific to weather, and another to search the internet when the LLM doesn’t know the answer to the given query:

@tool
def get_weather(query: str) -> list:
    """Search weatherapi to get the current weather."""
    base_url = "http://api.weatherapi.com/v1/current.json"
    complete_url = f"{base_url}?key={WEATHER_API_KEY}&q={query}"
    response = requests.get(complete_url)
    data = response.json()

    if data.get("location"):
        return data
    else:
        return "Weather Data Not Found"


@tool
def search_web(query: str) -> list:
    """Search the web for a query."""
    tavily_search = TavilySearchResults(max_results=2, search_depth='advanced', max_tokens=1000)
    results = tavily_search.invoke(query)
    return results
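For context, a successful response from weatherapi.com’s current.json endpoint is JSON with a top-level "location" key, which is what get_weather checks for. Below is an abridged, illustrative sketch; the field values are invented, and the full schema is in the WeatherAPI documentation:

```python
# Abridged, illustrative weatherapi.com current.json response (values are made up)
sample_response = {
    "location": {"name": "Nuuk", "country": "Greenland"},
    "current": {"temp_c": -2.0, "condition": {"text": "Partly cloudy"}},
}

# get_weather treats the presence of a top-level "location" key as success
is_success = bool(sample_response.get("location"))
```

An error response (e.g. for an unknown place) lacks the "location" key, so the same check routes to the "Weather Data Not Found" branch.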

To make these tools available for the LLM, we can bind these tools to the LLM as follows:

gpt = ChatOpenAI(model="gpt-4o-mini", temperature=0)

tools = [search_web, get_weather]

gpt_with_tools = gpt.bind_tools(tools)

Now, let’s invoke the LLM with a prompt to see the results:

prompt = """
         Given only the tools at your disposal, mention tool calls for the following tasks:

         Do not change the query given for any search tasks

         1. What is the current weather in Greenland today

         2. Can you tell me about Greenland and its capital

         3. Why is the sky blue?
      """

results = gpt_with_tools.invoke(prompt)

results.tool_calls

The results will be the following:

Weather information Output

As we can see, when we ask about the weather, the get_weather tool is called.

The GPT model doesn’t know who won the ICC World Cup in 2024, as its training data extends only up to October 2023. So for queries that need fresher or external information, it calls the search_web tool.
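The tool_calls attribute is a list of dicts. As an illustrative sketch of the output above (the id values here are made up; real ones vary per run):

```python
# Illustrative shape of results.tool_calls (ids are made up; real ids vary per run)
tool_calls = [
    {"name": "get_weather", "args": {"query": "Greenland"},
     "id": "call_1", "type": "tool_call"},
    {"name": "search_web", "args": {"query": "Can you tell me about Greenland and its capital"},
     "id": "call_2", "type": "tool_call"},
]

called = [c["name"] for c in tool_calls]  # which tools the model chose
```

Each entry names the tool to invoke and the arguments the model filled in, which is exactly what an agent runtime needs in order to execute the calls.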

Pre-built Agent

LangGraph ships with a pre-built ReAct (reason and act) agent. Let’s see how it works:

from langgraph.prebuilt import create_react_agent

# the system prompt tells the agent which tools are available and when to use each
system_prompt = """Act as a helpful assistant.
    Use the tools at your disposal to perform tasks as needed.
      - get_weather: whenever the user asks for the weather of a place.
      - search_web: whenever the user asks about current events or you don't know the answer.
    Use the tools only if you don't know the answer.
    """

# initialize the agent with the gpt model, tools, and system prompt
agent = create_react_agent(model=gpt, tools=tools, state_modifier=system_prompt)

# We will discuss its working in the next section. Let’s query the agent to see the result.

def print_stream(stream):
    for s in stream:
        message = s["messages"][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()

inputs = {"messages": [("user", "who won the ICC worldcup in 2024?")]}

print_stream(agent.stream(inputs, stream_mode="values"))
Pre-built Agent using LangGraph

As we can see from the output, the LLM called the search_web tool for the given query; the tool found a URL and returned its content to the LLM, which then used that content to answer the question.

Build an Agent

Now let’s build an agent ourselves using LangGraph:

# import the required methods

from langgraph.prebuilt import ToolNode

from langgraph.graph import StateGraph, MessagesState, START, END

# define a tool_node with the available tools

tools = [search_web, get_weather]

tool_node = ToolNode(tools)

# define functions to call the LLM or the tools

def call_model(state: MessagesState):
    messages = state["messages"]
    response = gpt_with_tools.invoke(messages)
    return {"messages": [response]}


def call_tools(state: MessagesState) -> Literal["tools", END]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

The call_model function takes the “messages” from the state as input. These messages can include the query, the prompt, or content returned by the tools. It invokes the LLM and returns its response.

The call_tools function also takes the state messages as input. If the last message contains tool calls, as we saw in the tool-calling output, it routes to the “tools” node; otherwise it ends the graph.

Now let’s build nodes and edges:

# initialize the workflow from StateGraph
workflow = StateGraph(MessagesState)

# add a node named 'LLM' with the call_model function. This node uses an LLM to make decisions based on the input given
workflow.add_node("LLM", call_model)

# our workflow starts at the 'LLM' node
workflow.add_edge(START, "LLM")

# add a 'tools' node
workflow.add_node("tools", tool_node)

# depending on the output of the LLM, it can go to the 'tools' node or end.
# So, we add a conditional edge from 'LLM' using the call_tools function
workflow.add_conditional_edges("LLM", call_tools)

# the 'tools' node sends its results back to the LLM
workflow.add_edge("tools", "LLM")

Now let’s compile the workflow and display it.

agent = workflow.compile()

display(Image(agent.get_graph().draw_mermaid_png()))
Build an Agent

As shown in the image, we start with the LLM. The LLM either calls the tools or ends, based on the information available to it. If it calls a tool, the tool executes and sends the result back to the LLM, which again decides whether to call a tool or end.

Now let’s query the agent and see the result:

for chunk in agent.stream(
    {"messages": [("user", "Will it rain in Bengaluru today?")]},
    stream_mode="values",
):
    chunk["messages"][-1].pretty_print()

Output:

Output of LLM Response

Since we asked about the weather, the get_weather tool was called and returned various weather-related values. Based on those values, the LLM replied that rain is unlikely.

In this way, we can give the LLM different kinds of tools so that our queries get answered even when the LLM alone can’t answer them. This makes LLM agents far more useful in many scenarios.

What Does LangGraph Offer?

LangGraph offers a powerful toolset for building complex AI systems. It provides a framework for creating agentic systems that can reason, make decisions, and interact with multiple data sources. Key features include:

  • Modifiable Agent Runtimes: With LangGraph, developers may create runtimes specifically suited to particular use cases and agent behaviors, overcoming the limitations of the conventional agent executor.
  • Support for Cyclic Execution: By enabling cyclic graphs, LangGraph makes it easier to apply sophisticated reasoning methods that require several LLM iterations.
  • Improved State Management: Because LangGraph graphs are stateful, more intricate agent state tracking and updating can be done throughout the execution phase.
  • Multi-Agent Coordination: Within a single graph structure, each agent can have its own prompt, LLM, tools, and custom code. LangGraph excels at building and administering these kinds of systems.
  • Flexible Tool Integration: LangGraph’s node-based structure allows agents to easily incorporate various tools and functionalities into their repertoire.
  • Better Control Flow: LangGraph’s edge-based approach provides fine-grained control over the execution flow of an agent or multi-agent system.
  • Chat-Based Agent Support: LangGraph introduces a chat agent executor, representing the agent state as a list of messages. This is particularly useful for newer chat-based models that handle function calling as part of message parameters.

Real-World Example of LangGraph

LangGraph has many real-world applications. In single-agent contexts, it enables more complex decision-making by letting agents review and refine their reasoning before acting. This is especially helpful in difficult problem-solving situations where linear execution might not be sufficient.

LangGraph shines in multi-agent systems. It permits the development of complex agent ecosystems in which many specialized agents, each developed with specific capabilities, work together to accomplish intricate tasks. LangGraph’s graph structure controls each agent’s interactions and information sharing.

For instance, in a customer service setting, a system might have distinct agents for comprehending the initial query, retrieving knowledge, generating responses, and ensuring quality assurance. LangGraph would manage the information flow among them, enabling smooth and efficient customer engagement.

The Future of AI Agents

Frameworks such as LangGraph are becoming increasingly important as AI develops. LangGraph is making the next generation of AI applications possible by offering a versatile and strong framework for developing and overseeing AI agents.

The capacity to design increasingly intricate, flexible, and networked agent systems makes new applications possible, from personal assistants to scientific research tools. As developers become more comfortable with LangGraph’s features, we may anticipate seeing more advanced AI agents that can do ever more complex jobs.

Conclusion

To sum up, LangGraph is a major advancement in the development of AI agents. It enables developers to push the limits of what’s possible with AI agents by eliminating the shortcomings of earlier systems and offering a flexible, graph-based framework for agent construction and execution. LangGraph is positioned to influence the direction of artificial intelligence significantly in the future.


Also read: OpenAI’s AI Agents to Automate Complex Tasks

Frequently Asked Questions

Q1. What problem does LangGraph solve?

Ans. LangGraph addresses the limitations of previous AI agent development frameworks by providing more flexibility, better state management, and support for cyclic execution and multi-agent systems.

Q2. How does LangGraph differ from the previous agent executor class in Langchain?

Ans. Unlike the previous agent executor’s linear execution model, LangGraph allows for the creation of complex, networked agent systems with more dynamic and flexible agent runtimes.

Q3. Can LangGraph handle multi-agent systems?

Ans. Yes, LangGraph excels in multi-agent systems, allowing developers to create complex agent ecosystems where multiple specialized agents can collaborate on complex tasks.

Q4. What are some practical applications of LangGraph?

Ans. LangGraph can be used in various scenarios, from enhancing single-agent decision-making processes to creating complex multi-agent systems for tasks like customer service, where different agents handle different aspects of the interaction.

Q5. Does LangGraph require knowledge of graph theory?

Ans. While LangGraph utilizes graph concepts, its interface is modeled after the popular NetworkX library, making it user-friendly for developers with prior experience in graph-based programming. However, some understanding of graph concepts would be beneficial.

I'm Sahitya Arya, a seasoned Deep Learning Engineer with one year of hands-on experience in both Deep Learning and Machine Learning. Throughout my career, I've authored more than three research papers and have gained a profound understanding of Deep Learning techniques. Additionally, I possess expertise in Large Language Models (LLMs), contributing to my comprehensive skill set in cutting-edge technologies for artificial intelligence.
