Picture credit — LangChain website

Understanding LangGraph — Note-1

@Anil's Notes

--

LangChain is an open-source framework for building AI applications on top of large language models. It provides the modules and tools needed to compose such applications, including the ability to create custom chains. LangChain has been around for about a year.

Agents:

An agent is a system driven by a language model that decides which actions to take toward a goal, acting on its own without step-by-step instructions from a human. An agent runtime keeps this system running: it repeatedly decides on actions (via tools), records observations, and maintains this cycle until the agent’s task is complete. LangChain simplifies agent customization with its expression language, allowing users to define agents and their behaviors.
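The runtime loop described above can be sketched in a few lines of plain Python. This is an illustration only, not LangChain's actual agent runtime; `fake_model` and `search_tool` are hypothetical stand-ins for an LLM and a tool.

```python
# A hand-rolled sketch of the agent runtime loop (illustration only,
# not LangChain's AgentExecutor): the "model" picks an action, the
# runtime executes the tool, records the observation, and repeats
# until the model decides it is finished.

def fake_model(goal, observations):
    # Stand-in for an LLM deciding the next action.
    if len(observations) < 2:
        return ("search", goal)      # keep gathering information
    return ("finish", observations)  # enough context, stop

def search_tool(query):
    # Stand-in for a real tool, e.g. a web search.
    return f"result for '{query}'"

def run_agent(goal):
    observations = []
    while True:
        action, payload = fake_model(goal, observations)
        if action == "finish":
            return payload
        observations.append(search_tool(payload))  # record the observation

print(run_agent("capital of France"))
```

The key point is the cycle itself: decide, act, observe, repeat — the runtime only stops when the model signals completion.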

“LLMs are essentially the brains of AI agents”

How are agents currently used?

  • Automate tasks with clear, simple, and well-defined goals
  • Focus on small, repetitive tasks
  • Research and summarize
  • Software engineering — agents like Devin
  • Customer service with better satisfaction scores

However, the LangChain agent framework has been limited by:

  • Challenge in transferring objects/data between multiple agents
  • Limited knowledge and inconsistencies in responses due to LLM hallucinations
  • Inability to perform complex tasks like reasoning and calculations (the ReAct pattern helps to a certain extent, but it is very basic)
  • Lack of contextual awareness and memory without LangGraph’s support (particularly in complex, back-and-forth interactions with humans at runtime)

LangGraph, a module within LangChain, simplifies the process of creating and managing agents and their runtimes.

LangGraph:

LangGraph can help by providing a high-level abstraction for creating applications involving multiple actors interacting with each other and with Large Language Models (LLMs). It enables developers to define actors, their attributes, relationships, and behaviors using a graph-based representation, facilitating communication, state management, and context preservation among actors and LLMs. Additionally, LangGraph supports cyclic data flows, allowing nodes to receive feedback from previous interactions, enabling applications to remember past interactions and generate more contextually aware responses. This feature is crucial for applications like conversational agents that need to personalize responses based on user preferences and past interactions.
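To see why cyclic data flows matter, here is a minimal plain-Python sketch (not LangGraph's API) of a feedback loop: a node re-runs on its own output until a condition decides the result is good enough. The `refine` function is a hypothetical stand-in for an LLM call.

```python
# A hand-rolled sketch of a cyclic flow (illustration only, not
# LangGraph's API): a "refine" node loops on its own output until a
# condition node decides the result is good enough.

def refine(state):
    # Pretend this is an LLM call that improves the draft each pass.
    state["draft"] += "!"
    state["passes"] += 1
    return state

def good_enough(state):
    # Plays the role of a conditional edge: loop back, or finish.
    return state["passes"] >= 3

def run_cycle(state):
    while not good_enough(state):
        state = refine(state)
    return state

result = run_cycle({"draft": "hello", "passes": 0})
print(result["draft"])  # "hello!!!" after three refinement passes
```

A plain chain can only run forward; the cycle is what lets later steps feed back into earlier ones, which is exactly what LangGraph adds on top of simple chains.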

Nodes & Edges:

LangGraph works by organizing information into nodes and edges, which are like building blocks that help create a structure for the computer to follow.

  • Nodes: Think of nodes as individual pieces of information or actions. Each node represents a specific task or piece of data that the computer needs to work with. For example, a node could be a question that needs an answer or a method or an action or even an agent (or tool) itself .
  • Edges: Edges are like connections between nodes. They show how different pieces of information or actions are related to each other. When an edge connects two nodes, it means that there is a relationship between them, and one node might need the output of another node to complete its task.
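Before touching LangGraph's API, the node/edge idea can be sketched in plain Python: nodes are functions, and edges record whose output feeds whom. This is an illustration only; the node names and `run` helper are made up for the sketch.

```python
# Plain-Python sketch of nodes and edges (illustration only, not
# LangGraph): each node is a function, and the edges dict records
# which node's output feeds which node.

def question(_):
    return "What is 2 + 2?"

def solver(text):
    return text + " -> 4"

nodes = {"question_node": question, "solver_node": solver}
edges = {"question_node": "solver_node"}  # question_node feeds solver_node

def run(start, value=None):
    node = start
    while node is not None:
        value = nodes[node](value)
        node = edges.get(node)  # follow the edge; stop when none remains
    return value

print(run("question_node"))  # "What is 2 + 2? -> 4"
```

LangGraph provides exactly this kind of wiring (plus state management, cycles, and streaming) through a proper API, as the next example shows.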

Hello World! (LangGraph)

Let’s start with a basic “Hello World” program to understand how nodes and edges work. We will create two simple Python functions, “greet” and “user”, register them as nodes (each simply appends text), and add an edge connecting the two so the return value of one is passed to the other.

!pip install langgraph

from langgraph.graph import Graph

def greet(input):
    return "Hello "

def user(input):
    return input + "Anil!"

#Create a graph

workflow = Graph()

#Add python functions as nodes
workflow.add_node("greet_node", greet)
workflow.add_node("user_node", user)

#Add an edge
workflow.add_edge("greet_node", "user_node")

workflow.set_entry_point("greet_node")
workflow.set_finish_point("user_node")

app = workflow.compile()

input = 'Hi'
for output in app.stream(input):
    for key, value in output.items():
        print(f"Output from node '{key}':")
        print("---")
        print(value)
        print("\n---\n")

Output:

Output from node 'greet_node':
---
Hello

---

Output from node 'user_node':
---
Hello Anil!

---

Output from node '__end__':
---
Hello Anil!

---

Let’s understand the code:

  1. Installation: Install the “langgraph” Python package.
  2. Defining basic Python functions:

greet(input): This function ignores its input and returns "Hello ".

user(input): This function appends "Anil!" to its input string.

3. Creating a Graph:

workflow = Graph(): Creates a new graph object using LangGraph.

4. Adding Nodes:

workflow.add_node("greet_node", greet): Adds the "greet_node" to the graph, associated with the greet function.

workflow.add_node("user_node", user): Adds the "user_node" to the graph, associated with the user function.

5. Connecting Nodes:

workflow.add_edge("greet_node", "user_node"): Connects the "greet_node" to the "user_node" in the graph.

6. Setting Entry and Finish Points:

workflow.set_entry_point("greet_node"): Sets the "greet_node" as the entry point of the workflow.

workflow.set_finish_point("user_node"): Sets the "user_node" as the finish point of the workflow.

7. Execution: Streaming the compiled workflow prints the output of each node, making the flow of execution easy to follow.

This is a basic Hello World! program to kickstart LangGraph.

LLM Integration

Let’s make it a bit more interesting with a dynamic response via LLM integration. We rename the “greet” function to “answer” and add an LLM call that fetches an answer using OpenAI. Note that you have to replace the environment variable value with your OpenAI API key.

from langgraph.graph import Graph
from langchain_openai import ChatOpenAI
import os

#Add the OpenAI key
os.environ["OPENAI_API_KEY"] = "REPLACE_ME_WITH_KEY"

# Set the model as ChatOpenAI
model = ChatOpenAI(temperature=0)

def answer(input):
    response = model.invoke(input)
    return response.content

def user(input):
    return "Anil, " + input

#Create a graph

workflow = Graph()

workflow.add_node("answer_node", answer)
workflow.add_node("user_node", user)

workflow.add_edge("answer_node", "user_node")

workflow.set_entry_point("answer_node")
workflow.set_finish_point("user_node")

app = workflow.compile()

input = 'Tell me a joke'
for output in app.stream(input):
    for key, value in output.items():
        print(f"Output from node '{key}':")
        print("---")
        print(value)
        print("\n---\n")

Output:

Output from node 'answer_node':
---
Why couldn't the bicycle stand up by itself?

Because it was two tired!

---

Output from node 'user_node':
---
Anil, Why couldn't the bicycle stand up by itself?

Because it was two tired!

---

Output from node '__end__':
---
Anil, Why couldn't the bicycle stand up by itself?

Because it was two tired!

---

We’ve seen how LangGraph lets us define basic interactive components and the flow between Python functions.

Stay tuned for Part 2, where we’ll dive into enhancing our basic sample into a more complex system with advanced LangGraph agentic loops. We’ll explore how to evolve our example into an interactive agent capable of maintaining state and context.

--


@Anil's Notes

Thoughts I add here are my personal notes and learnings only