Today, we’re excited to introduce langgraph-checkpoint-redis, a new integration bringing Redis’ powerful memory capabilities to LangGraph. This collaboration gives developers the tools to build more effective AI agents with persistent memory across conversations and sessions.
LangGraph is an open-source framework for building stateful, agentic workflows with LLMs. With this Redis integration, developers can now leverage both thread-level persistence and cross-thread memory to create agents that remember context, learn from experiences, and make better decisions over time.
When building effective AI agents, memory is crucial. The most successful implementations use simple, composable patterns to manage memory, and Redis is a natural fit for this role.
The langgraph-checkpoint-redis package offers two core capabilities that map directly to memory patterns in agentic systems:
RedisSaver and AsyncRedisSaver provide thread-level persistence, allowing agents to maintain continuity across multiple interactions within the same conversation thread:
RedisStore and AsyncRedisStore enable cross-thread memory, letting agents access and store information that persists across different conversation threads:
Let’s see how easy it is to add memory to a simple chatbot using Redis persistence:
```python
from typing import Literal

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.redis import RedisSaver

# Define a simple tool
@tool
def get_weather(city: Literal["nyc", "sf"]):
    """Use this to get weather information."""
    if city == "nyc":
        return "It might be cloudy in nyc"
    elif city == "sf":
        return "It's always sunny in sf"
    else:
        raise AssertionError("Unknown city")

# Set up model and tools
tools = [get_weather]
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Create Redis persistence
REDIS_URI = "redis://localhost:6379"
with RedisSaver.from_conn_string(REDIS_URI) as checkpointer:
    # Initialize Redis indices (only needed once)
    checkpointer.setup()

    # Create agent with memory
    graph = create_react_agent(model, tools=tools, checkpointer=checkpointer)

    # Use the agent with a specific thread ID to maintain conversation state
    config = {"configurable": {"thread_id": "user123"}}
    res = graph.invoke({"messages": [("human", "what's the weather in sf")]}, config)
```
This simple setup allows the agent to remember the conversation history for “user123” across multiple interactions. The thread ID acts as a conversation identifier, and all state is stored in Redis for fast retrieval.
For more advanced use cases, you might want agents to remember information about users across different conversation threads. Here’s a simplified example using RedisStore for cross-thread memory:
```python
import uuid

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.redis import RedisSaver
from langgraph.graph import START, MessagesState, StateGraph
from langgraph.store.redis import RedisStore
from langgraph.store.base import BaseStore

# Set up model
model = ChatAnthropic(model="claude-3-5-sonnet-20240620")

# Function that uses the store to access and save user memories
def call_model(state: MessagesState, config: RunnableConfig, *, store: BaseStore):
    user_id = config["configurable"]["user_id"]
    namespace = ("memories", user_id)

    # Retrieve relevant memories for this user
    memories = store.search(namespace, query=str(state["messages"][-1].content))
    info = "\n".join([d.value["data"] for d in memories])
    system_msg = f"You are a helpful assistant talking to the user. User info: {info}"

    # Store new memories if the user asks to remember something
    last_message = state["messages"][-1]
    if "remember" in last_message.content.lower():
        memory = "User name is Bob"
        store.put(namespace, str(uuid.uuid4()), {"data": memory})

    # Generate response
    response = model.invoke(
        [{"role": "system", "content": system_msg}] + state["messages"]
    )
    return {"messages": response}

# Build the graph
builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")

# Initialize Redis persistence and store
REDIS_URI = "redis://localhost:6379"
with RedisSaver.from_conn_string(REDIS_URI) as checkpointer:
    checkpointer.setup()
    with RedisStore.from_conn_string(REDIS_URI) as store:
        store.setup()

        # Compile graph with both checkpointer and store
        graph = builder.compile(checkpointer=checkpointer, store=store)
```
When using this agent, we can maintain both conversation state (thread-level) and user information (cross-thread) simultaneously:
```python
# First conversation - tell the agent to remember something
config = {"configurable": {"thread_id": "convo1", "user_id": "user123"}}
response = graph.invoke(
    {"messages": [{"role": "user", "content": "Hi! Remember: my name is Bob"}]},
    config,
)

# Second conversation - different thread but same user
new_config = {"configurable": {"thread_id": "convo2", "user_id": "user123"}}
response = graph.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]},
    new_config,
)
# Agent will respond with "Your name is Bob"
```
In their article on memory in agentic workflows, Turing Post categorizes AI memory into long-term and short-term components. Redis supports both memory types well: checkpointers cover short-term, within-thread state, while the store holds long-term memories that span threads.
Under the hood, langgraph-checkpoint-redis uses Redis data structures and the search indices created by `setup()` to store and retrieve agent memory efficiently.
The implementation includes both synchronous and asynchronous APIs to support different application architectures:
```python
# Synchronous API
with RedisSaver.from_conn_string(REDIS_URI) as checkpointer:
    checkpointer.setup()
    # ... use synchronously ...

# Asynchronous API
async with AsyncRedisSaver.from_conn_string(REDIS_URI) as checkpointer:
    await checkpointer.asetup()
    # ... use asynchronously ...
```
The package provides multiple implementations to suit different needs: synchronous variants (RedisSaver, RedisStore) and asynchronous variants (AsyncRedisSaver, AsyncRedisStore).
Successful agentic systems are built with simplicity and clarity, not complexity. The Redis integration for LangGraph adopts this philosophy by providing straightforward, performant memory solutions that can be composed into sophisticated agent architectures.
Whether you’re building a simple chatbot that remembers conversation context or a complex agent that maintains user profiles across multiple interactions, langgraph-checkpoint-redis gives you the building blocks to make it happen efficiently and reliably.
To start using Redis with LangGraph today:
```shell
pip install langgraph-checkpoint-redis
```
Check out the GitHub repository at https://github.com/redis-developer/langgraph-redis for comprehensive documentation and examples.
By combining LangGraph’s agentic workflows with Redis’s powerful memory capabilities, you can build AI agents that feel more natural, responsive, and personalized. The agent’s ability to remember and learn from interactions makes all the difference between a mechanical tool and an assistant that truly understands your needs over time.