Building with LangChain

How to build a semantic cache with LangChain

About

Speed is everything when building with LLMs, but so is memory. Ricardo Ferreira shows you how to make your AI app faster and smarter with a semantic cache built on Redis. Go beyond simple lookups: learn how to reuse answers based on meaning, not just matching text. See how Redis and LangChain work together to serve instant, intelligent responses powered by OpenAI.

19 minutes
Key topics
  1. Build a semantic cache with Redis and LangChain to speed up LLM responses
  2. Reuse cached answers by meaning, not just exact text match, to make AI apps more efficient
Speakers
Ricardo Ferreira

Principal Developer Advocate

