Meet Redis LangCache: Semantic caching for AI

About

Stop asking the same questions. Start building faster, cheaper AI apps.
Redis’ Jen Agarwal and Amit Lamba, CEO of Mangoes.ai, introduce Redis LangCache, a fully managed semantic cache that stores LLM responses and reuses them for semantically similar queries, cutting token costs by up to 70%, delivering responses 15x faster, and simplifying scaling for chatbots, agents, and retrieval-based apps.

52 minutes
Key topics
  1. Reduce token usage by up to 70%
  2. Deliver responses 15x faster
  3. Improve user experience without added complexity
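The mechanism behind these numbers is a simple pattern: embed each prompt, and when a new prompt lands close enough to a previously cached one in vector space, return the stored response instead of calling the model again. The sketch below shows that flow in miniature; the embed and call_llm stubs and the 0.90 threshold are illustrative stand-ins, not LangCache's actual API.

import numpy as np

SIMILARITY_THRESHOLD = 0.90  # illustrative cutoff; tune per workload

# In-memory list of (prompt_embedding, cached_response) pairs.
# LangCache manages an equivalent store as a hosted service.
cache: list[tuple[np.ndarray, str]] = []

def embed(text: str) -> np.ndarray:
    # Stand-in encoder: a real embedding model maps paraphrases to
    # nearby vectors; this hash-based stub only matches identical text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(384)
    return vec / np.linalg.norm(vec)

def call_llm(prompt: str) -> str:
    # Stand-in for the expensive LLM call the cache avoids repeating.
    return f"(model output for: {prompt})"

def cached_completion(prompt: str) -> str:
    query_vec = embed(prompt)
    # Cache hit: a stored prompt is close enough in vector space,
    # so reuse its response and spend no tokens.
    for vec, response in cache:
        if float(np.dot(query_vec, vec)) >= SIMILARITY_THRESHOLD:
            return response
    # Cache miss: pay for one LLM call, then store the result so
    # semantically similar prompts can reuse it.
    response = call_llm(prompt)
    cache.append((query_vec, response))
    return response

print(cached_completion("How do I reset my password?"))  # miss: calls the model
print(cached_completion("How do I reset my password?"))  # hit: served from cache

The threshold is the key tuning knob: set it too low and unrelated questions start sharing answers; set it too high and paraphrases miss the cache and fall through to the model.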
Speakers
Jen Agarwal
Product Leader, Redis

Amit Lamba
CEO, Mangoes.ai

Latest content

Redis Released 2024 keynote: The future of fast starts here
1 hour 21 minutes

What is hybrid search?
7 minutes

Understanding vector databases with real-world examples
15 minutes

Get started with Redis today

Speak to a Redis expert and learn more about enterprise-grade Redis.