
In-person workshop

Reduce LLM costs with Redis

May 30, 2026 | 11:00 AM – 2:00 PM IST
One2N, Pune, India
Register now

Why attend?

In this hands-on workshop, you’ll build an AI support application that uses Redis to make responses faster, cheaper, and more reliable. You’ll implement semantic caching with RedisVL to reuse answers for similar questions, and add semantic routing to direct queries to the right tool or knowledge source. By the end, you’ll understand practical patterns to cut token usage, improve latency, and scale GenAI applications more efficiently using Redis.
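To make the semantic-caching idea concrete: instead of matching queries by exact text, the cache compares query embeddings and reuses a stored answer when similarity clears a threshold. RedisVL provides this as a production-grade `SemanticCache` backed by Redis vector search; the toy class, names, and threshold below are purely illustrative of the underlying pattern, not the RedisVL API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class ToySemanticCache:
    """In-memory sketch of a semantic cache (hypothetical, for illustration).

    store() records (embedding, answer) pairs; check() returns a cached
    answer whose embedding is within the similarity threshold, else None.
    """

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer)

    def check(self, embedding):
        best_answer, best_score = None, self.threshold
        for emb, answer in self.entries:
            score = cosine(embedding, emb)
            if score >= best_score:
                best_answer, best_score = answer, score
        return best_answer  # None means a cache miss -> call the LLM

    def store(self, embedding, answer):
        self.entries.append((embedding, answer))
```

A near-duplicate question ("how do I reset my password" vs. "how can I reset my password") produces a nearby embedding, so `check()` returns the cached answer and the LLM call is skipped, which is where the token and latency savings come from.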

What to expect:

  • Hands-on build: create an AI support app end-to-end
  • Implement semantic caching with RedisVL to cut LLM calls
  • Add semantic routing to send queries to the right tool/flow
  • Practical tips to reduce latency + token costs in production
  • Live guidance + Q&A with Redis experts
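The semantic-routing step in the list above works on the same embedding-similarity principle: each route (a tool, flow, or knowledge source) is described by reference embeddings, and an incoming query is sent to the closest route. RedisVL ships a `SemanticRouter` for this over Redis; the sketch below, including the route names and vectors, is a hypothetical illustration of the idea only.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class ToySemanticRouter:
    """In-memory sketch of semantic routing (hypothetical, for illustration).

    routes maps a route name to reference embeddings for that route.
    route() returns the best-matching route name, or None if nothing
    clears the similarity threshold (i.e. fall back to a default flow).
    """

    def __init__(self, routes, threshold=0.5):
        self.routes = routes
        self.threshold = threshold

    def route(self, query_embedding):
        best_name, best_score = None, self.threshold
        for name, refs in self.routes.items():
            for ref in refs:
                score = cosine(query_embedding, ref)
                if score >= best_score:
                    best_name, best_score = name, score
        return best_name
```

Keeping routing decisions in embedding space means new intents can be added by storing more reference vectors, without retraining a classifier or prompting the LLM to pick a tool.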

Register here

Get started with Redis today

Speak to a Redis expert and learn more about enterprise-grade Redis today.