
Webinar

Agentic RAG: Using Semantic Caching for Speed and Cost Optimization

Use the latest developments with agents to enhance chatbots.

More and more companies are building their own virtual assistants using agents and Retrieval-Augmented Generation (RAG) to improve responses from Large Language Models (LLMs). This approach grounds answers in fact while minimizing security and data-leakage concerns. Many companies are still in the exploratory phase, and architects and developers have questions about the best ways to structure virtual assistants and their data flows. Building these apps for production means weighing performance, quality, flexibility, and cost. With Redis and LlamaIndex, customers can build faster, more accurate chatbots at scale while optimizing cost.

Apr 30, 2024


Join this session to learn best practices for:

  • Architecting virtual assistant apps
  • Accelerating document ingestion while minimizing cost
  • Improving responses using AI agents
  • Optimizing response time and cost with semantic caching

Speakers

Tyler Hutcherson
Senior Applied AI Engineer, Redis

Laurie Voss
VP of Developer Relations, LlamaIndex
