
Redis Demo Series

Building your first RAG app with RedisVL

A hands-on walkthrough to help you build your first RAG pipeline using RedisVL

Large language models (LLMs) are powerful, but without access to real-time or proprietary data, they can produce inaccurate or outdated results.

That’s where retrieval-augmented generation (RAG) comes in. RAG improves the accuracy of AI responses by combining LLMs with relevant, external information. And with RedisVL, an open source Python client that makes it easy to add vector search to your GenAI apps, building your first RAG pipeline has never been easier.
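To give a flavor of the RedisVL side, a vector index is described by a small schema before any documents are loaded. The sketch below shows roughly what such a schema can look like as a Python dict; the index name, field names, and dimensions are illustrative assumptions, not details from the webinar:

```python
# Illustrative RedisVL-style index schema (names and values here are
# assumptions, not taken from the webinar). In RedisVL, a dict like
# this is typically passed to SearchIndex.from_dict(...) and the index
# is then created against a running Redis instance.
schema = {
    "index": {
        "name": "docs",   # hypothetical index name
        "prefix": "doc",  # key prefix for stored documents
    },
    "fields": [
        # raw chunk text, searchable as full text
        {"name": "content", "type": "text"},
        # the embedding vector used for similarity search
        {
            "name": "embedding",
            "type": "vector",
            "attrs": {
                "dims": 384,  # e.g. a small sentence-transformer output size
                "distance_metric": "cosine",
                "algorithm": "flat",
            },
        },
    ],
}
```

Defining the schema up front is what lets the client validate documents and run vector queries against the right field later on.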

We’re going live to build a complete RAG app from scratch using Redis as the vector database. 

Follow along in a Google Colab notebook as we:

  • Preprocess a real financial document (Nike’s 10-K)
  • Generate vector embeddings using sentence transformers
  • Store and query those embeddings with RedisVL
  • Retrieve relevant context and generate grounded responses using OpenAI
  • See exactly how each step maps to a real RAG architecture

By the end of the webinar, you’ll have a working GenAI app, a ready-to-fork notebook, and a clearer understanding of how Redis powers production-grade RAG pipelines.

Event Speaker

Rini Vasan

Product Marketing Manager
Redis

Sign up

Dates for Building your first RAG app with RedisVL

  • May 7, 2025 at 9:00 am PT
  • May 8, 2025 at 10:00 am BST