Webinar

Real-time RAG: How to Augment LLMs with Redis and Amazon Bedrock

This talk introduces Retrieval Augmented Generation (RAG) and demonstrates the benefits of pairing Redis Enterprise as a vector database with Amazon Bedrock.

Jan. 21, 2024

Large Language Models (LLMs), such as GPT-4, can leverage vector embeddings and vector databases to address the challenges posed by evolving data. Embeddings, combined with a vector database or search algorithm, give LLMs access to an up-to-date and ever-expanding knowledge base, so they remain capable of generating accurate and contextually appropriate outputs even as the underlying information changes. This approach is known as Retrieval Augmented Generation (RAG).
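The retrieval step described above can be sketched in a few lines of Python. This is a minimal, self-contained illustration of the concept only: a production setup would use a real embedding model and Redis vector search rather than the toy hand-made vectors and in-memory knowledge base assumed here, and all document texts and vectors below are invented for the example.

```python
# Minimal sketch of the RAG retrieval step: embed the question,
# find the most similar stored passages, and prepend them as context.
# (Toy vectors stand in for a real embedding model; a real deployment
# would store and search these embeddings in Redis.)
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy knowledge base: (passage, embedding) pairs — illustrative only.
KNOWLEDGE_BASE = [
    ("Redis supports vector similarity search.", [0.9, 0.1, 0.0]),
    ("Amazon Bedrock hosts foundation models.",  [0.1, 0.9, 0.0]),
    ("LLMs can hallucinate without grounding.",  [0.2, 0.2, 0.9]),
]

def retrieve(query_embedding, k=2):
    """Return the k passages most similar to the query embedding."""
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_embedding):
    """Augment the user's question with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query_embedding))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

In a Redis-backed version, `KNOWLEDGE_BASE` and the similarity ranking would be replaced by a vector index and a KNN query, with the generated prompt then sent to a model hosted on Amazon Bedrock.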

Speakers

Sam Partee

Redis

Principal Applied AI Engineer
