Chunking Strategies Explained

About

Building LLM apps that actually work starts with better chunking. Ricardo Ferreira breaks down how smart text segmentation helps embeddings return answers that make sense. Learn how to structure content for context, reduce noise, and improve vector search accuracy. Whether you’re building semantic search, retrieval-augmented generation (RAG) pipelines, or conversational agents, mastering chunking is the key to more reliable results.

9 minutes
Key topics
  1. Why chunking is critical for precise LLM responses
  2. How to balance chunk size for accuracy and context
  3. Fixed-size, content-aware, recursive, and semantic chunking strategies
  4. Practical LangChain examples to test each approach (see the sketch after this list)
  5. How to find the optimal chunk size for your use case
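
To make the strategies concrete before you watch, here is a minimal sketch of fixed-size and recursive chunking using LangChain’s `langchain-text-splitters` package. The 500-character chunk size, 50-character overlap, and `document.txt` source file are illustrative assumptions, not values taken from the talk:

```python
# Minimal sketch: fixed-size vs. recursive chunking with LangChain.
# Assumes `pip install langchain-text-splitters` and a local document.txt.
from langchain_text_splitters import (
    CharacterTextSplitter,
    RecursiveCharacterTextSplitter,
)

with open("document.txt") as f:  # hypothetical source text
    text = f.read()

# Fixed-size chunking: cut on a single separator into ~500-character pieces;
# the 50-character overlap preserves some context across chunk boundaries.
fixed_splitter = CharacterTextSplitter(
    separator=" ", chunk_size=500, chunk_overlap=50
)
fixed_chunks = fixed_splitter.split_text(text)

# Recursive chunking: try paragraph breaks first, then line breaks, then
# spaces, only cutting harder when a chunk still exceeds the size limit.
recursive_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50
)
recursive_chunks = recursive_splitter.split_text(text)

print(f"fixed-size: {len(fixed_chunks)} chunks")
print(f"recursive:  {len(recursive_chunks)} chunks")
```

Comparing the two outputs side by side is a quick way to see the difference: recursive splitting tends to keep paragraphs and sentences intact, while fixed-size splitting can cut mid-sentence.
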
Speakers
Ricardo Ferreira

Principal Developer Advocate

Latest content

Meet Redis LangCache: Semantic caching for AI
52 minutes

Redis Released 2024 keynote: The future of fast starts here
1 hour 21 minutes

What is hybrid search?
7 minutes
