
Webinar

Nordics AI series: Redis beyond the cache

May 27, 2026, 11:00 AM – 11:45 AM CET
Register now

AI isn’t slowing down, and the Nordics are accelerating.

Discover how to evolve your AI stack from simple caching to a real-time context engine. In this session, we’ll explore how Redis enables semantic caching and agent memory, helping teams build faster, more efficient AI applications.

We’ll focus on practical patterns and architectures used globally, and how they apply to teams building AI systems across the Nordics.

Highlights

Semantic Caching for AI Apps

  • The latency & cost problem: Why LLM calls are slow and expensive, and why traditional caching can’t capture intent.
  • Matching on meaning: How semantic caching uses vector similarity to resolve queries based on intent, not just exact strings.
  • Performance gains: Real-world results showing up to 15x faster responses and 30%+ lower API costs.
  • Live demo: See semantic caching intercepting and resolving redundant AI queries in real time.
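The idea behind semantic caching can be sketched in a few lines: instead of keying the cache on exact query strings, store each response under the query's embedding vector and return a hit when a new query's embedding is similar enough. This is a minimal illustration, not Redis's implementation; the `SemanticCache` class, the 0.9 threshold, and the hand-written 3-dimensional embeddings are all stand-ins for a real embedding model and vector index.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class SemanticCache:
    """Toy semantic cache: stores (embedding, response) pairs and
    returns a cached response when a new query's embedding is close
    enough in meaning (cosine similarity) to a stored one."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, embedding):
        """Return the best cached response above the similarity
        threshold, or None on a cache miss."""
        best, best_sim = None, -1.0
        for stored_emb, response in self.entries:
            sim = cosine_similarity(embedding, stored_emb)
            if sim > best_sim:
                best, best_sim = response, sim
        return best if best_sim >= self.threshold else None

    def put(self, embedding, response):
        self.entries.append((embedding, response))

# Two differently worded questions with nearly identical embeddings
# resolve to the same cached answer; an unrelated query misses.
cache = SemanticCache(threshold=0.9)
cache.put([1.0, 0.0, 0.0], "Paris")              # "What is the capital of France?"
print(cache.get([0.95, 0.05, 0.0]))               # near-duplicate intent -> "Paris"
print(cache.get([0.0, 1.0, 0.0]))                 # unrelated intent -> None
```

In production the linear scan would be replaced by an approximate nearest-neighbor search over a vector index, which is what makes the pattern fast at scale.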

Agent Memory Server for AI Apps

  • The stateless problem: Why every LLM conversation starts from zero and why "context window stuffing" is inefficient and costly.
  • Short-term vs. long-term memory: How dedicated memory layers give agents continuity and the ability to learn from past interactions.
  • Consistency at scale: Real-world results showing coherent agent behavior across sessions with significantly lower token consumption.
  • Live demo: Watch an agent recall user preferences and previous decisions across multiple disconnected sessions.
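The short-term vs. long-term split described above can be illustrated with a small sketch: a bounded buffer holds the current conversation, while a durable store keeps facts that survive session resets. This is a toy model for intuition only; the `AgentMemory` class and its method names are hypothetical, and in a real system the long-term store would live in something like Redis rather than a Python dict.

```python
from collections import deque

class AgentMemory:
    """Toy two-tier memory layer for an agent:
    - short_term: recent conversation turns, capped and reset per session
    - long_term: durable facts (e.g. user preferences) that persist
      across sessions, avoiding repeated context-window stuffing."""

    def __init__(self, short_term_size=4):
        self.short_term = deque(maxlen=short_term_size)
        self.long_term = {}

    def add_turn(self, role, text):
        """Record a conversation turn; oldest turns fall off the buffer."""
        self.short_term.append((role, text))

    def remember(self, key, value):
        """Promote a fact to long-term memory."""
        self.long_term[key] = value

    def recall(self, key):
        return self.long_term.get(key)

    def new_session(self):
        """Short-term context resets; long-term memory persists."""
        self.short_term.clear()

    def build_context(self):
        """Assemble a compact prompt prefix: durable facts plus only
        the recent turns, instead of the full conversation history."""
        facts = "; ".join(f"{k}={v}" for k, v in self.long_term.items())
        turns = "\n".join(f"{r}: {t}" for r, t in self.short_term)
        return f"[facts] {facts}\n{turns}"

# A preference learned in one session is still available in the next,
# even though the conversational buffer has been cleared.
memory = AgentMemory()
memory.add_turn("user", "Svara på svenska, tack.")
memory.remember("language", "Swedish")
memory.new_session()
print(memory.recall("language"))   # "Swedish"
print(len(memory.short_term))      # 0
```

The payoff is the token math: the agent sends a few durable facts plus recent turns, rather than replaying the entire history on every call.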

Speakers

Redis

Samuel Agbede

Developer Advocate

Register now

Join the Nordics AI community and access knowledge to enhance your AI capabilities.

Get started with Redis today

Speak to a Redis expert and learn more about enterprise-grade Redis today.