{
  "id": "rust",
  "title": "Redis cache-aside with redis-rs",
  "url": "https://redis.io/docs/latest/develop/use-cases/cache-aside/rust/",
  "summary": "Implement a Redis cache-aside layer in Rust with redis-rs",
  "tags": [
    "docs",
    "develop",
    "stack",
    "oss",
    "rs",
    "rc"
  ],
  "last_updated": "2026-05-12T09:07:59-04:00",
  "children": [],
  "page_type": "content",
  "content_hash": "7f45959b016a7cb81122f181b7a8b9d8528d5d6d03a28e5f49495b99b64b4812",
  "sections": [
    {
      "id": "overview",
      "title": "Overview",
      "role": "overview",
      "text": "This guide shows you how to implement a Redis cache-aside layer in Rust with the [redis](https://crates.io/crates/redis) crate (redis-rs). It includes a small local web server built on [axum](https://github.com/tokio-rs/axum) so you can see cache hits, misses, invalidation on write, and stampede protection in action.\n\nCache-aside is one of the most common Redis use cases for read-heavy applications. Instead of querying the primary database on every request, the application checks Redis first and only falls back to the primary on a miss. The result is written back to Redis with a TTL so the next read is served from memory.\n\nThat gives you:\n\n* Sub-millisecond reads for the hot working set\n* Bounded staleness — every entry expires within a known window\n* Reduced primary database load proportional to hit rate\n* Field-level updates without re-serializing the full record\n* Protection against cache stampedes when popular keys expire under load\n\nIn this example, each cached product is stored as a Redis hash under a key like `cache:product:{id}`. The hash holds the product fields (`id`, `name`, `price_cents`, `stock`) and the key has a TTL so stale data is bounded automatically."
    },
    {
      "id": "how-it-works",
      "title": "How it works",
      "role": "content",
      "text": "The flow on every read looks like this:\n\n1. The application calls `cache.get(id, |key| async { primary.read(&key).await })`\n2. The helper runs `HGETALL` against `cache:product:{id}`\n3. On a hit, the cached hash is returned directly\n4. On a miss, the helper acquires a Lua-backed single-flight lock and awaits the loader to fetch from the primary\n5. The helper writes the result back to Redis with `HSET` plus `EXPIRE` and releases the lock\n6. Concurrent tasks that fail to acquire the lock wait briefly for the cache to populate, then return that value instead of issuing their own primary read\n\nOn a write, the application updates the primary and then deletes the cache key, so the next read repopulates from the new source value."
    },
    {
      "id": "the-cache-aside-helper",
      "title": "The cache-aside helper",
      "role": "content",
      "text": "The `RedisCache` struct wraps the cache-aside operations\n([source](cache.rs)):\n\n[code example]"
    },
    {
      "id": "data-model",
      "title": "Data model",
      "role": "content",
      "text": "Each cached product is stored in a Redis hash:\n\n[code example]\n\nThe implementation uses:\n\n* [`HGETALL`](https://redis.io/docs/latest/commands/hgetall) to read the cached record\n* [`HSET`](https://redis.io/docs/latest/commands/hset) plus [`EXPIRE`](https://redis.io/docs/latest/commands/expire) to repopulate after a miss\n* [`DEL`](https://redis.io/docs/latest/commands/del) to invalidate on writes\n* [`TTL`](https://redis.io/docs/latest/commands/ttl) to surface remaining staleness in the demo UI\n* [`EVAL`](https://redis.io/docs/latest/commands/eval) for the Lua single-flight lock that prevents stampedes\n* [`WATCH`](https://redis.io/docs/latest/commands/watch)/[`MULTI`](https://redis.io/docs/latest/commands/multi)/[`EXEC`](https://redis.io/docs/latest/commands/exec) for the conditional field update path"
    },
    {
      "id": "cache-aside-reads",
      "title": "Cache-aside reads",
      "role": "content",
      "text": "The `get()` method runs `HGETALL` on the cache key first. On a hit it returns the cached hash and increments the hit counter. On a miss, it delegates to a single-flight loader:\n\n[code example]\n\nThe returned `CacheResult` includes the measured Redis round-trip time so the demo UI can show the latency difference between a hit and a miss."
    },
    {
      "id": "stampede-protection-with-a-lua-lock",
      "title": "Stampede protection with a Lua lock",
      "role": "content",
      "text": "When a popular key expires, every concurrent reader observes the miss at the same instant. Without coordination, all of them would query the primary and overwrite the cache redundantly — a *cache stampede*.\n\nThe helper uses a tiny Lua script to acquire a short-lived lock atomically. Only the task that wins the `SET NX` becomes the primary loader; the rest poll the cache briefly and return the value the lock holder writes:\n\n[code example]\n\nA second script releases the lock only if the caller still owns it, so a lock that timed out and was re-acquired by someone else cannot be released by mistake:\n\n[code example]\n\nThe Rust side wraps both scripts in `redis::Script` and runs them on every miss:\n\n[code example]\n\nThe unique `token` per caller is what makes the release script safe — only the task that actually holds the lock can release it."
    },
    {
      "id": "invalidation-on-write",
      "title": "Invalidation on write",
      "role": "content",
      "text": "When a write hits the primary, the application invalidates the cache key. The next read pulls fresh data from the primary:\n\n[code example]\n\nThis is the simplest and safest pattern: never try to keep the cache and primary in sync directly, just delete the cache entry and let the next read repopulate it."
    },
    {
      "id": "field-level-updates",
      "title": "Field-level updates",
      "role": "content",
      "text": "Because each record is stored as a hash, the cache helper can also update a single field in place without re-serializing the full record. The update only writes if the entry is already cached, so a partial record can never appear in Redis:\n\n[code example]\n\nThis is useful for hot fields that change more often than the rest of the record (a stock counter, a view count) and would otherwise force a full reload."
    },
    {
      "id": "hit-miss-accounting",
      "title": "Hit/miss accounting",
      "role": "content",
      "text": "The helper keeps in-process counters for hits, misses, and stampedes that were suppressed by the single-flight lock. The demo UI surfaces these so you can see the cache absorbing load:\n\n[code example]\n\nIn production you would emit these as `metrics`/`prometheus` counters or push them into your observability stack rather than holding them as `AtomicU64`s in process memory."
    },
    {
      "id": "prerequisites",
      "title": "Prerequisites",
      "role": "content",
      "text": "Before running the demo, make sure that:\n\n* Redis is running and accessible. By default, the demo connects to `localhost:6379`.\n* The Rust toolchain is installed (rustup, cargo).\n* Dependencies are declared in `Cargo.toml`:\n\n[code example]\n\nIf your Redis server is running elsewhere, start the demo with `--redis-host` and `--redis-port`."
    },
    {
      "id": "running-the-demo",
      "title": "Running the demo",
      "role": "content",
      "text": "A local demo server is included to show the cache-aside layer in action\n([source](demo_server.rs)):\n\n[code example]\n\nThe demo server uses Tokio's async runtime plus axum for HTTP handling:\n\n* [`tokio`](https://crates.io/crates/tokio) for the async runtime\n* [`axum`](https://crates.io/crates/axum) for the web server\n* [`redis::aio::ConnectionManager`](https://docs.rs/redis/latest/redis/aio/struct.ConnectionManager.html) for shared multiplexed connections\n* [`redis::Script`](https://docs.rs/redis/latest/redis/struct.Script.html) for the Lua single-flight scripts\n\nIt exposes a small interactive page where you can:\n\n* Read a product through the cache and see whether it was a hit or a miss\n* Compare the measured Redis round-trip against the simulated primary read latency\n* Watch the cache TTL count down between requests\n* Update a field on the primary and see the cache invalidate automatically\n* Run a stampede test that fires many concurrent reads at a freshly-invalidated key and confirms only one of them reaches the primary\n* Reset the hit/miss counters at any time\n\nAfter starting the server, visit `http://localhost:8080`."
    },
    {
      "id": "the-mock-primary-store",
      "title": "The mock primary store",
      "role": "content",
      "text": "To make the demo self-contained, the example includes a `MockPrimaryStore` that stands in for a slow disk-backed database\n([source](primary.rs)):\n\n[code example]\n\nEvery call to `read()` sleeps for `read_latency_ms` so the difference between a cache hit and a miss is obvious in the UI. The store also tracks the total number of primary reads, which the stampede test uses to confirm that single-flight is working — for N concurrent readers against a cold key, you should see exactly one primary read.\n\nIn a real application this would be replaced by a SQLx query, an HTTP call to a downstream service, or any other slow-but-authoritative source."
    },
    {
      "id": "production-usage",
      "title": "Production usage",
      "role": "content",
      "text": "This guide uses a deliberately small local demo so you can focus on the cache-aside pattern. In production, you will usually want to harden several aspects of it."
    },
    {
      "id": "choose-a-ttl-that-matches-your-staleness-tolerance",
      "title": "Choose a TTL that matches your staleness tolerance",
      "role": "content",
      "text": "The TTL is the upper bound on how long a stale value can be served. Shorter TTLs mean lower hit rates and more primary load; longer TTLs mean higher hit rates and more stale reads between writes. Pick the value that matches your business tolerance for stale data, and combine it with explicit invalidation on writes for the cases where you cannot tolerate any staleness."
    },
    {
      "id": "invalidate-don-t-try-to-keep-the-cache-in-sync",
      "title": "Invalidate, don't try to keep the cache in sync",
      "role": "content",
      "text": "When the underlying record changes, delete the cache key rather than rewriting it. Cache-aside is robust precisely because it never assumes the cache holds the latest value — the next read always re-fetches from the primary on a miss."
    },
    {
      "id": "handle-missing-records-explicitly",
      "title": "Handle missing records explicitly",
      "role": "content",
      "text": "In this demo, a missing record returns `None` and nothing is cached. In a real system you may want to cache \"not found\" sentinels with a short TTL to absorb load from probing for non-existent IDs, while making sure the sentinel TTL is shorter than the positive cache entry so a newly-created record becomes visible quickly."
    },
    {
      "id": "tune-the-single-flight-lock-ttl",
      "title": "Tune the single-flight lock TTL",
      "role": "content",
      "text": "The lock TTL needs to be longer than the worst-case primary read latency so a slow loader does not lose the lock midway. The unique token in `RELEASE_LOCK_SCRIPT` ensures the original caller does not delete someone else's lock if its lock has expired."
    },
    {
      "id": "connectionmanager-vs-multiplexedconnection",
      "title": "`ConnectionManager` vs `MultiplexedConnection`",
      "role": "content",
      "text": "The demo uses `redis::aio::ConnectionManager`, which transparently reconnects on transient failures and is cheap to clone for use across many concurrent tokio tasks. For very high-throughput workloads you may also want to look at `bb8-redis` or `deadpool-redis` for explicit pooling."
    },
    {
      "id": "namespace-cache-keys-in-shared-redis-deployments",
      "title": "Namespace cache keys in shared Redis deployments",
      "role": "content",
      "text": "If multiple applications share a Redis deployment, prefix cache keys with the application name (`cache:billing:product:{id}`) so different services cannot clobber each other's entries."
    },
    {
      "id": "inspect-cached-entries-directly-in-redis",
      "title": "Inspect cached entries directly in Redis",
      "role": "content",
      "text": "When testing or troubleshooting, inspect the stored cache key directly to confirm the application is writing the fields and TTL you expect:\n\n[code example]"
    },
    {
      "id": "learn-more",
      "title": "Learn more",
      "role": "related",
      "text": "* [redis crate on crates.io](https://crates.io/crates/redis) - Install and use the Rust Redis client\n* [SET command](https://redis.io/docs/latest/commands/set) - Set a string with TTL options (`EX`, `PX`, `NX`)\n* [HSET command](https://redis.io/docs/latest/commands/hset) - Write hash fields\n* [HGETALL command](https://redis.io/docs/latest/commands/hgetall) - Read every field of a hash\n* [EXPIRE command](https://redis.io/docs/latest/commands/expire) - Set key expiration in seconds\n* [DEL command](https://redis.io/docs/latest/commands/del) - Delete a key on invalidation\n* [Lua scripting](https://redis.io/docs/latest/develop/programmability/eval-intro) - Atomic single-flight locks and stampede mitigation"
    }
  ],
  "examples": [
    {
      "id": "the-cache-aside-helper-ex0",
      "language": "rust",
      "code": "use redis::aio::ConnectionManager;\nuse redis::Client;\nuse std::sync::Arc;\n\nmod cache;\nmod primary;\n\nuse cache::{CacheConfig, RedisCache};\nuse primary::MockPrimaryStore;\n\n#[tokio::main]\nasync fn main() -> redis::RedisResult<()> {\n    let client = Client::open(\"redis://localhost:6379/\")?;\n    let conn = ConnectionManager::new(client).await?;\n    let primary = Arc::new(MockPrimaryStore::new(150));\n    let cache = RedisCache::new(conn, CacheConfig::default());\n\n    // Read through the cache.\n    let primary_clone = primary.clone();\n    let result = cache\n        .get(\"p-001\", |key| {\n            let p = primary_clone.clone();\n            async move { p.read(&key).await }\n        })\n        .await?;\n    println!(\"hit={} latency={:.2}ms\", result.hit, result.redis_latency_ms);\n\n    // Update a single field without rewriting the whole record.\n    cache.update_field(\"p-001\", \"stock\", \"41\").await?;\n\n    // Invalidate the cache key on a write to the primary.\n    primary.update_field(\"p-001\", \"price_cents\", \"699\");\n    cache.invalidate(\"p-001\").await?;\n    Ok(())\n}",
      "section_id": "the-cache-aside-helper"
    },
    {
      "id": "data-model-ex0",
      "language": "text",
      "code": "cache:product:p-001\n  id          = p-001\n  name        = Sourdough Loaf\n  price_cents = 650\n  stock       = 42",
      "section_id": "data-model"
    },
    {
      "id": "cache-aside-reads-ex0",
      "language": "rust",
      "code": "pub async fn get<F, Fut>(&self, id: &str, loader: F) -> RedisResult<CacheResult>\nwhere\n    F: Fn(String) -> Fut,\n    Fut: Future<Output = Option<HashMap<String, String>>>,\n{\n    let cache_key = self.cache_key(id);\n    let mut conn = self.conn.clone();\n\n    let started = Instant::now();\n    let cached: HashMap<String, String> = conn.hgetall(&cache_key).await?;\n    let redis_latency_ms = started.elapsed().as_secs_f64() * 1000.0;\n\n    if !cached.is_empty() {\n        self.stats.hits.fetch_add(1, Ordering::Relaxed);\n        return Ok(CacheResult { record: Some(cached), hit: true, redis_latency_ms });\n    }\n\n    self.stats.misses.fetch_add(1, Ordering::Relaxed);\n    let record = self.load_with_single_flight(id, &loader).await?;\n    Ok(CacheResult { record, hit: false, redis_latency_ms })\n}",
      "section_id": "cache-aside-reads"
    },
    {
      "id": "stampede-protection-with-a-lua-lock-ex0",
      "language": "lua",
      "code": "-- Acquire a short-lived lock with SET NX PX. Returns 1 on acquire, 0 otherwise.\nif redis.call('SET', KEYS[1], ARGV[1], 'NX', 'PX', ARGV[2]) then\n    return 1\nend\nreturn 0",
      "section_id": "stampede-protection-with-a-lua-lock"
    },
    {
      "id": "stampede-protection-with-a-lua-lock-ex1",
      "language": "lua",
      "code": "if redis.call('GET', KEYS[1]) == ARGV[1] then\n    return redis.call('DEL', KEYS[1])\nend\nreturn 0",
      "section_id": "stampede-protection-with-a-lua-lock"
    },
    {
      "id": "stampede-protection-with-a-lua-lock-ex2",
      "language": "rust",
      "code": "async fn load_with_single_flight<F, Fut>(\n    &self,\n    id: &str,\n    loader: &F,\n) -> RedisResult<Option<HashMap<String, String>>>\nwhere\n    F: Fn(String) -> Fut,\n    Fut: Future<Output = Option<HashMap<String, String>>>,\n{\n    let cache_key = self.cache_key(id);\n    let lock_key = self.lock_key(id);\n    let token = random_token();\n\n    let mut conn = self.conn.clone();\n    let acquired: i64 = self\n        .acquire_script\n        .key(&lock_key)\n        .arg(token.as_str())\n        .arg(self.cfg.lock_ttl_ms.to_string())\n        .invoke_async(&mut conn)\n        .await?;\n\n    if acquired == 1 {\n        let result = self.populate_after_lock(id, loader, &cache_key).await;\n        let _ = self\n            .release_script\n            .key(&lock_key)\n            .arg(token.as_str())\n            .invoke_async::<_, i64>(&mut conn)\n            .await;\n        return result;\n    }\n\n    self.stats.stampedes_suppressed.fetch_add(1, Ordering::Relaxed);\n    let deadline = Instant::now() + Duration::from_millis(self.cfg.lock_ttl_ms);\n    while Instant::now() < deadline {\n        tokio::time::sleep(Duration::from_millis(self.cfg.wait_poll_ms)).await;\n        let cached: HashMap<String, String> = conn.hgetall(&cache_key).await?;\n        if !cached.is_empty() {\n            return Ok(Some(cached));\n        }\n    }\n    Ok(loader(id.to_string()).await)\n}",
      "section_id": "stampede-protection-with-a-lua-lock"
    },
    {
      "id": "invalidation-on-write-ex0",
      "language": "rust",
      "code": "pub async fn invalidate(&self, id: &str) -> RedisResult<bool> {\n    let mut conn = self.conn.clone();\n    let n: i64 = conn.del(self.cache_key(id)).await?;\n    Ok(n == 1)\n}",
      "section_id": "invalidation-on-write"
    },
    {
      "id": "field-level-updates-ex0",
      "language": "rust",
      "code": "pub async fn update_field(&self, id: &str, field: &str, value: &str) -> RedisResult<bool> {\n    let cache_key = self.cache_key(id);\n    let mut conn = self.conn.clone();\n    loop {\n        redis::cmd(\"WATCH\").arg(&cache_key).query_async::<_, ()>(&mut conn).await?;\n        let exists: i64 = conn.exists(&cache_key).await?;\n        if exists == 0 {\n            redis::cmd(\"UNWATCH\").query_async::<_, ()>(&mut conn).await?;\n            return Ok(false);\n        }\n        let result: Option<((),)> = redis::pipe()\n            .atomic()\n            .hset(&cache_key, field, value)\n            .ignore()\n            .expire(&cache_key, self.cfg.ttl as i64)\n            .query_async(&mut conn)\n            .await?;\n        if result.is_some() {\n            return Ok(true);\n        }\n        // EXEC returned nil — WATCH detected a change. Retry.\n    }\n}",
      "section_id": "field-level-updates"
    },
    {
      "id": "hit-miss-accounting-ex0",
      "language": "rust",
      "code": "pub fn stats(&self) -> serde_json::Value {\n    let hits = self.stats.hits.load(Ordering::Relaxed);\n    let misses = self.stats.misses.load(Ordering::Relaxed);\n    let stampedes = self.stats.stampedes_suppressed.load(Ordering::Relaxed);\n    let total = hits + misses;\n    let hit_rate_pct = if total == 0 {\n        0.0\n    } else {\n        ((1000 * hits / total) as f64) / 10.0\n    };\n    serde_json::json!({\n        \"hits\": hits,\n        \"misses\": misses,\n        \"stampedes_suppressed\": stampedes,\n        \"hit_rate_pct\": hit_rate_pct,\n    })\n}",
      "section_id": "hit-miss-accounting"
    },
    {
      "id": "prerequisites-ex0",
      "language": "toml",
      "code": "[dependencies]\nredis = { version = \"0.24\", features = [\"tokio-comp\", \"aio\", \"connection-manager\"] }\ntokio = { version = \"1\", features = [\"full\"] }\naxum = \"0.7\"\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\nrand = \"0.8\"",
      "section_id": "prerequisites"
    },
    {
      "id": "running-the-demo-ex0",
      "language": "bash",
      "code": "cargo run --release -- --port 8080 --redis-host localhost --redis-port 6379",
      "section_id": "running-the-demo"
    },
    {
      "id": "the-mock-primary-store-ex0",
      "language": "rust",
      "code": "pub async fn read(&self, id: &str) -> Option<HashMap<String, String>> {\n    tokio::time::sleep(Duration::from_millis(self.read_latency_ms)).await;\n    self.reads.fetch_add(1, Ordering::Relaxed);\n    let map = self.records.lock().unwrap();\n    map.get(id).cloned()\n}",
      "section_id": "the-mock-primary-store"
    },
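    {
      "id": "choose-a-ttl-that-matches-your-staleness-tolerance-ex0",
      "language": "rust",
      "code": "// Sketch only: illustrative values, not recommendations. The field names match\n// the CacheConfig used elsewhere in this guide; tune each to your own workload.\nlet cfg = CacheConfig {\n    // Upper bound on staleness: a cached record can be at most this many\n    // seconds old before it expires and the next read refetches it.\n    ttl: 300,\n    // Single-flight lock lifetime in milliseconds.\n    lock_ttl_ms: 2_000,\n    // How often waiting readers re-check the cache while the lock is held.\n    wait_poll_ms: 25,\n};\nlet cache = RedisCache::new(conn, cfg);",
      "section_id": "choose-a-ttl-that-matches-your-staleness-tolerance"
    },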
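    {
      "id": "handle-missing-records-explicitly-ex0",
      "language": "rust",
      "code": "// Sketch only: this method is not part of the helper shown above. It caches a\n// \"not found\" sentinel under the same key with a short TTL; the field name\n// `__not_found` and `NEGATIVE_TTL_SECS` are illustrative choices.\nconst NEGATIVE_TTL_SECS: i64 = 5;\n\npub async fn cache_not_found(&self, id: &str) -> RedisResult<()> {\n    let cache_key = self.cache_key(id);\n    let mut conn = self.conn.clone();\n    // A single marker field keeps the sentinel distinguishable from real records.\n    let _: () = conn.hset(&cache_key, \"__not_found\", \"1\").await?;\n    let _: () = conn.expire(&cache_key, NEGATIVE_TTL_SECS).await?;\n    Ok(())\n}\n\n// Readers would then treat a hash containing `__not_found` as a known-missing\n// record instead of falling through to the primary on every probe.",
      "section_id": "handle-missing-records-explicitly"
    },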
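    {
      "id": "tune-the-single-flight-lock-ttl-ex0",
      "language": "rust",
      "code": "// Sketch only: `PRIMARY_P99_MS` is an illustrative figure for your primary's\n// worst-case read latency, not something measured in this demo.\nconst PRIMARY_P99_MS: u64 = 500;\n\nlet cfg = CacheConfig {\n    // Give the loader generous headroom so a slow primary read cannot outlive\n    // the lock and let a second loader through.\n    lock_ttl_ms: PRIMARY_P99_MS * 4,\n    ..CacheConfig::default()\n};",
      "section_id": "tune-the-single-flight-lock-ttl"
    },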
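    {
      "id": "connectionmanager-vs-multiplexedconnection-ex0",
      "language": "rust",
      "code": "// Sketch only: explicit pooling with deadpool-redis instead of cloning a\n// shared ConnectionManager. Assumes a `deadpool-redis` dependency with the\n// \"rt_tokio_1\" feature enabled; check the crate docs for your version.\nuse deadpool_redis::{Config, Runtime};\n\nasync fn pooled_read() -> Result<Vec<String>, Box<dyn std::error::Error>> {\n    let cfg = Config::from_url(\"redis://localhost:6379\");\n    let pool = cfg.create_pool(Some(Runtime::Tokio1))?;\n    // Each task checks out its own connection rather than multiplexing one.\n    let mut conn = pool.get().await?;\n    let fields: Vec<String> = redis::cmd(\"HGETALL\")\n        .arg(\"cache:product:p-001\")\n        .query_async(&mut *conn)\n        .await?;\n    Ok(fields)\n}",
      "section_id": "connectionmanager-vs-multiplexedconnection"
    },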
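    {
      "id": "namespace-cache-keys-in-shared-redis-deployments-ex0",
      "language": "rust",
      "code": "// Sketch only: `APP_NAMESPACE` is an illustrative constant; the helper in this\n// guide builds keys without a service prefix.\nconst APP_NAMESPACE: &str = \"billing\";\n\nfn cache_key(&self, id: &str) -> String {\n    // Produces keys like cache:billing:product:p-001, so another service's\n    // cache:product:* entries can never collide with this application's.\n    format!(\"cache:{}:product:{}\", APP_NAMESPACE, id)\n}",
      "section_id": "namespace-cache-keys-in-shared-redis-deployments"
    },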
    {
      "id": "inspect-cached-entries-directly-in-redis-ex0",
      "language": "bash",
      "code": "redis-cli HGETALL cache:product:p-001\nredis-cli TTL cache:product:p-001",
      "section_id": "inspect-cached-entries-directly-in-redis"
    }
  ]
}
