{
  "id": "java-lettuce",
  "title": "Redis prefetch cache with Lettuce",
  "url": "https://redis.io/docs/latest/develop/use-cases/prefetch-cache/java-lettuce/",
  "summary": "Implement a Redis prefetch cache in Java with Lettuce",
  "tags": [
    "docs",
    "develop",
    "stack",
    "oss",
    "rs",
    "rc"
  ],
  "last_updated": "2026-05-14T08:58:05-05:00",
  "children": [],
  "page_type": "content",
  "content_hash": "e515a96dc62c9a17083e75fe015447351563f9bdde7d62418f2c1dcd85fd87fc",
  "sections": [
    {
      "id": "overview",
      "title": "Overview",
      "role": "overview",
      "text": "This guide shows you how to implement a Redis prefetch cache in Java with the [Lettuce](https://redis.io/docs/latest/develop/clients/lettuce) client library. It includes a small local web server built on the JDK's `com.sun.net.httpserver` so you can watch the cache pre-load at startup, see a background sync worker apply primary mutations within milliseconds, and break the cache to confirm that reads never fall back to the primary."
    },
    {
      "id": "overview",
      "title": "Overview",
      "role": "overview",
      "text": "Prefetch caching pre-loads a working set of reference data into Redis before the first request arrives, so every read on the request path is a cache hit. A separate sync worker keeps the cache current as the source of truth changes — there is no fall-back to the primary on the read path.\n\nThat gives you:\n\n* Near-100% cache hit ratios for reference and master data\n* Sub-millisecond reads for lookup-heavy paths at peak traffic\n* All reference-data reads offloaded from the primary database\n* Source-database changes propagated into Redis within a few milliseconds\n* A long safety-net TTL that bounds memory if the sync pipeline ever stops\n\nIn this example, each cached category is stored as a Redis hash under a key like `cache:category:{id}`. The hash holds the category fields (`id`, `name`, `display_order`, `featured`, `parent_id`) and the key has a long safety-net TTL that the sync worker refreshes on every add or update event. Delete events remove the cache key outright, so there is no TTL to refresh in that case.\n\nThis guide uses Lettuce's synchronous command API (`StatefulRedisConnection.sync()`) for reads and event application, and the asynchronous API (`async()`) inside `bulkLoad` so that the startup pipeline of `DEL` + `HSET` + `EXPIRE` triples can batch into a single round trip without each command blocking on its own future. Lettuce's reactive API would work equally well for either path."
    },
    {
      "id": "how-it-works",
      "title": "How it works",
      "role": "content",
      "text": "The flow has three independent paths:\n\n1. **On startup**, the demo server calls `cache.bulkLoad(primary.listRecords())`, which pipelines `DEL` + `HSET` + `EXPIRE` for every record in one round trip.\n2. **On every read**, the application calls `cache.get(entityId)`, which runs `HGETALL` against Redis only. A miss is treated as an error, not a trigger to query the primary.\n3. **On every primary mutation**, the primary appends a change event to an in-process queue. The sync worker thread drains the queue and calls `cache.applyChange(event)`. For an `upsert`, the helper rewrites the cache hash and refreshes the safety-net TTL; for a `delete`, it removes the cache key.\n\nIn a real system the in-process change queue is replaced by a CDC pipeline — [Redis Data Integration](https://redis.io/docs/latest/integrate/redis-data-integration), Debezium plus a lightweight consumer, or an equivalent tool that tails the source's binlog/WAL and pushes events into Redis."
    },
    {
      "id": "the-prefetch-cache-helper",
      "title": "The prefetch-cache helper",
      "role": "content",
      "text": "The `PrefetchCache` class wraps the cache operations\n([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/prefetch-cache/java-lettuce/PrefetchCache.java)):\n\n[code example]"
    },
    {
      "id": "data-model",
      "title": "Data model",
      "role": "content",
      "text": "Each cached category is stored in a Redis hash:\n\n[code example]\n\nThe implementation uses:\n\n* [`HSET`](https://redis.io/docs/latest/commands/hset) + [`EXPIRE`](https://redis.io/docs/latest/commands/expire), pipelined, for the bulk load and every sync event\n* [`HGETALL`](https://redis.io/docs/latest/commands/hgetall) on the read path\n* [`DEL`](https://redis.io/docs/latest/commands/del) for sync-delete events and explicit invalidation\n* [`SCAN`](https://redis.io/docs/latest/commands/scan) to enumerate the cached keyspace and to clear the prefix\n* [`TTL`](https://redis.io/docs/latest/commands/ttl) to surface remaining safety-net time in the demo UI\n* [`MULTI`](https://redis.io/docs/latest/commands/multi)/[`EXEC`](https://redis.io/docs/latest/commands/exec) for the transactional upsert path in `applyChange`"
    },
    {
      "id": "bulk-load-on-startup",
      "title": "Bulk load on startup",
      "role": "content",
      "text": "The `bulkLoad` method pipelines a `DEL` + `HSET` + `EXPIRE` triple for every record using Lettuce's async API with `setAutoFlushCommands(false)`. The whole batch flushes in a single network round trip, so loading thousands of records takes one RTT plus the time Redis spends executing the commands locally — typically tens of milliseconds even for a large reference table:\n\n[code example]\n\nThe bulk load is intentionally non-transactional: nothing is reading the cache yet on the startup path, the records do not need to be applied atomically as a set, and skipping `MULTI`/`EXEC` keeps the pipeline fast. The same method is used for the live `/reprefetch` reload, which is safe because the demo pauses the sync worker around the clear-and-reload sequence — see [Re-prefetch under load](#re-prefetch-under-load) below. If you call `bulkLoad` directly from your own code on a cache that is already serving reads, either pause your writers first or rewrite it as a single `MULTI`/`EXEC` block so callers cannot observe a half-loaded record.\n\nUsing the async API here is important: the sync API blocks on every command's future, which would defeat the batching even with auto-flush disabled. The async API queues commands locally and only flushes them when `flushCommands()` is called, then waits on the resulting futures in bulk."
    },
    {
      "id": "reads-from-redis-only",
      "title": "Reads from Redis only",
      "role": "content",
      "text": "The `get` method runs `HGETALL` and returns the cached hash. **It does not fall back to the primary on a miss.** In a healthy system, a miss never happens; if it does, the application surfaces it as an error and treats it as a sync-pipeline incident:\n\n[code example]\n\nThis is the key behavioural difference from [cache-aside](https://redis.io/docs/latest/develop/use-cases/cache-aside): the request path never touches the primary, so reference-data reads cannot contribute to primary database load."
    },
    {
      "id": "applying-sync-events",
      "title": "Applying sync events",
      "role": "content",
      "text": "The sync worker calls `applyChange` for every primary mutation. For an `upsert`, the helper rewrites the cache hash and refreshes the safety-net TTL in one `MULTI`/`EXEC` block so the cache never holds a stale mix of old and new fields. For a `delete`, it removes the cache key:\n\n[code example]\n\nThe `DEL` before the `HSET` ensures the cached hash contains exactly the fields the primary record has now — fields that have been dropped from the primary will not linger in Redis.\n\nA Lettuce-specific point: a single `StatefulRedisConnection` is thread-safe for individual command calls, but `MULTI`/`EXEC` is connection-scoped state. If two threads issued transactions over the same connection at the same time, their queued commands would interleave. The demo shares one connection across HTTP handlers and the sync worker, so `txLock` (a `ReentrantLock`) serializes every transactional sequence. In production you would hand each transactional caller its own connection from a pool (see [Production usage](#production-usage)) or migrate the upsert path into a Lua script so the atomicity is server-side and no client-side lock is needed."
    },
    {
      "id": "the-sync-worker",
      "title": "The sync worker",
      "role": "content",
      "text": "The `SyncWorker` runs a daemon thread that blocks on the primary's change queue with a short timeout. Every change is applied to Redis as soon as it arrives\n([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/prefetch-cache/java-lettuce/SyncWorker.java)):\n\n[code example]\n\nIn production this loop is replaced by a CDC consumer reading from RDI's Redis output stream, Debezium's Kafka topic, or an equivalent change feed. The shape stays the same: drain events, apply them to Redis, advance the consumer offset."
    },
    {
      "id": "invalidation-and-re-prefetch",
      "title": "Invalidation and re-prefetch",
      "role": "content",
      "text": "Two helpers exist for testing and recovery:\n\n* `invalidate(entityId)` deletes a single cache key. The demo uses it to simulate a sync-pipeline failure on one record.\n* `clear()` runs `SCAN MATCH cache:category:*` and deletes every key under the prefix. The demo uses it to simulate a full cache loss.\n\nIn both cases, the recovery path is to call `bulkLoad(primary.listRecords())` again — re-prefetching from the primary. The demo exposes this as the \"Re-prefetch\" button so you can see the cache come back to a fully-warm state in one operation."
    },
    {
      "id": "re-prefetch-under-load",
      "title": "Re-prefetch under load",
      "role": "content",
      "text": "`clear()` and `bulkLoad()` are not atomic against the sync worker. If a change event arrives between the snapshot (`primary.listRecords()`) and the bulk write, the bulk write can overwrite a newer value; if a change event arrives between `clear()`'s `SCAN` and `DEL`, the cleared entry can immediately be recreated. The demo's `/clear` and `/reprefetch` handlers solve this by pausing the sync worker around the operation:\n\n[code example]\n\n`pause()` waits for the worker to finish whatever event it is currently applying, parks the run loop, and returns. Change events that arrive during the pause sit in the primary's queue and apply in order once `resume()` is called, so no event is lost."
    },
    {
      "id": "hit-miss-accounting",
      "title": "Hit/miss accounting",
      "role": "content",
      "text": "The helper keeps in-process counters for hits, misses, prefetched records, sync events applied, and the average lag between a primary change and its application to Redis. The demo UI surfaces these so you can confirm the cache is absorbing all reads and the sync worker is keeping up:\n\n[code example]\n\nIn production you would emit these as Micrometer counters and gauges or push them into your metrics pipeline. The sync-lag metric is the most important: a sudden rise indicates the CDC pipeline is falling behind."
    },
    {
      "id": "prerequisites",
      "title": "Prerequisites",
      "role": "content",
      "text": "Before running the demo, make sure that:\n\n* Redis is running and accessible. By default, the demo connects to `localhost:6379`.\n* JDK 17 or later is installed (the demo uses Java text blocks for the inline HTML).\n* The Lettuce JAR (and its Netty + Reactor dependencies) is on your classpath.\n  Get them from\n  [Maven Central](https://repo1.maven.org/maven2/io/lettuce/lettuce-core/),\n  or via Maven/Gradle in a project setup.\n\nIf your Redis server is running elsewhere, start the demo with `--redis-host` and `--redis-port`."
    },
    {
      "id": "running-the-demo",
      "title": "Running the demo",
      "role": "content",
      "text": ""
    },
    {
      "id": "get-the-source-files",
      "title": "Get the source files",
      "role": "content",
      "text": "The demo consists of four Java files. Download them from the [`java-lettuce` source folder](https://github.com/redis/docs/tree/main/content/develop/use-cases/prefetch-cache/java-lettuce) on GitHub, or grab them with `curl`:\n\n[code example]\n\nYou also need Lettuce and its runtime dependencies on your classpath. The simplest way is to download them into a local `lib/` directory:\n\n[code example]"
    },
    {
      "id": "start-the-demo-server",
      "title": "Start the demo server",
      "role": "content",
      "text": "From the demo directory:\n\n[code example]\n\n(Where `lib/` contains `lettuce-core`, `reactor-core`, `reactive-streams`, and the relevant Netty jars.)\n\nYou should see something like:\n\n[code example]\n\nAfter starting the server, visit `http://localhost:8786`.\n\nThe demo server uses only standard JDK libraries for HTTP handling and concurrency:\n\n* [`com.sun.net.httpserver.HttpServer`](https://docs.oracle.com/en/java/javase/21/docs/api/jdk.httpserver/com/sun/net/httpserver/HttpServer.html) for the web server\n* [`java.util.concurrent.Executors`](https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/util/concurrent/Executors.html) for the request thread pool and sync-worker daemon\n* [`java.util.concurrent.LinkedBlockingQueue`](https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/util/concurrent/LinkedBlockingQueue.html) for the primary's in-process change feed\n\nIt exposes a small interactive page where you can:\n\n* See which IDs are in the cache and in the primary, side by side\n* Read a category through the cache and confirm every read is a hit\n* Update a field on the primary and watch the sync worker rewrite the cache hash\n* Add and delete categories and watch them appear and disappear from the cache\n* Invalidate one key or clear the entire cache to simulate a sync-pipeline failure\n* Re-prefetch from the primary to recover from a broken cache state\n* Watch the average sync lag, and confirm primary reads stay at one until you re-prefetch — each `/reprefetch` adds another primary read for the snapshot, but normal request traffic never reaches the primary at all"
    },
    {
      "id": "the-mock-primary-store",
      "title": "The mock primary store",
      "role": "content",
      "text": "To make the demo self-contained, the example includes a `MockPrimaryStore` that stands in for a source-of-truth database\n([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/prefetch-cache/java-lettuce/MockPrimaryStore.java)):\n\n[code example]\n\nEvery mutation appends a change event to an in-process [`LinkedBlockingQueue`](https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/util/concurrent/LinkedBlockingQueue.html). The sync worker drains the queue with a 50 ms timeout and applies each event to Redis. The mutation lock is held across both the record update and the queue `offer`, so concurrent updates produce change events in the same order as their mutations — a correctness requirement the demo's pause/resume race test relies on. In a real system the queue is replaced by a CDC pipeline — RDI on Redis Enterprise or Debezium with a Redis consumer on open-source Redis."
    },
    {
      "id": "production-usage",
      "title": "Production usage",
      "role": "content",
      "text": "This guide uses a deliberately small local demo so you can focus on the prefetch-cache pattern. In production, you will usually want to harden several aspects of it."
    },
    {
      "id": "use-a-connection-pool-for-transactions",
      "title": "Use a connection pool for transactions",
      "role": "content",
      "text": "The demo shares a single `StatefulRedisConnection` across HTTP handlers and the sync worker, and serializes every `MULTI`/`EXEC` block with an in-process `ReentrantLock`. In production, use [`ConnectionPoolSupport`](https://github.com/redis/lettuce/wiki/Connection-Pooling) so each transactional caller (or each sync-worker partition, if you shard the change feed) gets its own connection. Once each transaction has a dedicated connection, you can drop `txLock` entirely. An alternative is to merge the `DEL`+`HSET`+`EXPIRE` upsert into a small Lua script invoked with `EVAL` — atomic server-side, lock-free on the client, and a single network round trip per event."
    },
    {
      "id": "replace-the-in-process-change-queue-with-a-real-cdc-pipeline",
      "title": "Replace the in-process change queue with a real CDC pipeline",
      "role": "content",
      "text": "The demo's in-process queue is the simplest possible stand-in for a CDC change feed. In production, the change feed lives outside the application process: an RDI pipeline configured against your primary database, Debezium connectors writing to Kafka or a Redis stream, or your application explicitly publishing change events from the write path. Whatever you choose, the consumer side stays the same — read events, apply them to Redis, advance the offset."
    },
    {
      "id": "use-a-long-safety-net-ttl-not-a-freshness-ttl",
      "title": "Use a long safety-net TTL, not a freshness TTL",
      "role": "content",
      "text": "The TTL on each cache key is a **safety net**: it bounds memory if the sync pipeline silently stops, so a stuck consumer cannot leave stale data in Redis indefinitely. The TTL is not the freshness mechanism — freshness comes from the sync worker, which refreshes the TTL on every add or update event (delete events remove the key). Pick a TTL that is comfortably longer than your worst-case sync lag plus your alerting window, so a transient sync hiccup never expires hot keys."
    },
    {
      "id": "decide-what-to-do-on-a-cache-miss",
      "title": "Decide what to do on a cache miss",
      "role": "content",
      "text": "A prefetch cache treats a miss as an error or a missing record. The two reasonable strategies are:\n\n* **Return a 404 to the user.** Appropriate when the cache is authoritative for the lookup — for example, when the user is asking for a category by ID and the ID is not in the cache.\n* **Page on-call.** A sustained miss rate on IDs you know exist is an incident: either the prefetch did not run, or the sync pipeline is broken.\n\nWhichever you choose, do not fall back to the primary on the read path — that is what cache-aside is for, and conflating the two patterns breaks the load-isolation guarantee that prefetch provides."
    },
    {
      "id": "bound-the-working-set-to-what-fits-in-memory",
      "title": "Bound the working set to what fits in memory",
      "role": "content",
      "text": "Prefetch only works if the entire dataset fits in Redis memory with headroom. Estimate the size of your reference data, multiply by a growth factor, and confirm the result fits within your Redis instance's `maxmemory` minus what other use cases need. If the working set grows beyond what Redis can hold, switch the dataset to a cache-aside pattern instead — the request path will pay miss latency, but you will not OOM."
    },
    {
      "id": "reconcile-periodically-against-the-primary",
      "title": "Reconcile periodically against the primary",
      "role": "content",
      "text": "CDC pipelines are eventually consistent: an event can be lost (broker outage, consumer crash, configuration drift) and the cache can silently diverge from the source. Run a periodic reconciliation job that re-reads all primary records, compares them against the cache, and either re-prefetches or fixes individual entries. Even running it once a day catches drift that ad-hoc inspection would miss."
    },
    {
      "id": "consider-the-async-or-reactive-apis",
      "title": "Consider the async or reactive APIs",
      "role": "content",
      "text": "For high-throughput or event-driven applications, Lettuce's `async()` (`CompletionStage`-based) or `reactive()` (Project Reactor) APIs let request-handling threads return immediately while Redis work continues. The prefetch-cache structure is identical — replace the synchronous `hgetall` / `multi`/`exec` calls with their async counterparts and chain them together. The bulk-load path in this helper already uses the async API to batch its pipeline."
    },
    {
      "id": "namespace-cache-keys-in-shared-redis-deployments",
      "title": "Namespace cache keys in shared Redis deployments",
      "role": "content",
      "text": "If multiple applications share a Redis deployment, prefix cache keys with the application name (`cache:billing:category:{id}`) so different services cannot clobber each other's entries. The helper takes a `prefix` argument exactly for this."
    },
    {
      "id": "inspect-cached-entries-directly-in-redis",
      "title": "Inspect cached entries directly in Redis",
      "role": "content",
      "text": "When testing or troubleshooting, inspect the stored cache keys directly to confirm the bulk load and the sync worker are writing what you expect:\n\n[code example]\n\nIf a key is missing for an ID that still exists in the primary, the prefetch did not run, the key expired without a sync refresh, or someone invalidated it. If a key is still present for an ID that was deleted in the primary, the delete event has not yet been applied. If the TTL is much lower than the configured safety-net value on a hot key, the sync worker is not keeping up."
    },
    {
      "id": "learn-more",
      "title": "Learn more",
      "role": "related",
      "text": "* [Lettuce guide](https://redis.io/docs/latest/develop/clients/lettuce) - Install and use the Lettuce Redis client\n* [HSET command](https://redis.io/docs/latest/commands/hset) - Write hash fields\n* [HGETALL command](https://redis.io/docs/latest/commands/hgetall) - Read every field of a hash\n* [EXPIRE command](https://redis.io/docs/latest/commands/expire) - Set key expiration in seconds\n* [DEL command](https://redis.io/docs/latest/commands/del) - Delete a key on invalidation or sync-delete\n* [SCAN command](https://redis.io/docs/latest/commands/scan) - Iterate the cached keyspace without blocking the server\n* [TTL command](https://redis.io/docs/latest/commands/ttl) - Inspect remaining safety-net time on a key\n* [MULTI command](https://redis.io/docs/latest/commands/multi) / [EXEC command](https://redis.io/docs/latest/commands/exec) - Transactional upsert path in `applyChange`\n* [Redis Data Integration](https://redis.io/docs/latest/integrate/redis-data-integration) - Configuration-driven CDC into Redis on Redis Enterprise and Redis Cloud"
    }
  ],
  "examples": [
    {
      "id": "the-prefetch-cache-helper-ex0",
      "language": "java",
      "code": "import io.lettuce.core.RedisClient;\nimport io.lettuce.core.RedisURI;\nimport io.lettuce.core.api.StatefulRedisConnection;\n\nRedisClient client = RedisClient.create(\n    RedisURI.builder().withHost(\"localhost\").withPort(6379).build());\nStatefulRedisConnection<String, String> connection = client.connect();\n\nMockPrimaryStore primary = new MockPrimaryStore(80);\nPrefetchCache cache = new PrefetchCache(connection, \"cache:category:\", 3600);\n\n// Pre-load every primary record into Redis in one pipelined round trip.\ncache.bulkLoad(primary.listRecords());\n\n// Start the sync worker so primary mutations propagate into Redis.\nSyncWorker sync = new SyncWorker(primary, cache);\nsync.start();\n\n// Read paths now go to Redis only.\nPrefetchCache.Result result = cache.get(\"cat-001\");",
      "section_id": "the-prefetch-cache-helper"
    },
    {
      "id": "data-model-ex0",
      "language": "text",
      "code": "cache:category:cat-001\n  id            = cat-001\n  name          = Beverages\n  display_order = 1\n  featured      = true\n  parent_id     =",
      "section_id": "data-model"
    },
    {
      "id": "bulk-load-on-startup-ex0",
      "language": "java",
      "code": "public int bulkLoad(Iterable<Map<String, String>> records) {\n    RedisAsyncCommands<String, String> async = connection.async();\n    connection.setAutoFlushCommands(false);\n    List<RedisFuture<?>> futures = new ArrayList<>();\n    int loaded = 0;\n    try {\n        for (Map<String, String> record : records) {\n            if (record == null) continue;\n            String entityId = record.get(\"id\");\n            if (entityId == null || entityId.isEmpty()) continue;\n            String cacheKey = cacheKey(entityId);\n            futures.add(async.del(cacheKey));\n            futures.add(async.hset(cacheKey, record));\n            futures.add(async.expire(cacheKey, ttlSeconds));\n            loaded += 1;\n        }\n        connection.flushCommands();\n        for (RedisFuture<?> future : futures) {\n            future.get();\n        }\n    } finally {\n        connection.setAutoFlushCommands(true);\n    }\n    if (loaded > 0) prefetched.addAndGet(loaded);\n    return loaded;\n}",
      "section_id": "bulk-load-on-startup"
    },
    {
      "id": "reads-from-redis-only-ex0",
      "language": "java",
      "code": "public Result get(String entityId) {\n    RedisCommands<String, String> sync = connection.sync();\n    String cacheKey = cacheKey(entityId);\n\n    long startedNs = System.nanoTime();\n    Map<String, String> cached = sync.hgetall(cacheKey);\n    double redisLatencyMs = (System.nanoTime() - startedNs) / 1_000_000.0;\n\n    if (cached != null && !cached.isEmpty()) {\n        hits.incrementAndGet();\n        return new Result(cached, true, redisLatencyMs);\n    }\n    misses.incrementAndGet();\n    return new Result(null, false, redisLatencyMs);\n}",
      "section_id": "reads-from-redis-only"
    },
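    {
      "id": "bound-the-working-set-to-what-fits-in-memory-ex0",
      "language": "java",
      "code": "// Hypothetical back-of-envelope sizing; every number here is an assumption\n// to be replaced with measurements from your own data.\nlong records        = 50_000; // current reference-table row count\nlong bytesPerRecord = 400;    // key + hash fields + per-key overhead\ndouble growthFactor = 2.0;    // headroom for dataset growth\nlong neededBytes = (long) (records * bytesPerRecord * growthFactor); // ~40 MB\n// Compare against maxmemory minus what other tenants of the instance need;\n// if it does not fit comfortably, use cache-aside for this dataset instead.",
      "section_id": "bound-the-working-set-to-what-fits-in-memory"
    },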
    {
      "id": "applying-sync-events-ex0",
      "language": "java",
      "code": "public void applyChange(Map<String, Object> change) {\n    // ... validate op and id ...\n    if (\"upsert\".equals(op)) {\n        Map<String, String> fields = (Map<String, String>) change.get(\"fields\");\n        if (fields == null || fields.isEmpty()) return;\n        txLock.lock();\n        try {\n            sync.multi();\n            sync.del(cacheKey);\n            sync.hset(cacheKey, fields);\n            sync.expire(cacheKey, ttlSeconds);\n            sync.exec();\n        } finally {\n            txLock.unlock();\n        }\n    } else if (\"delete\".equals(op)) {\n        sync.del(cacheKey);\n    }\n    // ... record sync_events_applied counter and lag sample ...\n}",
      "section_id": "applying-sync-events"
    },
    {
      "id": "the-sync-worker-ex0",
      "language": "java",
      "code": "private void run() {\n    while (!stopRequested) {\n        if (pauseRequested) {\n            // park until resume() ...\n            continue;\n        }\n        Map<String, Object> change = primary.nextChange(pollTimeoutMs);\n        if (change == null) continue;\n        try {\n            cache.applyChange(change);\n        } catch (Exception exc) {\n            System.err.printf(\"[sync] failed to apply %s: %s%n\",\n                    change, exc.getMessage());\n        }\n    }\n}",
      "section_id": "the-sync-worker"
    },
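    {
      "id": "invalidation-and-re-prefetch-ex0",
      "language": "java",
      "code": "// Hypothetical sketch of the two recovery helpers; the real implementations\n// live in PrefetchCache.java and may differ in detail.\npublic long invalidate(String entityId) {\n    return connection.sync().del(cacheKey(entityId));\n}\n\npublic long clear() {\n    RedisCommands<String, String> sync = connection.sync();\n    ScanArgs args = ScanArgs.Builder.matches(prefix + \"*\").limit(200);\n    ScanCursor cursor = ScanCursor.INITIAL;\n    long removed = 0;\n    do {\n        KeyScanCursor<String> page = sync.scan(cursor, args);\n        if (!page.getKeys().isEmpty()) {\n            removed += sync.del(page.getKeys().toArray(new String[0]));\n        }\n        cursor = page;\n    } while (!cursor.isFinished());\n    return removed;\n}",
      "section_id": "invalidation-and-re-prefetch"
    },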
    {
      "id": "re-prefetch-under-load-ex0",
      "language": "java",
      "code": "sync.pause(2000);\ntry {\n    cache.clear();\n    cache.bulkLoad(primary.listRecords());\n} finally {\n    sync.resume();\n}",
      "section_id": "re-prefetch-under-load"
    },
    {
      "id": "hit-miss-accounting-ex0",
      "language": "java",
      "code": "public Map<String, Object> stats() {\n    long h = hits.get();\n    long m = misses.get();\n    long total = h + m;\n    double hitRate = total == 0 ? 0.0 : Math.round(1000.0 * h / total) / 10.0;\n    double avgLag;\n    synchronized (lagLock) {\n        avgLag = syncLagSamples == 0\n                ? 0.0\n                : Math.round(100.0 * syncLagMsTotal / syncLagSamples) / 100.0;\n    }\n    Map<String, Object> stats = new LinkedHashMap<>();\n    stats.put(\"hits\", h);\n    stats.put(\"misses\", m);\n    stats.put(\"hit_rate_pct\", hitRate);\n    stats.put(\"prefetched\", prefetched.get());\n    stats.put(\"sync_events_applied\", syncEventsApplied.get());\n    stats.put(\"sync_lag_ms_avg\", avgLag);\n    return stats;\n}",
      "section_id": "hit-miss-accounting"
    },
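    {
      "id": "use-a-long-safety-net-ttl-not-a-freshness-ttl-ex0",
      "language": "java",
      "code": "// Hypothetical sizing arithmetic; both inputs are assumptions you would\n// replace with your own measured lag and alerting numbers.\nlong worstCaseSyncLagSec = 60;      // measured worst-case CDC lag\nlong alertAndRespondSec  = 30 * 60; // time to page on-call and react\n// A 10x multiple keeps transient hiccups far away from key expiry.\nlong ttlSeconds = 10 * (worstCaseSyncLagSec + alertAndRespondSec);",
      "section_id": "use-a-long-safety-net-ttl-not-a-freshness-ttl"
    },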
    {
      "id": "get-the-source-files-ex0",
      "language": "bash",
      "code": "mkdir prefetch-cache-demo && cd prefetch-cache-demo\nBASE=https://raw.githubusercontent.com/redis/docs/main/content/develop/use-cases/prefetch-cache/java-lettuce\ncurl -O $BASE/PrefetchCache.java\ncurl -O $BASE/MockPrimaryStore.java\ncurl -O $BASE/SyncWorker.java\ncurl -O $BASE/DemoServer.java",
      "section_id": "get-the-source-files"
    },
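    {
      "id": "consider-the-async-or-reactive-apis-ex0",
      "language": "java",
      "code": "// Hypothetical async variant of the read path: same HGETALL, but the calling\n// thread returns immediately and the result arrives via the completion stage.\nRedisAsyncCommands<String, String> async = connection.async();\nCompletionStage<Map<String, String>> pending =\n        async.hgetall(\"cache:category:cat-001\")\n             .thenApply(fields -> {\n                 if (fields.isEmpty()) {\n                     throw new IllegalStateException(\"prefetch miss\");\n                 }\n                 return fields;\n             });",
      "section_id": "consider-the-async-or-reactive-apis"
    },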
    {
      "id": "get-the-source-files-ex1",
      "language": "bash",
      "code": "mkdir lib && cd lib\nLETTUCE=https://repo1.maven.org/maven2/io/lettuce/lettuce-core/6.5.0.RELEASE\ncurl -O $LETTUCE/lettuce-core-6.5.0.RELEASE.jar\nNETTY=https://repo1.maven.org/maven2/io/netty\nfor ARTIFACT in netty-buffer netty-codec netty-common netty-handler \\\n                netty-resolver netty-transport netty-transport-native-unix-common; do\n  curl -O \"$NETTY/$ARTIFACT/4.1.113.Final/$ARTIFACT-4.1.113.Final.jar\"\ndone\ncurl -O https://repo1.maven.org/maven2/io/projectreactor/reactor-core/3.6.6/reactor-core-3.6.6.jar\ncurl -O https://repo1.maven.org/maven2/org/reactivestreams/reactive-streams/1.0.4/reactive-streams-1.0.4.jar\ncd ..",
      "section_id": "get-the-source-files"
    },
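    {
      "id": "reconcile-periodically-against-the-primary-ex0",
      "language": "java",
      "code": "// Hypothetical reconciliation pass, run from a scheduled job rather than the\n// request path. It reuses the demo's primary, cache, and connection objects.\nRedisCommands<String, String> sync = connection.sync();\nList<Map<String, String>> divergent = new ArrayList<>();\nfor (Map<String, String> record : primary.listRecords()) {\n    Map<String, String> cached = sync.hgetall(\"cache:category:\" + record.get(\"id\"));\n    if (!record.equals(cached)) {\n        divergent.add(record); // missing or drifted entry\n    }\n}\nif (!divergent.isEmpty()) {\n    cache.bulkLoad(divergent); // rewrites each hash and refreshes its TTL\n}\n// A fuller job would also SCAN for cached IDs that no longer exist in the primary.",
      "section_id": "reconcile-periodically-against-the-primary"
    },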
    {
      "id": "start-the-demo-server-ex0",
      "language": "bash",
      "code": "javac -cp 'lib/*' PrefetchCache.java MockPrimaryStore.java SyncWorker.java DemoServer.java\njava -cp '.:lib/*' DemoServer --port 8786 --redis-host localhost --redis-port 6379",
      "section_id": "start-the-demo-server"
    },
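    {
      "id": "use-a-connection-pool-for-transactions-ex0",
      "language": "java",
      "code": "// Hypothetical pooled variant of the upsert path. Assumes Apache commons-pool2\n// on the classpath; borrowObject() throws a checked Exception to handle or declare.\nGenericObjectPool<StatefulRedisConnection<String, String>> pool =\n        ConnectionPoolSupport.createGenericObjectPool(\n                client::connect, new GenericObjectPoolConfig<>());\n\ntry (StatefulRedisConnection<String, String> conn = pool.borrowObject()) {\n    RedisCommands<String, String> sync = conn.sync();\n    sync.multi(); // safe without txLock: this connection is not shared\n    sync.del(cacheKey);\n    sync.hset(cacheKey, fields);\n    sync.expire(cacheKey, ttlSeconds);\n    sync.exec();\n} // close() returns the connection to the pool",
      "section_id": "use-a-connection-pool-for-transactions"
    },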
    {
      "id": "start-the-demo-server-ex1",
      "language": "text",
      "code": "Redis prefetch-cache demo server listening on http://127.0.0.1:8786\nUsing Redis at localhost:6379 with cache prefix 'cache:category:' and TTL 3600s\nPrefetched 5 records in 90.9 ms; sync worker running",
      "section_id": "start-the-demo-server"
    },
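    {
      "id": "decide-what-to-do-on-a-cache-miss-ex0",
      "language": "java",
      "code": "// Hypothetical handler logic for a miss: surface it, never query the primary.\n// knownIds, alerts, respond, and the Result field names are illustrative.\nPrefetchCache.Result result = cache.get(entityId);\nif (!result.hit) {\n    if (knownIds.contains(entityId)) {\n        // A miss on an ID that should exist is a sync-pipeline incident.\n        alerts.page(\"prefetch miss on known ID \" + entityId);\n    }\n    respond(exchange, 404, \"unknown category: \" + entityId);\n    return;\n}\nrespond(exchange, 200, toJson(result.fields));",
      "section_id": "decide-what-to-do-on-a-cache-miss"
    },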
    {
      "id": "the-mock-primary-store-ex0",
      "language": "java",
      "code": "public class MockPrimaryStore {\n    public MockPrimaryStore(int readLatencyMs) { ... }\n\n    public List<Map<String, String>> listRecords() {\n        Thread.sleep(readLatencyMs);\n        // ...\n    }\n\n    public boolean updateField(String entityId, String field, String value) {\n        synchronized (lock) {\n            // ... mutate the record ...\n            emitChangeLocked(CHANGE_OP_UPSERT, entityId, copy);\n        }\n        return true;\n    }\n}",
      "section_id": "the-mock-primary-store"
    },
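    {
      "id": "replace-the-in-process-change-queue-with-a-real-cdc-pipeline-ex0",
      "language": "java",
      "code": "// Hypothetical consumer shape for a Redis-stream change feed (for example an\n// RDI output stream). The stream name and event field layout are assumptions.\nStreamOffset<String> offset = StreamOffset.latest(\"changes:category\");\nwhile (running) { // running: illustrative shutdown flag\n    List<StreamMessage<String, String>> batch = connection.sync()\n            .xread(XReadArgs.Builder.block(1000), offset);\n    for (StreamMessage<String, String> message : batch) {\n        cache.applyChange(new HashMap<>(message.getBody())); // same apply path\n        offset = StreamOffset.from(\"changes:category\", message.getId()); // advance\n    }\n}",
      "section_id": "replace-the-in-process-change-queue-with-a-real-cdc-pipeline"
    },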
    {
      "id": "inspect-cached-entries-directly-in-redis-ex0",
      "language": "bash",
      "code": "redis-cli --scan --pattern 'cache:category:*'\nredis-cli HGETALL cache:category:cat-001\nredis-cli TTL cache:category:cat-001",
      "section_id": "inspect-cached-entries-directly-in-redis"
    }
  ]
}
