{
  "id": "java-jedis",
  "title": "Redis job queue with Jedis",
  "url": "https://redis.io/docs/latest/develop/use-cases/job-queue/java-jedis/",
  "summary": "Implement a Redis job queue in Java with Jedis",
  "tags": [
    "docs",
    "develop",
    "stack",
    "oss",
    "rs",
    "rc"
  ],
  "last_updated": "2026-05-14T08:58:05-05:00",
  "children": [],
  "page_type": "content",
  "content_hash": "dd904072e222cf1142d6d873c0fafef1540be6a2c8634141461dc423e48c2cf6",
  "sections": [
    {
      "id": "overview",
      "title": "Overview",
      "role": "overview",
      "text": "This guide shows you how to implement a Redis-backed job queue in Java with [`Jedis`](https://redis.io/docs/latest/develop/clients/jedis). It includes a small local web server built with Java's built-in `HttpServer` so you can enqueue jobs, watch a pool of workers drain them, and see the reclaimer recover jobs from a simulated worker crash."
    },
    {
      "id": "overview",
      "title": "Overview",
      "role": "overview",
      "text": "A job queue lets your application offload background work — sending email, processing payments, image transcoding, ML inference, webhooks — from the request path. Producers enqueue jobs in milliseconds and return to the user; workers pull from the queue and process them on their own schedule.\n\nThat gives you:\n\n* Low-latency user-facing requests, even when downstream work is slow or bursty\n* Horizontal scale across many worker processes that share one Redis instance\n* At-least-once delivery so a worker crash doesn't lose work\n* Visibility-timeout reclaim that returns stuck jobs to the queue automatically\n* Job metadata, retry counts, and completion results in Redis hashes with TTL\n\nIn this example, each job is identified by a random hex ID and its payload, status, and result live in a Redis hash under `queue:jobs-jedis:job:{id}`. Pending IDs sit in a list, claimed IDs move atomically to a *processing* list, and completed or failed IDs land in short history lists."
    },
    {
      "id": "how-it-works",
      "title": "How it works",
      "role": "content",
      "text": "The flow looks like this:\n\n1. The application calls `queue.enqueue(payload)`\n2. The helper writes the job metadata hash and `LPUSH`es the job ID onto the pending list\n3. A worker calls `queue.claim(timeoutMs)`\n4. The helper runs `BRPOPLPUSH` to atomically move the next pending ID into the processing list and writes a per-claim `claim_token` plus `claimed_at_ms` on the hash\n5. The worker runs the job and calls `queue.complete(job, result)` or `queue.fail(job, error)`\n6. `complete` removes the job from the processing list, writes the result, and `LPUSH`es the ID onto the completed history (with `LTRIM` and an `EXPIRE` on the hash for cleanup)\n7. `fail` either retries the job (back to pending) or moves it to the failed list once retries are exhausted\n\nIf a worker dies before completing a job, the job sits in the processing list with a `claimed_at_ms` older than the visibility timeout. A periodic call to `queue.reclaimStuck()` finds those jobs and moves them back to pending so another worker can pick them up.\n\nEvery state change holds the token: a worker that has been reclaimed cannot later complete or fail a job another worker has already claimed."
    },
    {
      "id": "the-redisjobqueue-helper",
      "title": "The `RedisJobQueue` helper",
      "role": "content",
      "text": "The `RedisJobQueue` class wraps the queue operations ([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/job-queue/java-jedis/RedisJobQueue.java)):\n\n[code example]\n\nJedis operations are synchronous. The helper acquires a `Jedis` connection per call using `pool.getResource()` inside a try-with-resources block, so connections are returned to the pool even on errors. The blocking `claim()` method holds its own connection for the duration of the `BRPOPLPUSH` wait, which is fine because every other call uses a different connection."
    },
    {
      "id": "data-model",
      "title": "Data model",
      "role": "content",
      "text": "Each job's state lives in a Redis hash plus a position in one of four lists:\n\n[code example]\n\nA job's hash carries:\n\n[code example]\n\nThe implementation uses:\n\n* [`LPUSH`](https://redis.io/docs/latest/commands/lpush) to add new job IDs to the pending list\n* [`BRPOPLPUSH`](https://redis.io/docs/latest/commands/brpoplpush) to atomically claim a job into the processing list\n* [`LREM`](https://redis.io/docs/latest/commands/lrem) to remove a claimed job from the processing list on complete or fail\n* [`LTRIM`](https://redis.io/docs/latest/commands/ltrim) to cap the completed and failed history lists\n* [`HSET`](https://redis.io/docs/latest/commands/hset) / [`HGETALL`](https://redis.io/docs/latest/commands/hgetall) for job metadata\n* [`EXPIRE`](https://redis.io/docs/latest/commands/expire) on completed and failed hashes for automatic cleanup\n* [`PUBLISH`](https://redis.io/docs/latest/commands/publish) on `queue:jobs-jedis:events` for completion signalling\n* [Lua scripting](https://redis.io/docs/latest/develop/programmability/eval-intro) ([`EVAL`](https://redis.io/docs/latest/commands/eval)) for the complete, fail, and reclaim flows so each runs atomically against the processing list and metadata hash"
    },
    {
      "id": "enqueueing-jobs",
      "title": "Enqueueing jobs",
      "role": "content",
      "text": "`enqueue()` writes the metadata hash and pushes the job ID onto the pending list in one pipeline:\n\n[code example]\n\nThe payload is stored as a JSON string so the queue can carry arbitrary nested structures without forcing every field into a hash."
    },
    {
      "id": "claiming-jobs-with-brpoplpush",
      "title": "Claiming jobs with BRPOPLPUSH",
      "role": "content",
      "text": "A worker blocks until a job is available, then atomically pops it from the pending list and pushes it onto the processing list. `BRPOPLPUSH` does both in a single Redis call:\n\n[code example]\n\nThe `claim_token` is the worker's proof of ownership for this attempt. Every subsequent state change (complete, fail) checks it before touching the processing list, so a worker that hung past the visibility timeout cannot interfere with the new claimant."
    },
    {
      "id": "completing-jobs",
      "title": "Completing jobs",
      "role": "content",
      "text": "`complete()` runs a Lua script via `EVAL` so the processing-list removal, the metadata write, and the history push happen atomically:\n\n[code example]\n\nThe Lua script checks the token first and returns `0` if the worker no longer owns the job (because the reclaimer moved it back to pending). The metadata hash also gets an `EXPIRE` so completed jobs are cleaned up automatically."
    },
    {
      "id": "failing-and-retrying",
      "title": "Failing and retrying",
      "role": "content",
      "text": "`fail()` either retries the job (back to pending) or moves it to the failed list once retries are exhausted:\n\n[code example]\n\nThe attempt counter is incremented on every `claim()`, so a job that fails three times is moved to the failed list with `attempts = 3` and the final `last_error` preserved."
    },
    {
      "id": "reclaiming-stuck-jobs",
      "title": "Reclaiming stuck jobs",
      "role": "content",
      "text": "If a worker dies mid-job — the process is killed, the host loses power, the network partitions — the job sits in the processing list with a `claimed_at_ms` that never advances. A periodic call to `reclaimStuck()` walks the processing list and moves any job past the visibility timeout back to pending:\n\n[code example]\n\nThe Lua script also handles a narrower race: a worker that crashed between `BRPOPLPUSH` and writing `claimed_at_ms`. Those jobs are reclaimed after `2 × visibility_ms` using `enqueued_at_ms` as a fallback timer, so they aren't stranded forever."
    },
    {
      "id": "stats-and-history",
      "title": "Stats and history",
      "role": "content",
      "text": "`stats()` reports queue depth plus per-process counters:\n\n[code example]\n\nThe completed and failed lists are capped via `LTRIM` so they never grow unbounded; a real deployment would also write completion events to a separate audit log if needed."
    },
    {
      "id": "prerequisites",
      "title": "Prerequisites",
      "role": "content",
      "text": "Before running the demo, make sure that:\n\n* Redis 6.2 or later is running locally on the default port (6379). Earlier versions still work, since the helper uses commands that have existed since Redis 2.6.\n* Java 17 or later (the demo uses text-block-free string concatenation but still relies on a modern JDK).\n* Jedis 5.x is on the classpath. The smallest workable classpath is the Jedis jar plus its two transitive dependencies, `commons-pool2` and `slf4j-api`.\n\nIf you use Maven:\n\n[code example]\n\nIf you use Gradle:\n\n[code example]"
    },
    {
      "id": "running-the-demo",
      "title": "Running the demo",
      "role": "content",
      "text": ""
    },
    {
      "id": "get-the-source-files",
      "title": "Get the source files",
      "role": "content",
      "text": "The demo consists of five Java source files. Download them from the [`java-jedis` source folder](https://github.com/redis/docs/tree/main/content/develop/use-cases/job-queue/java-jedis) on GitHub, or grab them with `curl`:\n\n[code example]"
    },
    {
      "id": "start-the-demo-server",
      "title": "Start the demo server",
      "role": "content",
      "text": "A local demo server is included to show the queue in action ([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/job-queue/java-jedis/DemoServer.java)). Compile and run with `javac` and `java`, listing each jar on the classpath:\n\n[code example]\n\nYou should see:\n\n[code example]\n\nOpen [http://127.0.0.1:8793](http://127.0.0.1:8793) in a browser. You can:\n\n* Enqueue jobs of different kinds (email, webhook, thumbnail, invoice) in batches.\n* Start a pool of workers with configurable size, work latency, and *failure* / *hang* rates. A non-zero hang rate simulates worker crashes.\n* Click **Run reclaim sweep** to move any timed-out processing jobs back to pending.\n* Watch pending / processing / completed / failed lists update every 800 ms.\n\nIf your Redis server is running elsewhere, start the demo with `--redis-host` and `--redis-port`. You can also tune the visibility timeout with `--visibility-ms`."
    },
    {
      "id": "the-mock-worker-pool",
      "title": "The mock worker pool",
      "role": "content",
      "text": "The demo includes a small `JobWorker` ([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/job-queue/java-jedis/JobWorker.java)) and `WorkerPool` ([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/job-queue/java-jedis/WorkerPool.java)) that stand in for whatever real background work your application would run. Each worker:\n\n* Blocks on `queue.claim()` for new jobs.\n* Sleeps `workLatencyMs` to simulate doing the work.\n* Either completes successfully, fails (calling `queue.fail()`), or *hangs* — returning without completing or failing the job so the reclaimer has to recover it.\n\nWorkers run on daemon threads spawned by the pool; an `AtomicBoolean` stop flag lets the HTTP handlers shut workers down between requests. The `failRate` and `hangRate` knobs let you watch the at-least-once delivery and reclaim behaviours from the UI without writing test code."
    },
    {
      "id": "production-usage",
      "title": "Production usage",
      "role": "content",
      "text": ""
    },
    {
      "id": "choose-a-visibility-timeout-that-matches-your-worst-case-job-latency",
      "title": "Choose a visibility timeout that matches your worst-case job latency",
      "role": "content",
      "text": "The visibility timeout has to exceed the longest real job time, with margin. If it's too short, a healthy worker that's running a slow job will get its work duplicated when the reclaimer fires. If it's too long, a real crash takes longer to detect. Most production deployments use a per-queue value tuned to the 99th-percentile job latency — for example, 2 minutes for email and 30 minutes for video transcoding."
    },
    {
      "id": "run-the-reclaimer-on-a-schedule",
      "title": "Run the reclaimer on a schedule",
      "role": "content",
      "text": "The demo only reclaims when you click the button. In production, run `reclaimStuck()` from a `ScheduledExecutorService` (every few seconds for fast queues, every minute for slow ones), or from each worker before it blocks on `claim()`. Both patterns work as long as *someone* runs the sweep."
    },
    {
      "id": "size-jedispool-for-your-worker-count",
      "title": "Size `JedisPool` for your worker count",
      "role": "content",
      "text": "`JedisPool` is thread-safe and connections are released back to the pool when the try-with-resources block exits. The demo bumps `maxTotal` to 32 to support the blocking `BRPOPLPUSH` call held by each worker plus the per-request connections used by the HTTP handlers. As a rule of thumb, `maxTotal` should be at least *workers + concurrent HTTP request threads + reclaimer threads + headroom*."
    },
    {
      "id": "use-a-separate-redis-database-or-key-prefix-per-queue",
      "title": "Use a separate Redis database or key prefix per queue",
      "role": "content",
      "text": "The helper takes a `queueName` argument so you can run multiple independent queues against one Redis instance — for example, one queue per priority level, or one per job kind. Keep queue keys under a clearly-namespaced prefix (here, `queue:jobs-jedis:*`) so they're easy to inspect and easy to clear without touching application data."
    },
    {
      "id": "cap-the-completed-and-failed-history",
      "title": "Cap the completed and failed history",
      "role": "content",
      "text": "The demo keeps the last 50 completed and 50 failed job IDs via `LTRIM`. If you need longer history for audit purposes, write completion events to a separate Redis Stream (or to an external store) and keep the in-queue history short. Stream consumer groups give you the same fan-out semantics with a much richer history."
    },
    {
      "id": "tune-maxattempts-per-job-kind",
      "title": "Tune `maxAttempts` per job kind",
      "role": "content",
      "text": "A blanket `maxAttempts = 3` is a reasonable default for transient failures (network timeouts, rate limits). Jobs that talk to non-idempotent external systems — for example, posting a Stripe charge — need either application-level idempotency keys or a much lower retry count. The helper exposes `maxAttempts` so each queue can pick its own policy."
    },
    {
      "id": "inspect-queue-state-directly-in-redis",
      "title": "Inspect queue state directly in Redis",
      "role": "content",
      "text": "Because the queue is just lists and hashes, you can inspect it with `redis-cli`:\n\n[code example]"
    },
    {
      "id": "learn-more",
      "title": "Learn more",
      "role": "related",
      "text": "This example uses the following Redis commands:\n\n* [`LPUSH`](https://redis.io/docs/latest/commands/lpush) to enqueue a job ID.\n* [`BRPOPLPUSH`](https://redis.io/docs/latest/commands/brpoplpush) to atomically claim a job into the processing list.\n* [`LREM`](https://redis.io/docs/latest/commands/lrem) to remove a job from the processing list on complete or fail.\n* [`LRANGE`](https://redis.io/docs/latest/commands/lrange) and [`LLEN`](https://redis.io/docs/latest/commands/llen) to read queue depth and list contents.\n* [`LTRIM`](https://redis.io/docs/latest/commands/ltrim) to cap the completed and failed history.\n* [`HSET`](https://redis.io/docs/latest/commands/hset) and [`HGETALL`](https://redis.io/docs/latest/commands/hgetall) for job metadata.\n* [`HINCRBY`](https://redis.io/docs/latest/commands/hincrby) for the attempt counter.\n* [`EXPIRE`](https://redis.io/docs/latest/commands/expire) for automatic cleanup of completed and failed jobs.\n* [`PUBLISH`](https://redis.io/docs/latest/commands/publish) for job-completion notifications.\n* [`EVAL`](https://redis.io/docs/latest/commands/eval) for atomic complete, fail, and reclaim flows.\n\nSee the [Jedis documentation](https://redis.io/docs/latest/develop/clients/jedis) for full client reference."
    }
  ],
  "examples": [
    {
      "id": "the-redisjobqueue-helper-ex0",
      "language": "java",
      "code": "import java.util.Map;\nimport redis.clients.jedis.JedisPool;\n\npublic class Main {\n    public static void main(String[] args) {\n        JedisPool pool = new JedisPool(\"localhost\", 6379);\n        RedisJobQueue queue = new RedisJobQueue(pool, \"jobs-jedis\", 5000, 300, 50, 3);\n\n        String jobId = queue.enqueue(Map.of(\n                \"kind\", \"email\",\n                \"recipient\", \"alice@example.com\"\n        ));\n\n        // In a worker thread:\n        RedisJobQueue.ClaimedJob job = queue.claim(1000);\n        if (job != null) {\n            try {\n                // ... run the job ...\n                queue.complete(job, Map.of(\"sent_at\", \"2026-05-11T15:00:00Z\"));\n            } catch (Exception exc) {\n                queue.fail(job, exc.getMessage());\n            }\n        }\n\n        // In a periodic sweeper:\n        java.util.List<String> reclaimed = queue.reclaimStuck();\n    }\n}",
      "section_id": "the-redisjobqueue-helper"
    },
    {
      "id": "data-model-ex0",
      "language": "text",
      "code": "queue:jobs-jedis:pending          (list)   pending job IDs, oldest at the right\nqueue:jobs-jedis:processing       (list)   claimed but not yet completed\nqueue:jobs-jedis:completed        (list)   recent successes (LTRIM-capped history)\nqueue:jobs-jedis:failed           (list)   terminally failed jobs\nqueue:jobs-jedis:job:{id}         (hash)   per-job metadata\nqueue:jobs-jedis:events           (pubsub) completion notifications",
      "section_id": "data-model"
    },
    {
      "id": "data-model-ex1",
      "language": "text",
      "code": "queue:jobs-jedis:job:9a4f...\n  id              = 9a4f...\n  payload         = {\"kind\":\"email\",\"recipient\":\"alice@example.com\"}\n  status          = pending | processing | completed | failed\n  attempts        = 1\n  enqueued_at_ms  = 1715441000000\n  claimed_at_ms   = 1715441000123\n  claim_token     = b3c0d1e2...        (per-claim random token)\n  completed_at_ms = 1715441000456\n  result          = {\"sent_at\":\"...\"}\n  last_error      = \"smtp timeout\"",
      "section_id": "data-model"
    },
    {
      "id": "enqueueing-jobs-ex0",
      "language": "java",
      "code": "public String enqueue(Map<String, Object> payload) {\n    String jobId = randomTokenHex(8);\n    long now = System.currentTimeMillis();\n    Map<String, String> meta = new LinkedHashMap<>();\n    meta.put(\"id\", jobId);\n    meta.put(\"payload\", JsonUtil.toJson(payload));\n    meta.put(\"status\", \"pending\");\n    meta.put(\"attempts\", \"0\");\n    meta.put(\"enqueued_at_ms\", Long.toString(now));\n    meta.put(\"claim_token\", \"\");\n\n    try (Jedis jedis = pool.getResource()) {\n        Pipeline pipe = jedis.pipelined();\n        pipe.hset(metaKey(jobId), meta);\n        pipe.lpush(pendingKey, jobId);\n        pipe.sync();\n    }\n    return jobId;\n}",
      "section_id": "enqueueing-jobs"
    },
    {
      "id": "claiming-jobs-with-brpoplpush-ex0",
      "language": "java",
      "code": "public ClaimedJob claim(long timeoutMs) {\n    double timeoutSec = Math.max(timeoutMs / 1000.0, 0.1);\n    String jobId;\n    try (Jedis jedis = pool.getResource()) {\n        jobId = jedis.brpoplpush(pendingKey, processingKey, (int) Math.ceil(timeoutSec));\n    }\n    if (jobId == null) {\n        return null;\n    }\n\n    String token = randomTokenHex(8);\n    long now = System.currentTimeMillis();\n    String mk = metaKey(jobId);\n    Map<String, String> meta;\n    try (Jedis jedis = pool.getResource()) {\n        Pipeline pipe = jedis.pipelined();\n        Map<String, String> updates = new LinkedHashMap<>();\n        updates.put(\"status\", \"processing\");\n        updates.put(\"claimed_at_ms\", Long.toString(now));\n        updates.put(\"claim_token\", token);\n        pipe.hset(mk, updates);\n        pipe.hincrBy(mk, \"attempts\", 1);\n        Response<Map<String, String>> resp = pipe.hgetAll(mk);\n        pipe.sync();\n        meta = resp.get();\n    }\n    // ... parse payload, attempts, and return a ClaimedJob ...\n}",
      "section_id": "claiming-jobs-with-brpoplpush"
    },
    {
      "id": "completing-jobs-ex0",
      "language": "java",
      "code": "public boolean complete(ClaimedJob job, Map<String, Object> result) {\n    List<String> keys = Arrays.asList(metaPrefix, processingKey, completedKey);\n    List<String> args = Arrays.asList(\n            job.id,\n            job.claimToken,\n            \"completed\",\n            Long.toString(System.currentTimeMillis()),\n            JsonUtil.toJson(result),\n            Integer.toString(completedTtl),\n            Integer.toString(completedHistory)\n    );\n    Object res;\n    try (Jedis jedis = pool.getResource()) {\n        res = jedis.eval(COMPLETE_SCRIPT, keys, args);\n    }\n    if (res == null || !\"1\".equals(res.toString())) {\n        return false;\n    }\n    // ... publish the completion event ...\n    return true;\n}",
      "section_id": "completing-jobs"
    },
    {
      "id": "failing-and-retrying-ex0",
      "language": "java",
      "code": "public boolean fail(ClaimedJob job, String error) {\n    boolean retry = job.attempts < maxAttempts;\n    List<String> keys = Arrays.asList(metaPrefix, processingKey, pendingKey, failedKey);\n    List<String> args = Arrays.asList(\n            job.id,\n            job.claimToken,\n            error,\n            Long.toString(System.currentTimeMillis()),\n            Integer.toString(completedTtl),\n            Integer.toString(completedHistory),\n            retry ? \"1\" : \"0\"\n    );\n    Object res;\n    try (Jedis jedis = pool.getResource()) {\n        res = jedis.eval(FAIL_SCRIPT, keys, args);\n    }\n    return res != null && !\"0\".equals(res.toString());\n}",
      "section_id": "failing-and-retrying"
    },
    {
      "id": "reclaiming-stuck-jobs-ex0",
      "language": "java",
      "code": "public List<String> reclaimStuck() {\n    List<String> keys = Arrays.asList(pendingKey, processingKey, metaPrefix);\n    List<String> args = Arrays.asList(\n            Long.toString(System.currentTimeMillis()),\n            Long.toString(visibilityMs)\n    );\n    Object res;\n    try (Jedis jedis = pool.getResource()) {\n        res = jedis.eval(RECLAIM_SCRIPT, keys, args);\n    }\n    // ... unwrap the list of reclaimed IDs ...\n}",
      "section_id": "reclaiming-stuck-jobs"
    },
    {
      "id": "stats-and-history-ex0",
      "language": "java",
      "code": "public Map<String, Object> stats() {\n    long pending, processing, completed, failed;\n    try (Jedis jedis = pool.getResource()) {\n        Pipeline pipe = jedis.pipelined();\n        Response<Long> pendingResp = pipe.llen(pendingKey);\n        Response<Long> processingResp = pipe.llen(processingKey);\n        Response<Long> completedResp = pipe.llen(completedKey);\n        Response<Long> failedResp = pipe.llen(failedKey);\n        pipe.sync();\n        pending = pendingResp.get();\n        processing = processingResp.get();\n        completed = completedResp.get();\n        failed = failedResp.get();\n    }\n    Map<String, Object> out = new LinkedHashMap<>();\n    out.put(\"enqueued_total\", enqueuedTotal);\n    out.put(\"completed_total\", completedTotal);\n    out.put(\"failed_total\", failedTotal);\n    out.put(\"reclaimed_total\", reclaimedTotal);\n    out.put(\"pending_depth\", pending);\n    out.put(\"processing_depth\", processing);\n    out.put(\"completed_depth\", completed);\n    out.put(\"failed_depth\", failed);\n    out.put(\"visibility_ms\", visibilityMs);\n    return out;\n}",
      "section_id": "stats-and-history"
    },
    {
      "id": "prerequisites-ex0",
      "language": "xml",
      "code": "<dependency>\n    <groupId>redis.clients</groupId>\n    <artifactId>jedis</artifactId>\n    <version>5.0.1</version>\n</dependency>",
      "section_id": "prerequisites"
    },
    {
      "id": "prerequisites-ex1",
      "language": "groovy",
      "code": "implementation 'redis.clients:jedis:5.0.1'",
      "section_id": "prerequisites"
    },
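    {
      "id": "tune-maxattempts-per-job-kind-ex0",
      "language": "java",
      "code": "// Hypothetical sketch: an application-level idempotency key guarding a\n// non-idempotent job, as suggested in \"Tune maxAttempts per job kind\".\n// The key name, TTL, and orderId variable are illustrative, not part of\n// the demo's source.\nString idemKey = \"idempotency:charge:\" + orderId;\ntry (Jedis jedis = pool.getResource()) {\n    String ok = jedis.set(idemKey, job.id, SetParams.setParams().nx().ex(86400));\n    if (ok == null) {\n        // A previous attempt already performed this charge; record success\n        // instead of charging again.\n        queue.complete(job, Map.of(\"deduplicated\", true));\n        return;\n    }\n}\n// ... now safe to call the external payment API at most once per key ...",
      "section_id": "tune-maxattempts-per-job-kind"
    },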
    {
      "id": "get-the-source-files-ex0",
      "language": "bash",
      "code": "mkdir job-queue-demo && cd job-queue-demo\nBASE=https://raw.githubusercontent.com/redis/docs/main/content/develop/use-cases/job-queue/java-jedis\ncurl -O $BASE/RedisJobQueue.java\ncurl -O $BASE/JobWorker.java\ncurl -O $BASE/WorkerPool.java\ncurl -O $BASE/DemoServer.java\ncurl -O $BASE/JsonUtil.java",
      "section_id": "get-the-source-files"
    },
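    {
      "id": "choose-a-visibility-timeout-that-matches-your-worst-case-job-latency-ex0",
      "language": "java",
      "code": "// Sketch: one queue per job kind, each with a visibility timeout sized to\n// that kind's worst-case latency. The constructor argument order shown here\n// follows the earlier Main example; check RedisJobQueue.java for the exact\n// signature.\nRedisJobQueue emailQueue =\n        new RedisJobQueue(pool, \"jobs-email\", 2 * 60 * 1000, 300, 50, 3);\nRedisJobQueue transcodeQueue =\n        new RedisJobQueue(pool, \"jobs-transcode\", 30 * 60 * 1000, 300, 50, 3);",
      "section_id": "choose-a-visibility-timeout-that-matches-your-worst-case-job-latency"
    },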
    {
      "id": "start-the-demo-server-ex0",
      "language": "bash",
      "code": "javac -cp jedis-5.0.1.jar:commons-pool2-2.12.1.jar:slf4j-api-2.0.12.jar \\\n      JsonUtil.java RedisJobQueue.java JobWorker.java WorkerPool.java DemoServer.java\n\njava  -cp .:jedis-5.0.1.jar:commons-pool2-2.12.1.jar:slf4j-api-2.0.12.jar \\\n      DemoServer --port 8793 --visibility-ms 5000",
      "section_id": "start-the-demo-server"
    },
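    {
      "id": "size-jedispool-for-your-worker-count-ex0",
      "language": "java",
      "code": "// Sketch: derive maxTotal from the rule of thumb in this section.\n// The thread counts are illustrative.\nint workers = 8;          // each holds a connection during BRPOPLPUSH\nint httpThreads = 4;      // per-request connections\nint reclaimers = 1;\nint headroom = 4;\n\nGenericObjectPoolConfig<Jedis> cfg = new GenericObjectPoolConfig<>();\ncfg.setMaxTotal(workers + httpThreads + reclaimers + headroom);\ncfg.setMaxIdle(8);\ncfg.setTestOnBorrow(true); // validate connections before handing them out\n\nJedisPool pool = new JedisPool(cfg, \"localhost\", 6379);",
      "section_id": "size-jedispool-for-your-worker-count"
    },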
    {
      "id": "start-the-demo-server-ex1",
      "language": "text",
      "code": "Redis job-queue demo server listening on http://127.0.0.1:8793\nUsing Redis at localhost:6379\nVisibility timeout: 5000 ms",
      "section_id": "start-the-demo-server"
    },
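    {
      "id": "run-the-reclaimer-on-a-schedule-ex0",
      "language": "java",
      "code": "// Sketch: run the reclaim sweep every few seconds with a\n// ScheduledExecutorService. Assumes the RedisJobQueue helper from this\n// guide; the 5-second interval is illustrative.\nScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();\nsweeper.scheduleAtFixedRate(() -> {\n    try {\n        List<String> reclaimed = queue.reclaimStuck();\n        if (!reclaimed.isEmpty()) {\n            System.out.println(\"Reclaimed \" + reclaimed.size() + \" stuck job(s)\");\n        }\n    } catch (Exception exc) {\n        // Never let a failed sweep kill the scheduler thread.\n        exc.printStackTrace();\n    }\n}, 5, 5, TimeUnit.SECONDS);",
      "section_id": "run-the-reclaimer-on-a-schedule"
    },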
    {
      "id": "inspect-queue-state-directly-in-redis-ex0",
      "language": "bash",
      "code": "# How many pending jobs?\nredis-cli LLEN queue:jobs-jedis:pending\n\n# Look at the next 5 jobs to be picked up.\nredis-cli LRANGE queue:jobs-jedis:pending -5 -1\n\n# Read a job's metadata.\nredis-cli HGETALL queue:jobs-jedis:job:9a4f0d1c\n\n# How many jobs are currently being processed?\nredis-cli LLEN queue:jobs-jedis:processing\n\n# Clear everything for this queue (be careful — this deletes work).\nredis-cli --scan --pattern 'queue:jobs-jedis:*' | xargs redis-cli DEL",
      "section_id": "inspect-queue-state-directly-in-redis"
    }
  ]
}
