{
  "id": "rust",
  "title": "Redis job queue with redis-rs",
  "url": "https://redis.io/docs/latest/develop/use-cases/job-queue/rust/",
  "summary": "Implement a Redis job queue in Rust with redis-rs",
  "tags": [
    "docs",
    "develop",
    "stack",
    "oss",
    "rs",
    "rc"
  ],
  "last_updated": "2026-05-14T08:58:05-05:00",
  "children": [],
  "page_type": "content",
  "content_hash": "559161b1238c674d115174110a6c3af331ca9d4d41103b3b76969008693a1fae",
  "sections": [
    {
      "id": "overview",
      "title": "Overview",
      "role": "overview",
      "text": "This guide shows you how to implement a Redis-backed job queue in Rust with the [`redis`](https://crates.io/crates/redis) crate (redis-rs). It includes a small async web server built with [`axum`](https://docs.rs/axum/) so you can enqueue jobs, watch a pool of workers drain them, and see the reclaimer recover jobs from a simulated worker crash."
    },
    {
      "id": "overview",
      "title": "Overview",
      "role": "overview",
      "text": "A job queue lets your application offload background work — sending email, processing payments, image transcoding, ML inference, webhooks — from the request path. Producers enqueue jobs in milliseconds and return to the user; workers pull from the queue and process them on their own schedule.\n\nThat gives you:\n\n* Low-latency user-facing requests, even when downstream work is slow or bursty\n* Horizontal scale across many worker processes that share one Redis instance\n* At-least-once delivery so a worker crash doesn't lose work\n* Visibility-timeout reclaim that returns stuck jobs to the queue automatically\n* Job metadata, retry counts, and completion results in Redis hashes with TTL\n\nIn this example, each job is identified by a random hex ID and its payload, status, and result live in a Redis hash under `queue:jobs:job:{id}`. Pending IDs sit in a list, claimed IDs move atomically to a *processing* list, and completed or failed IDs land in short history lists."
    },
    {
      "id": "how-it-works",
      "title": "How it works",
      "role": "content",
      "text": "The flow looks like this:\n\n1. The application calls `queue.enqueue(payload).await`\n2. The helper writes the job metadata hash and `LPUSH`es the job ID onto the pending list\n3. A worker task calls `queue.claim(timeout_ms).await`\n4. The helper runs [`BLMOVE`](https://redis.io/docs/latest/commands/blmove) to atomically move the next pending ID into the processing list and writes a per-claim `claim_token` plus `claimed_at_ms` on the hash\n5. The worker runs the job and calls `queue.complete(&job, result).await` or `queue.fail(&job, error).await`\n6. `complete` removes the job from the processing list, writes the result, and `LPUSH`es the ID onto the completed history (with `LTRIM` and an `EXPIRE` on the hash for cleanup)\n7. `fail` either retries the job (back to pending) or moves it to the failed list once retries are exhausted\n\nIf a worker dies before completing a job, the job sits in the processing list with a `claimed_at_ms` older than the visibility timeout. A periodic call to `queue.reclaim_stuck().await` finds those jobs and moves them back to pending so another worker can pick them up.\n\nEvery state change holds the token: a worker that has been reclaimed cannot later complete or fail a job another worker has already claimed."
    },
    {
      "id": "the-job-queue-helper",
      "title": "The job queue helper",
      "role": "content",
      "text": "The `RedisJobQueue` struct wraps the queue operations\n([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/job-queue/rust/src/job_queue.rs)):\n\n[code example]\n\n`ConnectionManager` is a cheap-to-clone handle that reconnects automatically. Cloning the manager is the standard way to share Redis access across `tokio::spawn`ed tasks."
    },
    {
      "id": "data-model",
      "title": "Data model",
      "role": "content",
      "text": "Each job's state lives in a Redis hash plus a position in one of four lists:\n\n[code example]\n\nA job's hash carries:\n\n[code example]\n\nThe implementation uses:\n\n* [`LPUSH`](https://redis.io/docs/latest/commands/lpush) to add new job IDs to the pending list\n* [`BLMOVE`](https://redis.io/docs/latest/commands/blmove) to atomically claim a job into the processing list (the modern replacement for the deprecated `BRPOPLPUSH`)\n* [`LREM`](https://redis.io/docs/latest/commands/lrem) to remove a claimed job from the processing list on complete or fail\n* [`LTRIM`](https://redis.io/docs/latest/commands/ltrim) to cap the completed and failed history lists\n* [`HSET`](https://redis.io/docs/latest/commands/hset) / [`HGETALL`](https://redis.io/docs/latest/commands/hgetall) for job metadata\n* [`EXPIRE`](https://redis.io/docs/latest/commands/expire) on completed and failed hashes for automatic cleanup\n* [`PUBLISH`](https://redis.io/docs/latest/commands/publish) on `queue:jobs:events` for completion signalling\n* [Lua scripting](https://redis.io/docs/latest/develop/programmability/eval-intro) ([`EVALSHA`](https://redis.io/docs/latest/commands/evalsha)) for the complete, fail, and reclaim flows so each runs atomically against the processing list and metadata hash"
    },
    {
      "id": "enqueueing-jobs",
      "title": "Enqueueing jobs",
      "role": "content",
      "text": "`enqueue()` writes the metadata hash and pushes the job ID onto the pending list in one pipeline:\n\n[code example]\n\nThe payload is stored as JSON so the queue can carry arbitrary nested structures without forcing every field into a hash."
    },
    {
      "id": "claiming-jobs-with-blmove",
      "title": "Claiming jobs with BLMOVE",
      "role": "content",
      "text": "A worker blocks until a job is available, then atomically pops it from the pending list and pushes it onto the processing list. `BLMOVE` does both in a single Redis call (it's the modern replacement for `BRPOPLPUSH`, which is deprecated in Redis 6.2+):\n\n[code example]\n\nThe `claim_token` is the worker's proof of ownership for this attempt. Every subsequent state change (complete, fail) checks it before touching the processing list, so a worker that hung past the visibility timeout cannot interfere with the new claimant."
    },
    {
      "id": "completing-jobs",
      "title": "Completing jobs",
      "role": "content",
      "text": "`complete()` runs a Lua script so the processing-list removal, the metadata write, and the history push happen atomically:\n\n[code example]\n\nThe Lua script checks the token first and returns `0` if the worker no longer owns the job (because the reclaimer moved it back to pending). The metadata hash also gets an `EXPIRE` so completed jobs are cleaned up automatically."
    },
    {
      "id": "failing-and-retrying",
      "title": "Failing and retrying",
      "role": "content",
      "text": "`fail()` either retries the job (back to pending) or moves it to the failed list once retries are exhausted:\n\n[code example]\n\nThe attempt counter is incremented on every `claim()`, so a job that fails three times is moved to the failed list with `attempts = 3` and the final `last_error` preserved."
    },
    {
      "id": "reclaiming-stuck-jobs",
      "title": "Reclaiming stuck jobs",
      "role": "content",
      "text": "If a worker dies mid-job — the process is killed, the host loses power, the network partitions — the job sits in the processing list with a `claimed_at_ms` that never advances. A periodic call to `reclaim_stuck()` walks the processing list and moves any job past the visibility timeout back to pending:\n\n[code example]\n\nThe Lua script also handles a narrower race: a worker that crashed between `BLMOVE` and writing `claimed_at_ms`. Those jobs are reclaimed after `2 × visibility_ms` using `enqueued_at_ms` as a fallback timer, so they aren't stranded forever."
    },
    {
      "id": "stats-and-history",
      "title": "Stats and history",
      "role": "content",
      "text": "`stats()` reports queue depth plus per-process counters. The counters are held in `Arc<AtomicI64>` so they're cheap to read from any task that holds a clone of the queue:\n\n[code example]\n\nThe completed and failed lists are capped via `LTRIM` so they never grow unbounded; a real deployment would also write completion events to a longer-term audit log if needed."
    },
    {
      "id": "prerequisites",
      "title": "Prerequisites",
      "role": "content",
      "text": "* Redis 6.2 or later running locally on the default port (6379). `BLMOVE` requires Redis 6.2+; on older servers, replace the call with `BRPOPLPUSH`.\n* Rust 1.75 or later.\n* The [`redis`](https://crates.io/crates/redis) crate at 0.27+ (or 0.24+ if you only need `BRPOPLPUSH`-style claims).\n\nAdd the crate dependencies to your `Cargo.toml`:\n\n[code example]\n\nThe `connection-manager` feature gives you `ConnectionManager` — a cheap, cloneable, auto-reconnecting handle that's the right primitive for sharing one Redis client across many `tokio::spawn`ed tasks."
    },
    {
      "id": "running-the-demo",
      "title": "Running the demo",
      "role": "content",
      "text": ""
    },
    {
      "id": "get-the-source-files",
      "title": "Get the source files",
      "role": "content",
      "text": "The demo uses a Cargo project with sources under `src/`. Download the files from the [`rust` source folder](https://github.com/redis/docs/tree/main/content/develop/use-cases/job-queue/rust) on GitHub, or grab them with `curl`:\n\n[code example]"
    },
    {
      "id": "start-the-demo-server",
      "title": "Start the demo server",
      "role": "content",
      "text": "From that directory, build and run:\n\n[code example]\n\nYou should see:\n\n[code example]\n\nOpen [http://127.0.0.1:8798](http://127.0.0.1:8798) in a browser. You can:\n\n* Enqueue jobs of different kinds (email, webhook, thumbnail, invoice) in batches.\n* Start a pool of workers with configurable size, work latency, and *failure* / *hang* rates. A non-zero hang rate simulates worker crashes.\n* Click **Run reclaim sweep** to move any timed-out processing jobs back to pending.\n* Watch pending / processing / completed / failed lists update every 800 ms.\n\nThe demo accepts a `--visibility-ms` flag to tune the visibility timeout, and reads a `REDIS_URL` environment variable if your Redis lives somewhere other than `redis://127.0.0.1:6379/`."
    },
    {
      "id": "the-mock-worker-pool",
      "title": "The mock worker pool",
      "role": "content",
      "text": "The demo includes a small `Worker` and `WorkerPool` ([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/job-queue/rust/src/worker.rs)) that stands in for whatever real background work your application would run. Each worker is a `tokio::spawn`ed task that:\n\n* Blocks on `queue.claim(500).await` for new jobs.\n* `tokio::time::sleep`s `work_latency_ms` to simulate doing the work.\n* Either completes successfully, fails (calling `queue.fail()`), or *hangs* — returning without completing or failing the job so the reclaimer has to recover it.\n\nThe pool's shutdown channel is a `tokio::sync::watch::Receiver<bool>`. Calling `pool.stop()` flips the watch value to `true`; each worker checks it before the next `claim()`. The pool's `WorkerConfig` lives behind a `tokio::sync::Mutex` so the HTTP `/workers/configure` handler can update knobs without restarting the workers.\n\nThe `fail_rate` and `hang_rate` knobs let you watch the at-least-once delivery and reclaim behaviours from the UI without writing test code."
    },
    {
      "id": "production-usage",
      "title": "Production usage",
      "role": "content",
      "text": ""
    },
    {
      "id": "choose-a-visibility-timeout-that-matches-your-worst-case-job-latency",
      "title": "Choose a visibility timeout that matches your worst-case job latency",
      "role": "content",
      "text": "The visibility timeout has to exceed the longest real job time, with margin. If it's too short, a healthy worker that's running a slow job will get its work duplicated when the reclaimer fires. If it's too long, a real crash takes longer to detect. Most production deployments use a per-queue value tuned to the 99th-percentile job latency — for example, 2 minutes for email and 30 minutes for video transcoding."
    },
    {
      "id": "run-the-reclaimer-on-a-schedule",
      "title": "Run the reclaimer on a schedule",
      "role": "content",
      "text": "The demo only reclaims when you click the button. In production, run `reclaim_stuck()` from a `tokio::time::interval` loop (every few seconds for fast queues, every minute for slow ones), or from each worker before it blocks on `claim()`. Both patterns work as long as *someone* runs the sweep."
    },
    {
      "id": "share-one-connectionmanager-across-tasks",
      "title": "Share one `ConnectionManager` across tasks",
      "role": "content",
      "text": "`ConnectionManager` is cheap to `clone()` — internally it's an `Arc` around the real connection — and it handles automatic reconnection on transient failures. The helper struct stores one and clones it inside every async method, so a `WorkerPool` of 32 workers still uses a single underlying multiplexed connection."
    },
    {
      "id": "use-a-separate-redis-database-or-key-prefix-per-queue",
      "title": "Use a separate Redis database or key prefix per queue",
      "role": "content",
      "text": "The helper takes a `queue_name` argument so you can run multiple independent queues against one Redis instance — for example, one queue per priority level, or one per job kind. Keep queue keys under a clearly-namespaced prefix (here, `queue:jobs:*`) so they're easy to inspect and easy to clear without touching application data."
    },
    {
      "id": "cap-the-completed-and-failed-history",
      "title": "Cap the completed and failed history",
      "role": "content",
      "text": "The demo keeps the last 50 completed and 50 failed job IDs via `LTRIM`. If you need longer history for audit purposes, write completion events to a separate Redis Stream (or to an external store) and keep the in-queue history short. Stream consumer groups give you the same fan-out semantics with a much richer history."
    },
    {
      "id": "tune-max-attempts-per-job-kind",
      "title": "Tune `max_attempts` per job kind",
      "role": "content",
      "text": "A blanket `max_attempts = 3` is a reasonable default for transient failures (network timeouts, rate limits). Jobs that talk to non-idempotent external systems — for example, posting a Stripe charge — need either application-level idempotency keys or a much lower retry count. The helper exposes `max_attempts` on `JobQueueOptions` so each queue can pick its own policy."
    },
    {
      "id": "inspect-queue-state-directly-in-redis",
      "title": "Inspect queue state directly in Redis",
      "role": "content",
      "text": "Because the queue is just lists and hashes, you can inspect it with `redis-cli`:\n\n[code example]"
    },
    {
      "id": "learn-more",
      "title": "Learn more",
      "role": "related",
      "text": "This example uses the following Redis commands:\n\n* [`LPUSH`](https://redis.io/docs/latest/commands/lpush) to enqueue a job ID.\n* [`BLMOVE`](https://redis.io/docs/latest/commands/blmove) to atomically claim a job into the processing list.\n* [`LREM`](https://redis.io/docs/latest/commands/lrem) to remove a job from the processing list on complete or fail.\n* [`LRANGE`](https://redis.io/docs/latest/commands/lrange) and [`LLEN`](https://redis.io/docs/latest/commands/llen) to read queue depth and list contents.\n* [`LTRIM`](https://redis.io/docs/latest/commands/ltrim) to cap the completed and failed history.\n* [`HSET`](https://redis.io/docs/latest/commands/hset) and [`HGETALL`](https://redis.io/docs/latest/commands/hgetall) for job metadata.\n* [`HINCRBY`](https://redis.io/docs/latest/commands/hincrby) for the attempt counter.\n* [`EXPIRE`](https://redis.io/docs/latest/commands/expire) for automatic cleanup of completed and failed jobs.\n* [`PUBLISH`](https://redis.io/docs/latest/commands/publish) for job-completion notifications.\n* [`EVALSHA`](https://redis.io/docs/latest/commands/evalsha) for atomic complete, fail, and reclaim flows.\n\nSee the [`redis-rs` crate documentation](https://docs.rs/redis/) for the client reference."
    }
  ],
  "examples": [
    {
      "id": "the-job-queue-helper-ex0",
      "language": "rust",
      "code": "use redis::aio::ConnectionManager;\nuse redis::Client;\nuse serde_json::json;\n\nuse jobqueue_demo::job_queue::{JobQueueOptions, RedisJobQueue};\n\n#[tokio::main]\nasync fn main() -> redis::RedisResult<()> {\n    let client = Client::open(\"redis://127.0.0.1:6379/\")?;\n    let conn = ConnectionManager::new(client).await?;\n    let queue = RedisJobQueue::new(\n        conn,\n        JobQueueOptions {\n            queue_name: \"jobs\".to_string(),\n            visibility_ms: 5000,\n            ..Default::default()\n        },\n    );\n\n    let job_id = queue\n        .enqueue(json!({\"kind\": \"email\", \"recipient\": \"alice@example.com\"}))\n        .await?;\n    println!(\"enqueued {}\", job_id);\n\n    // In a worker task:\n    if let Some(job) = queue.claim(1000).await? {\n        // ... run the job ...\n        queue\n            .complete(&job, json!({\"sent_at\": \"2026-05-11T15:00:00Z\"}))\n            .await?;\n    }\n\n    // In a periodic sweeper:\n    let reclaimed = queue.reclaim_stuck().await?;\n    println!(\"reclaimed {} job(s)\", reclaimed.len());\n\n    Ok(())\n}",
      "section_id": "the-job-queue-helper"
    },
    {
      "id": "data-model-ex0",
      "language": "text",
      "code": "queue:jobs:pending          (list)   pending job IDs, oldest at the right\nqueue:jobs:processing       (list)   claimed but not yet completed\nqueue:jobs:completed        (list)   recent successes (LTRIM-capped history)\nqueue:jobs:failed           (list)   terminally failed jobs\nqueue:jobs:job:{id}         (hash)   per-job metadata\nqueue:jobs:events           (pubsub) completion notifications",
      "section_id": "data-model"
    },
    {
      "id": "data-model-ex1",
      "language": "text",
      "code": "queue:jobs:job:9a4f...\n  id              = 9a4f...\n  payload         = {\"kind\":\"email\",\"recipient\":\"alice@example.com\"}\n  status          = pending | processing | completed | failed\n  attempts        = 1\n  enqueued_at_ms  = 1715441000000\n  claimed_at_ms   = 1715441000123\n  claim_token     = b3c0d1e2...        (per-claim random token)\n  completed_at_ms = 1715441000456\n  result          = {\"sent_at\":\"...\"}\n  last_error      = \"smtp timeout\"",
      "section_id": "data-model"
    },
    {
      "id": "enqueueing-jobs-ex0",
      "language": "rust",
      "code": "pub async fn enqueue(&self, payload: Value) -> RedisResult<String> {\n    let job_id = Self::token_hex(8);\n    let now_ms = Self::now_ms();\n    let meta_key = self.meta_key(&job_id);\n    let payload_str = serde_json::to_string(&payload).unwrap_or_else(|_| \"{}\".to_string());\n\n    let fields: Vec<(&str, String)> = vec![\n        (\"id\", job_id.clone()),\n        (\"payload\", payload_str),\n        (\"status\", \"pending\".to_string()),\n        (\"attempts\", \"0\".to_string()),\n        (\"enqueued_at_ms\", now_ms.to_string()),\n        (\"claim_token\", \"\".to_string()),\n    ];\n\n    let mut conn = self.conn.clone();\n    redis::pipe()\n        .atomic()\n        .hset_multiple(&meta_key, &fields)\n        .ignore()\n        .lpush(&self.pending_key, &job_id)\n        .ignore()\n        .query_async::<_, ()>(&mut conn)\n        .await?;\n\n    self.enqueued_total.fetch_add(1, Ordering::Relaxed);\n    Ok(job_id)\n}",
      "section_id": "enqueueing-jobs"
    },
    {
      "id": "claiming-jobs-with-blmove-ex0",
      "language": "rust",
      "code": "pub async fn claim(&self, timeout_ms: u64) -> RedisResult<Option<ClaimedJob>> {\n    let timeout_secs = (timeout_ms as f64 / 1000.0).max(0.1);\n\n    let mut conn = self.conn.clone();\n    let job_id: Option<String> = redis::cmd(\"BLMOVE\")\n        .arg(&self.pending_key)\n        .arg(&self.processing_key)\n        .arg(\"RIGHT\")\n        .arg(\"LEFT\")\n        .arg(timeout_secs)\n        .query_async(&mut conn)\n        .await?;\n\n    let job_id = match job_id { Some(id) => id, None => return Ok(None) };\n\n    let token = Self::token_hex(8);\n    let now_ms = Self::now_ms();\n    let meta_key = self.meta_key(&job_id);\n\n    let claim_fields: Vec<(&str, String)> = vec![\n        (\"status\", \"processing\".to_string()),\n        (\"claimed_at_ms\", now_ms.to_string()),\n        (\"claim_token\", token.clone()),\n    ];\n\n    let (_, _, meta): ((), i64, HashMap<String, String>) = redis::pipe()\n        .atomic()\n        .hset_multiple(&meta_key, &claim_fields)\n        .hincr(&meta_key, \"attempts\", 1)\n        .hgetall(&meta_key)\n        .query_async(&mut conn)\n        .await?;\n\n    let payload = meta\n        .get(\"payload\")\n        .and_then(|raw| serde_json::from_str::<Value>(raw).ok())\n        .unwrap_or_else(|| json!({}));\n    let attempts = meta\n        .get(\"attempts\")\n        .and_then(|raw| raw.parse::<i64>().ok())\n        .unwrap_or(1);\n\n    Ok(Some(ClaimedJob { id: job_id, payload, attempts, claim_token: token }))\n}",
      "section_id": "claiming-jobs-with-blmove"
    },
    {
      "id": "completing-jobs-ex0",
      "language": "rust",
      "code": "pub async fn complete(&self, job: &ClaimedJob, result: Value) -> RedisResult<bool> {\n    let result_str = serde_json::to_string(&result).unwrap_or_else(|_| \"{}\".to_string());\n    let now_ms = Self::now_ms();\n\n    let mut conn = self.conn.clone();\n    let ok: i64 = Script::new(COMPLETE_SCRIPT)\n        .key(&self.meta_prefix)\n        .key(&self.processing_key)\n        .key(&self.completed_key)\n        .arg(&job.id)\n        .arg(&job.claim_token)\n        .arg(\"completed\")\n        .arg(now_ms)\n        .arg(result_str)\n        .arg(self.completed_ttl as i64)\n        .arg(self.completed_history)\n        .invoke_async(&mut conn)\n        .await?;\n\n    if ok == 0 { return Ok(false); }\n\n    let event = json!({\"id\": job.id, \"status\": \"completed\"}).to_string();\n    let _: i64 = conn.publish(&self.events_channel, event).await?;\n    self.completed_total.fetch_add(1, Ordering::Relaxed);\n    Ok(true)\n}",
      "section_id": "completing-jobs"
    },
    {
      "id": "failing-and-retrying-ex0",
      "language": "rust",
      "code": "pub async fn fail(&self, job: &ClaimedJob, error: &str) -> RedisResult<bool> {\n    let retry = job.attempts < self.max_attempts;\n    let now_ms = Self::now_ms();\n    let retry_arg = if retry { \"1\" } else { \"0\" };\n\n    let mut conn = self.conn.clone();\n    let result: i64 = Script::new(FAIL_SCRIPT)\n        .key(&self.meta_prefix)\n        .key(&self.processing_key)\n        .key(&self.pending_key)\n        .key(&self.failed_key)\n        .arg(&job.id)\n        .arg(&job.claim_token)\n        .arg(error)\n        .arg(now_ms)\n        .arg(self.completed_ttl as i64)\n        .arg(self.completed_history)\n        .arg(retry_arg)\n        .invoke_async(&mut conn)\n        .await?;\n\n    Ok(result != 0)\n}",
      "section_id": "failing-and-retrying"
    },
    {
      "id": "reclaiming-stuck-jobs-ex0",
      "language": "rust",
      "code": "pub async fn reclaim_stuck(&self) -> RedisResult<Vec<String>> {\n    let now_ms = Self::now_ms();\n    let mut conn = self.conn.clone();\n    let reclaimed: Vec<String> = Script::new(RECLAIM_SCRIPT)\n        .key(&self.pending_key)\n        .key(&self.processing_key)\n        .key(&self.meta_prefix)\n        .arg(now_ms)\n        .arg(self.visibility_ms as i64)\n        .invoke_async(&mut conn)\n        .await?;\n\n    if !reclaimed.is_empty() {\n        self.reclaimed_total\n            .fetch_add(reclaimed.len() as i64, Ordering::Relaxed);\n    }\n    Ok(reclaimed)\n}",
      "section_id": "reclaiming-stuck-jobs"
    },
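    {
      "id": "run-the-reclaimer-on-a-schedule-ex0",
      "language": "rust",
      "code": "// Sketch, not part of the demo source: run the reclaim sweep on a timer.\n// Assumes `queue` is a `RedisJobQueue` like the one in this guide and that\n// it implements `Clone` (it holds a `ConnectionManager` plus `Arc` counters).\nlet sweeper_queue = queue.clone();\ntokio::spawn(async move {\n    let mut tick = tokio::time::interval(std::time::Duration::from_secs(5));\n    loop {\n        tick.tick().await;\n        match sweeper_queue.reclaim_stuck().await {\n            Ok(ids) if !ids.is_empty() => eprintln!(\"reclaimed {} job(s)\", ids.len()),\n            Ok(_) => {}\n            Err(err) => eprintln!(\"reclaim sweep failed: {}\", err),\n        }\n    }\n});",
      "section_id": "run-the-reclaimer-on-a-schedule"
    },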
    {
      "id": "stats-and-history-ex0",
      "language": "rust",
      "code": "pub async fn stats(&self) -> RedisResult<Value> {\n    let mut conn = self.conn.clone();\n    let (pending, processing, completed, failed): (i64, i64, i64, i64) = redis::pipe()\n        .atomic()\n        .llen(&self.pending_key)\n        .llen(&self.processing_key)\n        .llen(&self.completed_key)\n        .llen(&self.failed_key)\n        .query_async(&mut conn)\n        .await?;\n\n    Ok(json!({\n        \"enqueued_total\": self.enqueued_total.load(Ordering::Relaxed),\n        \"completed_total\": self.completed_total.load(Ordering::Relaxed),\n        \"failed_total\": self.failed_total.load(Ordering::Relaxed),\n        \"reclaimed_total\": self.reclaimed_total.load(Ordering::Relaxed),\n        \"pending_depth\": pending,\n        \"processing_depth\": processing,\n        \"completed_depth\": completed,\n        \"failed_depth\": failed,\n        \"visibility_ms\": self.visibility_ms,\n    }))\n}",
      "section_id": "stats-and-history"
    },
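    {
      "id": "the-mock-worker-pool-ex0",
      "language": "rust",
      "code": "// Sketch, not the demo's worker.rs: the watch-channel shutdown pattern it uses.\n// Assumes `queue: RedisJobQueue` implements `Clone`.\nlet (stop_tx, stop_rx) = tokio::sync::watch::channel(false);\nfor _ in 0..4 {\n    let stop = stop_rx.clone();\n    let worker_queue = queue.clone();\n    tokio::spawn(async move {\n        // Check the flag before each blocking claim, so stop() takes effect\n        // within one claim timeout.\n        while !*stop.borrow() {\n            if let Ok(Some(job)) = worker_queue.claim(500).await {\n                // ... run the real work here, then complete() or fail() ...\n                let _ = worker_queue.complete(&job, serde_json::json!({})).await;\n            }\n        }\n    });\n}\n// Flip the flag; each worker exits before its next claim().\nlet _ = stop_tx.send(true);",
      "section_id": "the-mock-worker-pool"
    },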
    {
      "id": "prerequisites-ex0",
      "language": "toml",
      "code": "[dependencies]\nredis = { version = \"0.27\", features = [\"tokio-comp\", \"aio\", \"connection-manager\"] }\ntokio = { version = \"1\", features = [\"full\"] }\naxum = \"0.7\"\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\nrand = \"0.8\"",
      "section_id": "prerequisites"
    },
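    {
      "id": "share-one-connectionmanager-across-tasks-ex0",
      "language": "rust",
      "code": "// Sketch, not part of the demo source: one ConnectionManager, cloned into\n// each spawned task. All clones share the same underlying connection.\nlet client = redis::Client::open(\"redis://127.0.0.1:6379/\")?;\nlet manager = redis::aio::ConnectionManager::new(client).await?;\nfor _ in 0..32 {\n    let mut conn = manager.clone(); // cheap: internally an Arc\n    tokio::spawn(async move {\n        let depth: redis::RedisResult<i64> = redis::cmd(\"LLEN\")\n            .arg(\"queue:jobs:pending\")\n            .query_async(&mut conn)\n            .await;\n        let _ = depth;\n    });\n}",
      "section_id": "share-one-connectionmanager-across-tasks"
    },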
    {
      "id": "get-the-source-files-ex0",
      "language": "bash",
      "code": "mkdir -p job-queue-demo/src && cd job-queue-demo\nBASE=https://raw.githubusercontent.com/redis/docs/main/content/develop/use-cases/job-queue/rust\ncurl -O $BASE/Cargo.toml\ncurl -O $BASE/Cargo.lock\ncurl -o src/job_queue.rs $BASE/src/job_queue.rs\ncurl -o src/worker.rs $BASE/src/worker.rs\ncurl -o src/main.rs $BASE/src/main.rs",
      "section_id": "get-the-source-files"
    },
    {
      "id": "start-the-demo-server-ex0",
      "language": "bash",
      "code": "cargo run --release",
      "section_id": "start-the-demo-server"
    },
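    {
      "id": "tune-max-attempts-per-job-kind-ex0",
      "language": "rust",
      "code": "// Sketch, not from the demo source: per-queue retry policy via the\n// `max_attempts` field on `JobQueueOptions`. Other fields as shown earlier.\nlet email_queue = RedisJobQueue::new(conn.clone(), JobQueueOptions {\n    queue_name: \"email\".to_string(),\n    max_attempts: 5, // transient SMTP failures are safe to retry\n    ..Default::default()\n});\nlet charge_queue = RedisJobQueue::new(conn, JobQueueOptions {\n    queue_name: \"charges\".to_string(),\n    max_attempts: 1, // non-idempotent downstream; use idempotency keys instead\n    ..Default::default()\n});",
      "section_id": "tune-max-attempts-per-job-kind"
    },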
    {
      "id": "start-the-demo-server-ex1",
      "language": "text",
      "code": "Redis job-queue demo server listening on http://127.0.0.1:8798\nUsing Redis at redis://127.0.0.1:6379/\nVisibility timeout: 5000 ms",
      "section_id": "start-the-demo-server"
    },
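    {
      "id": "cap-the-completed-and-failed-history-ex0",
      "language": "rust",
      "code": "// Sketch, not part of the demo source: mirror completion events into a\n// Redis Stream (hypothetical key queue:jobs:audit) for longer-term history.\n// MAXLEN with ~ keeps the stream approximately capped at 100000 entries.\nlet _entry_id: String = redis::cmd(\"XADD\")\n    .arg(\"queue:jobs:audit\")\n    .arg(\"MAXLEN\").arg(\"~\").arg(100000)\n    .arg(\"*\")\n    .arg(\"job_id\").arg(&job.id)\n    .arg(\"status\").arg(\"completed\")\n    .query_async(&mut conn)\n    .await?;",
      "section_id": "cap-the-completed-and-failed-history"
    },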
    {
      "id": "inspect-queue-state-directly-in-redis-ex0",
      "language": "bash",
      "code": "# How many pending jobs?\nredis-cli LLEN queue:jobs:pending\n\n# Look at the next 5 jobs to be picked up.\nredis-cli LRANGE queue:jobs:pending -5 -1\n\n# Read a job's metadata.\nredis-cli HGETALL queue:jobs:job:9a4f0d1c\n\n# How many jobs are currently being processed?\nredis-cli LLEN queue:jobs:processing\n\n# Clear everything for this queue (be careful — this deletes work).\nredis-cli --scan --pattern 'queue:jobs:*' | xargs redis-cli DEL",
      "section_id": "inspect-queue-state-directly-in-redis"
    }
  ]
}
