{
  "id": "php",
  "title": "Redis job queue with Predis",
  "url": "https://redis.io/docs/latest/develop/use-cases/job-queue/php/",
  "summary": "Implement a Redis job queue in PHP with Predis",
  "tags": [
    "docs",
    "develop",
    "stack",
    "oss",
    "rs",
    "rc"
  ],
  "last_updated": "2026-05-14T08:58:05-05:00",
  "children": [],
  "page_type": "content",
  "content_hash": "909e284f0a7af80660c6c97a9023fa8bab07f01bc1ed40259a344eecea39b2b1",
  "sections": [
    {
      "id": "overview",
      "title": "Overview",
      "role": "overview",
      "text": "This guide shows you how to implement a Redis-backed job queue in PHP with [Predis](https://github.com/predis/predis). It includes a small local web server built on PHP's built-in dev server so you can enqueue jobs, watch a pool of workers drain them, and see the reclaimer recover jobs from a simulated worker crash.\n\nA job queue lets your application offload background work — sending email, processing payments, image transcoding, ML inference, webhooks — from the request path. Producers enqueue jobs in milliseconds and return to the user; workers pull from the queue and process them on their own schedule.\n\nThat gives you:\n\n* Low-latency user-facing requests, even when downstream work is slow or bursty\n* Horizontal scale across many worker processes that share one Redis instance\n* At-least-once delivery so a worker crash doesn't lose work\n* Visibility-timeout reclaim that returns stuck jobs to the queue automatically\n* Job metadata, retry counts, and completion results in Redis hashes with TTL\n\nIn this example, each job is identified by a random hex ID and its payload, status, and result live in a Redis hash under `queue:jobs:job:{id}`. Pending IDs sit in a list, claimed IDs move atomically to a *processing* list, and completed or failed IDs land in short history lists."
    },
    {
      "id": "how-it-works",
      "title": "How it works",
      "role": "content",
      "text": "The flow looks like this:\n\n1. The application calls `$queue->enqueue($payload)`\n2. The helper writes the job metadata hash and `LPUSH`es the job ID onto the pending list\n3. A worker calls `$queue->claim($timeoutMs)`\n4. The helper runs `BRPOPLPUSH` to atomically move the next pending ID into the processing list and writes a per-claim `claim_token` plus `claimed_at_ms` on the hash\n5. The worker runs the job and calls `$queue->complete($job, $result)` or `$queue->fail($job, $error)`\n6. `complete` removes the job from the processing list, writes the result, and `LPUSH`es the ID onto the completed history (with `LTRIM` and an `EXPIRE` on the hash for cleanup)\n7. `fail` either retries the job (back to pending) or moves it to the failed list once retries are exhausted\n\nIf a worker dies before completing a job, the job sits in the processing list with a `claimed_at_ms` older than the visibility timeout. A periodic call to `$queue->reclaimStuck()` finds those jobs and moves them back to pending so another worker can pick them up.\n\nEvery state change verifies the claim token first: a worker whose job was reclaimed after the visibility timeout cannot later complete or fail that job once another worker has claimed it."
    },
    {
      "id": "the-job-queue-helper",
      "title": "The job queue helper",
      "role": "content",
      "text": "The `JobQueue` class wraps the queue operations ([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/job-queue/php/JobQueue.php)):\n\n[code example]"
    },
    {
      "id": "data-model",
      "title": "Data model",
      "role": "content",
      "text": "Each job's state lives in a Redis hash plus a position in one of four lists:\n\n[code example]\n\nA job's hash carries:\n\n[code example]\n\nBecause PHP's built-in dev server re-executes the script for each HTTP request, in-memory counters (`enqueued_total`, `completed_total`, etc.) can't live in object properties — they wouldn't survive between requests. Instead the helper stores them in a Redis hash under `demo:queue_stats:{queueName}`, incremented with `HINCRBY` on each state change.\n\nThe implementation uses:\n\n* [`LPUSH`](https://redis.io/docs/latest/commands/lpush) to add new job IDs to the pending list\n* [`BRPOPLPUSH`](https://redis.io/docs/latest/commands/brpoplpush) to atomically claim a job into the processing list\n* [`LREM`](https://redis.io/docs/latest/commands/lrem) to remove a claimed job from the processing list on complete or fail\n* [`LTRIM`](https://redis.io/docs/latest/commands/ltrim) to cap the completed and failed history lists\n* [`HSET`](https://redis.io/docs/latest/commands/hset) / [`HGETALL`](https://redis.io/docs/latest/commands/hgetall) for job metadata\n* [`EXPIRE`](https://redis.io/docs/latest/commands/expire) on completed and failed hashes for automatic cleanup\n* [`PUBLISH`](https://redis.io/docs/latest/commands/publish) on `queue:jobs:events` for completion signalling\n* [Lua scripting](https://redis.io/docs/latest/develop/programmability/eval-intro) for the complete, fail, and reclaim flows so each runs atomically against the processing list and metadata hash"
    },
    {
      "id": "enqueueing-jobs",
      "title": "Enqueueing jobs",
      "role": "content",
      "text": "`enqueue()` writes the metadata hash and pushes the job ID onto the pending list in one pipeline:\n\n[code example]\n\nThe payload is stored as JSON so the queue can carry arbitrary nested structures without forcing every field into a hash. The `flattenFields()` helper turns the associative `$meta` array into the variadic `field, value, field, value` argument list that Predis 3.x's `hset()` expects."
    },
    {
      "id": "claiming-jobs-with-brpoplpush",
      "title": "Claiming jobs with BRPOPLPUSH",
      "role": "content",
      "text": "A worker blocks until a job is available, then atomically pops it from the pending list and pushes it onto the processing list. `BRPOPLPUSH` does both in a single Redis call:\n\n[code example]\n\nThe `claim_token` is the worker's proof of ownership for this attempt. Every subsequent state change (complete, fail) checks it before touching the processing list, so a worker that hung past the visibility timeout cannot interfere with the new claimant.\n\nPredis exposes `BRPOPLPUSH` directly and accepts a whole-second timeout; sub-second blocking would need either a custom command or a non-blocking poll loop. Note that `BRPOPLPUSH` is deprecated as of Redis 6.2 in favour of [`BLMOVE`](https://redis.io/docs/latest/commands/blmove) — `BLMOVE source destination RIGHT LEFT timeout` behaves identically — though both commands remain available."
    },
    {
      "id": "completing-jobs",
      "title": "Completing jobs",
      "role": "content",
      "text": "`complete()` runs a Lua script via `EVAL` so the processing-list removal, the metadata write, and the history push happen atomically:\n\n[code example]\n\nThe Lua script checks the token first and returns `0` if the worker no longer owns the job (because the reclaimer moved it back to pending). The metadata hash also gets an `EXPIRE` so completed jobs are cleaned up automatically."
    },
    {
      "id": "failing-and-retrying",
      "title": "Failing and retrying",
      "role": "content",
      "text": "`fail()` either retries the job (back to pending) or moves it to the failed list once retries are exhausted:\n\n[code example]\n\nThe attempt counter is incremented on every `claim()`, so a job that fails three times is moved to the failed list with `attempts = 3` and the final `last_error` preserved."
    },
    {
      "id": "reclaiming-stuck-jobs",
      "title": "Reclaiming stuck jobs",
      "role": "content",
      "text": "If a worker dies mid-job — the process is killed, the host loses power, the network partitions — the job sits in the processing list with a `claimed_at_ms` that never advances. A periodic call to `reclaimStuck()` walks the processing list and moves any job past the visibility timeout back to pending:\n\n[code example]\n\nThe Lua script also handles a narrower race: a worker that crashed between `BRPOPLPUSH` and writing `claimed_at_ms`. Those jobs are reclaimed after `2 × visibility_ms` using `enqueued_at_ms` as a fallback timer, so they aren't stranded forever."
    },
    {
      "id": "stats-and-history",
      "title": "Stats and history",
      "role": "content",
      "text": "`stats()` reports queue depth plus the cross-process counters:\n\n[code example]\n\nThe completed and failed lists are capped via `LTRIM` so they never grow unbounded; a real deployment would also write completion events to a separate Redis Stream or audit store if it needs longer history."
    },
    {
      "id": "prerequisites",
      "title": "Prerequisites",
      "role": "content",
      "text": "* Redis running locally on the default port (6379). Redis 6.2 or later is recommended, but the helper only uses commands that have existed since Redis 2.6, so earlier versions also work.\n* PHP 8.1 or later, with the `posix` and `pcntl` extensions enabled (both ship with the official PHP binary on macOS and most Linux distros).\n* The Predis client (3.x). Install it with [Composer](https://getcomposer.org/):\n\n  [code example]"
    },
    {
      "id": "running-the-demo",
      "title": "Running the demo",
      "role": "content",
      "text": ""
    },
    {
      "id": "get-the-source-files",
      "title": "Get the source files",
      "role": "content",
      "text": "The demo consists of six files. Download them from the [`php` source folder](https://github.com/redis/docs/tree/main/content/develop/use-cases/job-queue/php) on GitHub, or grab them with `curl`:\n\n[code example]\n\nThen install dependencies:\n\n[code example]"
    },
    {
      "id": "start-the-demo-server",
      "title": "Start the demo server",
      "role": "content",
      "text": "From that directory:\n\n[code example]\n\nYou should see:\n\n[code example]\n\nOpen [http://127.0.0.1:8796](http://127.0.0.1:8796) in a browser. You can:\n\n* Enqueue jobs of different kinds (email, webhook, thumbnail, invoice) in batches.\n* Start a pool of workers with configurable size, work latency, and *failure* / *hang* rates. A non-zero hang rate simulates worker crashes.\n* Click **Run reclaim sweep** to move any timed-out processing jobs back to pending.\n* Watch pending / processing / completed / failed lists update every 800 ms.\n\nTo point the demo at a different Redis instance, set `REDIS_HOST`, `REDIS_PORT`, and `VISIBILITY_MS` before launching the server:\n\n[code example]"
    },
    {
      "id": "the-worker-process-and-supervisor",
      "title": "The worker process and supervisor",
      "role": "content",
      "text": "The demo uses two files that together stand in for whatever real background work your application would run:\n\n* [`JobWorker.php`](https://github.com/redis/docs/blob/main/content/develop/use-cases/job-queue/php/JobWorker.php) — the `JobWorker` class. A worker calls `$queue->claim(500)`, sleeps `workLatencyMs` to simulate doing the work, then either completes the job, fails it, or *hangs* — returning without completing or failing the job so the reclaimer has to recover it.\n* [`worker.php`](https://github.com/redis/docs/blob/main/content/develop/use-cases/job-queue/php/worker.php) — a CLI entry point that constructs a `JobQueue` and a `JobWorker` from command-line flags, then calls `$worker->run()` until SIGTERM. Run one manually like this:\n\n  [code example]\n\nWhen the UI's **Start / apply** button is clicked, the demo server spawns one `worker.php` process per worker through the `WorkerSupervisor` ([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/job-queue/php/WorkerSupervisor.php)). The supervisor:\n\n* Builds the worker command line with the requested size, latency, and failure / hang rates.\n* Launches each worker via `proc_open()` so it survives the `php -S` request that started it.\n* Records each PID under `demo:workers:pids` in Redis. The next HTTP request — which starts with fresh script state — reads that hash to learn which workers are alive.\n* Sends `SIGTERM` via `posix_kill()` when the **Stop workers** button is clicked.\n\nThe `fail_rate` and `hang_rate` knobs let you watch the at-least-once delivery and reclaim behaviours from the UI without writing test code."
    },
    {
      "id": "production-usage",
      "title": "Production usage",
      "role": "content",
      "text": ""
    },
    {
      "id": "don-t-try-to-host-workers-inside-the-web-server",
      "title": "Don't try to host workers inside the web server",
      "role": "content",
      "text": "PHP's traditional share-nothing request model — which the `php -S` server in this demo follows, re-running the script for every request — means worker threads or in-process pools die with the request that started them. In production, run workers as **separate long-lived processes**:\n\n* A systemd unit (`Type=simple`, `Restart=always`) per worker.\n* A container per worker scaled by Kubernetes, ECS, or Nomad.\n* A supervisor like Supervisord or Horizon driving N copies of `worker.php`.\n\nWhichever way you ship workers, they connect to Redis directly and never depend on the web tier being up."
    },
    {
      "id": "choose-a-visibility-timeout-that-matches-your-worst-case-job-latency",
      "title": "Choose a visibility timeout that matches your worst-case job latency",
      "role": "content",
      "text": "The visibility timeout has to exceed the longest real job time, with margin. If it's too short, a healthy worker that's running a slow job will get its work duplicated when the reclaimer fires. If it's too long, a real crash takes longer to detect. Most production deployments use a per-queue value tuned to the 99th-percentile job latency — for example, 2 minutes for email and 30 minutes for video transcoding."
    },
    {
      "id": "run-the-reclaimer-on-a-schedule",
      "title": "Run the reclaimer on a schedule",
      "role": "content",
      "text": "The demo only reclaims when you click the button. In production, run `$queue->reclaimStuck()` from a periodic task (every few seconds for fast queues, every minute for slow ones), or from each worker before it blocks on `claim()`. Both patterns work as long as *someone* runs the sweep. A small `php -r '... while (true) { $queue->reclaimStuck(); sleep(5); }'` loop run under systemd is enough for most deployments."
    },
    {
      "id": "use-a-separate-redis-database-or-key-prefix-per-queue",
      "title": "Use a separate Redis database or key prefix per queue",
      "role": "content",
      "text": "The helper takes a `$queueName` argument so you can run multiple independent queues against one Redis instance — for example, one queue per priority level, or one per job kind. Keep queue keys under a clearly namespaced prefix (here, `queue:jobs:*`) so they're easy to inspect and easy to clear without touching application data."
    },
    {
      "id": "cap-the-completed-and-failed-history",
      "title": "Cap the completed and failed history",
      "role": "content",
      "text": "The demo keeps the last 50 completed and 50 failed job IDs via `LTRIM`. If you need longer history for audit purposes, write completion events to a separate Redis Stream (or to an external store) and keep the in-queue history short. Stream consumer groups give you the same fan-out semantics with a much richer history."
    },
    {
      "id": "tune-maxattempts-per-job-kind",
      "title": "Tune `maxAttempts` per job kind",
      "role": "content",
      "text": "A blanket `maxAttempts = 3` is a reasonable default for transient failures (network timeouts, rate limits). Jobs that talk to non-idempotent external systems — for example, posting a Stripe charge — need either application-level idempotency keys or a much lower retry count. The helper exposes `maxAttempts` so each queue can pick its own policy."
    },
    {
      "id": "use-a-persistent-predis-connection-per-worker",
      "title": "Use a persistent Predis connection per worker",
      "role": "content",
      "text": "Predis opens a new TCP connection on first use and reuses it for the life of the `Client` object. Workers are long-lived, so this is already what you want. Don't construct a fresh `Predis\\Client` inside the `run()` loop — let the worker own a single connection and reuse it across thousands of `claim()` calls."
    },
    {
      "id": "inspect-queue-state-directly-in-redis",
      "title": "Inspect queue state directly in Redis",
      "role": "content",
      "text": "Because the queue is just lists and hashes, you can inspect it with `redis-cli`:\n\n[code example]"
    },
    {
      "id": "learn-more",
      "title": "Learn more",
      "role": "related",
      "text": "This example uses the following Redis commands:\n\n* [`LPUSH`](https://redis.io/docs/latest/commands/lpush) to enqueue a job ID.\n* [`BRPOPLPUSH`](https://redis.io/docs/latest/commands/brpoplpush) to atomically claim a job into the processing list.\n* [`LREM`](https://redis.io/docs/latest/commands/lrem) to remove a job from the processing list on complete or fail.\n* [`LRANGE`](https://redis.io/docs/latest/commands/lrange) and [`LLEN`](https://redis.io/docs/latest/commands/llen) to read queue depth and list contents.\n* [`LTRIM`](https://redis.io/docs/latest/commands/ltrim) to cap the completed and failed history.\n* [`HSET`](https://redis.io/docs/latest/commands/hset) and [`HGETALL`](https://redis.io/docs/latest/commands/hgetall) for job metadata.\n* [`HINCRBY`](https://redis.io/docs/latest/commands/hincrby) for the attempt counter and the cross-request stats counters.\n* [`EXPIRE`](https://redis.io/docs/latest/commands/expire) for automatic cleanup of completed and failed jobs.\n* [`PUBLISH`](https://redis.io/docs/latest/commands/publish) for job-completion notifications.\n* [`EVAL`](https://redis.io/docs/latest/commands/eval) for atomic complete, fail, and reclaim flows.\n\nSee the [Predis README](https://github.com/predis/predis) for full client reference."
    }
  ],
  "examples": [
    {
      "id": "the-job-queue-helper-ex0",
      "language": "php",
      "code": "require __DIR__ . '/vendor/autoload.php';\nrequire __DIR__ . '/JobQueue.php';\n\nuse Predis\\Client as PredisClient;\n\n$redis = new PredisClient(['host' => '127.0.0.1', 'port' => 6379]);\n$queue = new JobQueue($redis, 'jobs', 5000);\n\n$jobId = $queue->enqueue(['kind' => 'email', 'recipient' => 'alice@example.com']);\n\n// In a worker process:\n$job = $queue->claim(1000);\nif ($job !== null) {\n    try {\n        // ... run the job ...\n        $queue->complete($job, ['sent_at' => date('c')]);\n    } catch (\\Throwable $exc) {\n        $queue->fail($job, $exc->getMessage());\n    }\n}\n\n// In a periodic sweeper:\n$reclaimed = $queue->reclaimStuck();",
      "section_id": "the-job-queue-helper"
    },
    {
      "id": "data-model-ex0",
      "language": "text",
      "code": "queue:jobs:pending          (list)   pending job IDs, oldest at the right\nqueue:jobs:processing       (list)   claimed but not yet completed\nqueue:jobs:completed        (list)   recent successes (LTRIM-capped history)\nqueue:jobs:failed           (list)   terminally failed jobs\nqueue:jobs:job:{id}         (hash)   per-job metadata\nqueue:jobs:events           (pubsub) completion notifications",
      "section_id": "data-model"
    },
    {
      "id": "data-model-ex1",
      "language": "text",
      "code": "queue:jobs:job:9a4f...\n  id              = 9a4f...\n  payload         = {\"kind\":\"email\",\"recipient\":\"alice@example.com\"}\n  status          = pending | processing | completed | failed\n  attempts        = 1\n  enqueued_at_ms  = 1715441000000\n  claimed_at_ms   = 1715441000123\n  claim_token     = b3c0d1e2...        (per-claim random token)\n  completed_at_ms = 1715441000456\n  result          = {\"sent_at\":\"...\"}\n  last_error      = \"smtp timeout\"",
      "section_id": "data-model"
    },
    {
      "id": "enqueueing-jobs-ex0",
      "language": "php",
      "code": "public function enqueue(array $payload): string\n{\n    $jobId = bin2hex(random_bytes(8));\n    $nowMs = (int) round(microtime(true) * 1000);\n    $meta = [\n        'id' => $jobId,\n        'payload' => json_encode($payload),\n        'status' => 'pending',\n        'attempts' => '0',\n        'enqueued_at_ms' => (string) $nowMs,\n        'claim_token' => '',\n    ];\n\n    $pipe = $this->redis->pipeline();\n    $pipe->hset($this->metaKey($jobId), ...self::flattenFields($meta));\n    $pipe->lpush($this->pendingKey, [$jobId]);\n    $pipe->execute();\n\n    $this->redis->hincrby($this->statsKey, 'enqueued_total', 1);\n    return $jobId;\n}",
      "section_id": "enqueueing-jobs"
    },
    {
      "id": "claiming-jobs-with-brpoplpush-ex0",
      "language": "php",
      "code": "public function claim(int $timeoutMs = 1000): ?ClaimedJob\n{\n    $timeoutSec = max(1, (int) ceil($timeoutMs / 1000));\n    $jobId = $this->redis->brpoplpush($this->pendingKey, $this->processingKey, $timeoutSec);\n    if ($jobId === null || $jobId === false || $jobId === '') {\n        return null;\n    }\n\n    $token = bin2hex(random_bytes(8));\n    $nowMs = (int) round(microtime(true) * 1000);\n    $metaKey = $this->metaKey($jobId);\n\n    $pipe = $this->redis->pipeline();\n    $pipe->hset($metaKey, ...self::flattenFields([\n        'status' => 'processing',\n        'claimed_at_ms' => (string) $nowMs,\n        'claim_token' => $token,\n    ]));\n    $pipe->hincrby($metaKey, 'attempts', 1);\n    $pipe->hgetall($metaKey);\n    [$_h, $_a, $meta] = $pipe->execute();\n\n    return new ClaimedJob(\n        (string) $jobId,\n        json_decode($meta['payload'] ?? '{}', true) ?: [],\n        (int) ($meta['attempts'] ?? 1),\n        $token\n    );\n}",
      "section_id": "claiming-jobs-with-brpoplpush"
    },
    {
      "id": "completing-jobs-ex0",
      "language": "php",
      "code": "public function complete(ClaimedJob $job, array $result): bool\n{\n    $ok = $this->redis->eval(\n        self::COMPLETE_SCRIPT,\n        3,\n        $this->metaPrefix,\n        $this->processingKey,\n        $this->completedKey,\n        $job->id,\n        $job->claimToken,\n        'completed',\n        (string) self::nowMs(),\n        json_encode($result),\n        (string) $this->completedTtl,\n        (string) $this->completedHistory\n    );\n    if (!$ok) {\n        return false;\n    }\n    $this->redis->publish($this->eventsChannel,\n        json_encode(['id' => $job->id, 'status' => 'completed']));\n    $this->redis->hincrby($this->statsKey, 'completed_total', 1);\n    return true;\n}",
      "section_id": "completing-jobs"
    },
    {
      "id": "failing-and-retrying-ex0",
      "language": "php",
      "code": "public function fail(ClaimedJob $job, string $error): bool\n{\n    $retry = $job->attempts < $this->maxAttempts;\n    $result = $this->redis->eval(\n        self::FAIL_SCRIPT,\n        4,\n        $this->metaPrefix,\n        $this->processingKey,\n        $this->pendingKey,\n        $this->failedKey,\n        $job->id,\n        $job->claimToken,\n        $error,\n        (string) self::nowMs(),\n        (string) $this->completedTtl,\n        (string) $this->completedHistory,\n        $retry ? '1' : '0'\n    );\n    return (bool) $result;\n}",
      "section_id": "failing-and-retrying"
    },
    {
      "id": "reclaiming-stuck-jobs-ex0",
      "language": "php",
      "code": "public function reclaimStuck(): array\n{\n    $reclaimed = $this->redis->eval(\n        self::RECLAIM_SCRIPT,\n        3,\n        $this->pendingKey,\n        $this->processingKey,\n        $this->metaPrefix,\n        (string) self::nowMs(),\n        (string) $this->visibilityMs\n    );\n    return is_array($reclaimed) ? array_values(array_map('strval', $reclaimed)) : [];\n}",
      "section_id": "reclaiming-stuck-jobs"
    },
    {
      "id": "stats-and-history-ex0",
      "language": "php",
      "code": "public function stats(): array\n{\n    $pipe = $this->redis->pipeline();\n    $pipe->llen($this->pendingKey);\n    $pipe->llen($this->processingKey);\n    $pipe->llen($this->completedKey);\n    $pipe->llen($this->failedKey);\n    $pipe->hgetall($this->statsKey);\n    [$pending, $processing, $completed, $failed, $statsHash] = $pipe->execute();\n\n    return [\n        'enqueued_total'   => (int) ($statsHash['enqueued_total']   ?? 0),\n        'completed_total'  => (int) ($statsHash['completed_total']  ?? 0),\n        'failed_total'     => (int) ($statsHash['failed_total']     ?? 0),\n        'reclaimed_total'  => (int) ($statsHash['reclaimed_total']  ?? 0),\n        'pending_depth'    => (int) $pending,\n        'processing_depth' => (int) $processing,\n        'completed_depth'  => (int) $completed,\n        'failed_depth'     => (int) $failed,\n        'visibility_ms'    => $this->visibilityMs,\n    ];\n}",
      "section_id": "stats-and-history"
    },
    {
      "id": "prerequisites-ex0",
      "language": "bash",
      "code": "composer require \"predis/predis:^3.0\"",
      "section_id": "prerequisites"
    },
    {
      "id": "get-the-source-files-ex0",
      "language": "bash",
      "code": "mkdir job-queue-demo && cd job-queue-demo\nBASE=https://raw.githubusercontent.com/redis/docs/main/content/develop/use-cases/job-queue/php\ncurl -O $BASE/JobQueue.php\ncurl -O $BASE/JobWorker.php\ncurl -O $BASE/WorkerSupervisor.php\ncurl -O $BASE/demo_server.php\ncurl -O $BASE/worker.php\ncurl -O $BASE/composer.json",
      "section_id": "get-the-source-files"
    },
    {
      "id": "get-the-source-files-ex1",
      "language": "bash",
      "code": "composer install",
      "section_id": "get-the-source-files"
    },
    {
      "id": "start-the-demo-server-ex0",
      "language": "bash",
      "code": "php -S 127.0.0.1:8796 demo_server.php",
      "section_id": "start-the-demo-server"
    },
    {
      "id": "start-the-demo-server-ex1",
      "language": "text",
      "code": "[...] PHP 8.4.6 Development Server (http://127.0.0.1:8796) started",
      "section_id": "start-the-demo-server"
    },
    {
      "id": "start-the-demo-server-ex2",
      "language": "bash",
      "code": "REDIS_HOST=redis.local REDIS_PORT=6380 VISIBILITY_MS=10000 \\\n    php -S 127.0.0.1:8796 demo_server.php",
      "section_id": "start-the-demo-server"
    },
    {
      "id": "the-worker-process-and-supervisor-ex0",
      "language": "bash",
      "code": "php worker.php --name worker-1 --work-latency-ms 200 --fail-rate 0 --hang-rate 0",
      "section_id": "the-worker-process-and-supervisor"
    },
    {
      "id": "inspect-queue-state-directly-in-redis-ex0",
      "language": "bash",
      "code": "# How many pending jobs?\nredis-cli LLEN queue:jobs:pending\n\n# Look at the next 5 jobs to be picked up.\nredis-cli LRANGE queue:jobs:pending -5 -1\n\n# Read a job's metadata.\nredis-cli HGETALL queue:jobs:job:9a4f0d1c\n\n# How many jobs are currently being processed?\nredis-cli LLEN queue:jobs:processing\n\n# Read the demo counters (PHP-only — these live in Redis because each\n# HTTP request is its own process).\nredis-cli HGETALL demo:queue_stats:jobs\n\n# Clear everything for this queue (be careful — this deletes work).\nredis-cli --scan --pattern 'queue:jobs:*' | xargs redis-cli DEL",
      "section_id": "inspect-queue-state-directly-in-redis"
    }
  ]
}
