{
  "id": "develop-for-aa",
  "title": "Develop applications with Active-Active databases",
  "url": "https://redis.io/docs/latest/operate/rs/7.8/databases/active-active/develop/develop-for-aa/",
  "summary": "Overview of how developing applications differs for Active-Active databases from standalone Redis databases.",
  "content": "Developing geo-distributed, multi-master applications can be difficult.\nApplication developers may have to understand a large number of race\nconditions between updates to various sites, network, and cluster\nfailures that could reorder the events and change the outcome of the\nupdates performed across geo-distributed writes.\n\nActive-Active databases (formerly known as CRDB) are geo-distributed databases that span multiple Redis Enterprise Software (RS) clusters.\nActive-Active databases depend on multi-master replication (MMR) and Conflict-free\nReplicated Data Types (CRDTs) to power a simple development experience\nfor geo-distributed applications. Active-Active databases allow developers to use existing\nRedis data types and commands, but understand the developers intent and\nautomatically handle conflicting concurrent writes to the same key\nacross multiple geographies. For example, developers can simply use the\nINCR or INCRBY method in Redis in all instances of the geo-distributed\napplication, and Active-Active databases handle the additive nature of INCR to reflect the\ncorrect final value. The following example displays a sequence of events\nover time : t1 to t9. This Active-Active database has two member Active-Active databases : member CRDB1 and\nmember CRDB2. The local operations executing in each member Active-Active database is\nlisted under the member Active-Active database name. The \"Sync\" even represent the moment\nwhere synchronization catches up to distribute all local member Active-Active database\nupdates to other participating clusters and other member Active-Active databases.\n\n|  **Time** | **Member CRDB1** | **Member CRDB2** |\n|  :------: | :------: | :------: |\n|  t1 | INCRBY key1 7 |  |\n|  t2 |  | INCRBY key1 3 |\n|  t3 | GET key1\u003cbr/\u003e7 | GET key1\u003cbr/\u003e3 |\n|  t4 | — Sync — | — Sync — |\n|  t5 | GET key1\u003cbr/\u003e10 | GET key1\u003cbr/\u003e10 |\n|  t6 | DECRBY key1 3 |  |\n|  t7 |  | INCRBY key1 6 |\n|  t8 | — Sync — | — Sync — |\n|  t9 | GET key1\u003cbr/\u003e13 | GET key1\u003cbr/\u003e13 |\n\nDatabases provide various approaches to address some of these concerns:\n\n- Active-Passive Geo-distributed deployments: With active-passive\n    distributions, all writes go to an active cluster. Redis Enterprise\n    provides a \"Replica Of\" capability that provides a similar approach.\n    This can be employed when the workload is heavily balanced towards\n    read and few writes. However, WAN performance and availability\n    is quite flaky and traveling large distances for writes take away\n    from application performance and availability.\n- Two-phase Commit (2PC): This approach is designed around a protocol\n    that commits a transaction across multiple transaction managers.\n    Two-phase commit provides a consistent transactional write across\n    regions but fails transactions unless all participating transaction\n    managers are \"available\" at the time of the transaction. The number\n    of messages exchanged and its cross-regional availability\n    requirement make two-phase commit unsuitable for even moderate\n    throughputs and cross-geo writes that go over WANs.\n- Sync update with Quorum-based writes: This approach synchronously\n    coordinates a write across majority number of replicas across\n    clusters spanning multiple regions. 
\nDatabases take various approaches to address some of these concerns:\n\n- Active-Passive geo-distributed deployments: With active-passive\n    distributions, all writes go to an active cluster. Redis Enterprise\n    offers a \"Replica Of\" capability that implements this approach.\n    It can be employed when the workload is heavily balanced towards\n    reads with few writes. However, WAN performance and availability\n    are often unreliable, and traveling large distances for writes takes away\n    from application performance and availability.\n- Two-phase commit (2PC): This approach is designed around a protocol\n    that commits a transaction across multiple transaction managers.\n    Two-phase commit provides a consistent transactional write across\n    regions but fails transactions unless all participating transaction\n    managers are \"available\" at the time of the transaction. The number\n    of messages exchanged and the cross-regional availability\n    requirement make two-phase commit unsuitable for even moderate\n    throughput and for cross-geo writes that go over WANs.\n- Sync update with quorum-based writes: This approach synchronously\n    coordinates a write across a majority of replicas across\n    clusters spanning multiple regions. However, just like two-phase\n    commit, the number of messages exchanged and the cross-regional\n    availability requirement make geo-distributed quorum writes\n    unsuitable for moderate throughput and for cross-geo writes that go\n    over WANs.\n- Last-writer-wins (LWW) conflict resolution: Some systems provide\n    simplistic conflict resolution for all types of writes, using the\n    system clocks to determine the winner across conflicting\n    writes. LWW is lightweight and can be suitable for simpler data.\n    However, LWW can be destructive to updates that are not necessarily\n    conflicting. For example, adding a new element to a set across two\n    geographies concurrently would result in only one of these new\n    elements appearing in the final result with LWW.\n- MVCC (multi-version concurrency control): MVCC systems maintain\n    multiple versions of data and may expose ways for applications to\n    resolve conflicts. Even though MVCC systems can provide a flexible\n    way to resolve conflicting writes, it comes at the cost of great\n    complexity in the development of a solution.\n\nEven though types and commands in Active-Active databases look identical to standard Redis\ntypes and commands, the underlying types in RS are enhanced to maintain\nmore metadata to create the conflict-free data type experience. This\nsection explains what you need to know about developing with Active-Active databases on\nRedis Enterprise Software.\n
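\nAs a concrete contrast with the LWW set example above, concurrent additions of different elements to a set in an Active-Active database are merged rather than discarded, so both elements survive synchronization. A minimal sketch, again assuming the hypothetical endpoints `cluster1:12000` and `cluster2:12000`:\n\n```sh\n# Concurrently add different elements to the same set on two members\nredis-cli -h cluster1 -p 12000 SADD colors red\nredis-cli -h cluster2 -p 12000 SADD colors blue\n# After sync, both members report the union of the additions: red and blue\nredis-cli -h cluster1 -p 12000 SMEMBERS colors\nredis-cli -h cluster2 -p 12000 SMEMBERS colors\n```\n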
\n## Lua scripts\n\nActive-Active databases support Lua scripts, but unlike standard Redis, Lua scripts always\nexecute in effects replication mode. There is currently no way to\nexecute them in script-replication mode.\n\n## Eviction\n\nThe default policy for Active-Active databases is _noeviction_ mode. Redis Enterprise version 6.0.20 and later support all eviction policies for Active-Active databases, unless [Auto Tiering]() (previously known as Redis on Flash) is enabled.\nFor details, see [eviction for Active-Active databases]().\n\n## Expiration\n\nExpiration is supported with special multi-master semantics.\n\nIf a key's expiration time is changed at the same time on different\nmembers of the Active-Active database, the longer expiration time set on the key is\npreserved, and removing the expiration entirely wins over any expiration time. As an example:\n\nIf this command was performed on key1 on cluster #1\n\n```sh\n127.0.0.1:6379\u003e EXPIRE key1 10\n```\n\nAnd if this command was performed on key1 on cluster #2\n\n```sh\n127.0.0.1:6379\u003e EXPIRE key1 50\n```\n\nThe EXPIRE command setting the expiration to 50 seconds would win.\n\nAnd if this command was performed on key1 on cluster #3:\n\n```sh\n127.0.0.1:6379\u003e PERSIST key1\n```\n\nIt would win out of the three clusters hosting the Active-Active database because it\nremoves the expiration from key1 entirely.\n\nThe replica responsible for the \"winning\" expire value is also\nresponsible for expiring the key and propagating a DEL effect when this\nhappens. A \"losing\" replica is, from this point on, not responsible\nfor expiring the key, unless another EXPIRE command resets the TTL.\nFurthermore, a replica that is NOT the \"owner\" of the expired value:\n\n- Silently ignores the key if a user attempts to access it in READ\n    mode, i.e. treats it as if it were expired but does not propagate a\n    DEL.\n- Expires it (sending a DEL) before making any modifications if a user\n    attempts to access it in WRITE mode.\n\nExpiration values are in the range of [0,\u0026nbsp;2^49] for Active-Active databases and [0,\u0026nbsp;2^64] for non-Active-Active databases.\n
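\nTo observe the expiration merge described above, you can compare TTLs before and after synchronization. A minimal sketch, assuming the hypothetical endpoints `cluster1:12000` and `cluster2:12000` and that key1 has already replicated to both members:\n\n```sh\n# Create the key, then set competing expiration times concurrently\nredis-cli -h cluster1 -p 12000 SET key1 value\nredis-cli -h cluster1 -p 12000 EXPIRE key1 10\nredis-cli -h cluster2 -p 12000 EXPIRE key1 50\n# After sync, both members report the longer remaining TTL (close to 50)\nredis-cli -h cluster1 -p 12000 TTL key1\nredis-cli -h cluster2 -p 12000 TTL key1\n# If a third member ran PERSIST key1 instead, TTL would report -1\n# (no expiration) on all members after sync\n```\n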
\n## Out-of-Memory (OOM) {#outofmemory-oom}\n\nIf a member Active-Active database is in an out-of-memory situation, that member is marked\n\"inconsistent\" by RS, the member stops responding to user traffic, and\nthe syncer initiates full reconciliation with other peers in the Active-Active database.\n\n## Active-Active database key counts\n\nKeys are counted differently for Active-Active databases:\n\n- DBSIZE (in `shard-cli dbsize`) reports key header instances\n    that represent multiple potential values of a key before a replication conflict is resolved.\n- expired_keys (in `bdb-cli info`) can be higher than the key count reported by DBSIZE (in `shard-cli dbsize`)\n    because expires are not always removed when a key becomes a tombstone.\n    A tombstone is a key that is logically deleted but still takes memory\n    until it is collected by the garbage collector.\n- The expires average TTL (in `bdb-cli info`) is computed for local expires only.\n\n## INFO\n\nThe INFO command has an additional crdt section that provides advanced\ntroubleshooting information (applicable to support, etc.):\n\n|  **Section** | **Field** | **Description** |\n|  ------ | ------ | ------ |\n|  **CRDT Context** | crdt_config_version | Currently active Active-Active database configuration version. |\n|   | crdt_slots | Hash slots assigned and reported by this shard. |\n|   | crdt_replid | Unique Replica/Shard IDs. |\n|   | crdt_clock | Clock value of local vector clock. |\n|   | crdt_ovc | Locally observed Active-Active database vector clock. |\n|  **Peers** | A list of currently connected Peer Replication peers. This is similar to the slaves list reported by Redis. |  |\n|  **Backlogs** | A list of Peer Replication backlogs currently maintained. Typically in a full mesh topology only a single backlog is used for all peers, as the requested IDs are identical. |  |\n|  **CRDT Stats** | crdt_sync_full | Number of inbound full synchronization processes performed. |\n|   | crdt_sync_partial_ok | Number of partial (backlog based) re-synchronization processes performed. |\n|   | crdt_sync_partial_err | Number of partial re-synchronization processes that failed due to an exhausted backlog. |\n|   | crdt_merge_reqs | Number of inbound merge requests processed. |\n|   | crdt_effect_reqs | Number of inbound effect requests processed. |\n|   | crdt_ovc_filtered_effect_reqs | Number of inbound effect requests filtered due to an old vector clock. |\n|   | crdt_gc_pending | Number of elements pending garbage collection. |\n|   | crdt_gc_attempted | Number of attempts to garbage collect tombstones. |\n|   | crdt_gc_collected | Number of tombstones garbage collected successfully. |\n|   | crdt_gc_gvc_min | The minimal globally observed vector clock, as computed locally from all received observed clocks. |\n|   | crdt_stale_released_with_merge | Indicates the last stale flag transition was a result of a complete full sync. |\n|  **CRDT Replicas** | A list of crdt_replica \\\u003cuid\u003e entries, each of which describes the known state of a remote instance with the following fields: |  |\n|   | config_version | Last configuration version reported. |\n|   | shards | Number of shards. |\n|   | slots | Total number of hash slots. |\n|   | slot_coverage | A flag indicating whether remote shards provide full coverage (i.e. all shards are alive). |\n|   | max_ops_lag | Number of local operations not yet observed by the least updated remote shard. |\n|   | min_ops_lag | Number of local operations not yet observed by the most updated remote shard. |\n",
  "tags": ["docs","operate","rs","rc"],
  "last_updated": "2026-04-01T08:10:08-05:00"
}