{
  "id": "node-selection",
  "title": "Control node selection",
  "url": "https://redis.io/docs/latest/operate/kubernetes/7.8.4/recommendations/node-selection/",
  "summary": "This section provides information about how Redis Enterprise cluster pods can be scheduled to only be placed on specific nodes or node pools.",
"content": "\nMany Kubernetes cluster deployments have different kinds of nodes that have\ndifferent CPU and memory resources available for scheduling cluster workloads.\nRedis Enterprise for Kubernetes provides several ways to control the scheduling of\nRedis Enterprise cluster node pods through properties specified in the\nRedis Enterprise cluster custom resource definition (CRD).\n\nA Redis Enterprise cluster (REC) is deployed as a StatefulSet, which manages the Redis Enterprise cluster node pods.\nThe scheduler chooses a node for each new Redis Enterprise cluster node pod when:\n\n- The cluster is created\n- The cluster is resized\n- A pod fails\n\nYou can control pod scheduling in the following ways:\n\n## Using node selectors\n\nThe [`nodeSelector`]()\nproperty of the cluster specification uses the same values and structures as\nthe [Kubernetes `nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).\nIn general, node labels are a simple way to make sure that specific nodes are used for Redis Enterprise pods.\nFor example, if nodes 'n1' and 'n2' are labeled as \"high memory\":\n\n```sh\nkubectl label nodes n1 memory=high\nkubectl label nodes n2 memory=high\n```\n\nThe Redis Enterprise cluster custom resource can then request that its pods be scheduled on these nodes:\n\n```yaml\napiVersion: app.redislabs.com/v1\nkind: RedisEnterpriseCluster\nmetadata:\n  name: rec\nspec:\n  nodes: 3\n  nodeSelector:\n     memory: high\n```\n\nThen, when the operator creates the StatefulSet for the cluster, the `nodeSelector`\nsection is included in the pod specification.
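You can confirm that the selector was copied into the pod template by inspecting the generated StatefulSet (a quick check, assuming the StatefulSet shares the cluster name `rec` and is in the current namespace):\n\n```sh\nkubectl get statefulset rec -o jsonpath='{.spec.template.spec.nodeSelector}'\n```\n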
When the scheduler attempts to\ncreate new pods, it must satisfy the node selection constraints.\n\n\n## Using node pools\n\nA node pool is a common part of the underlying infrastructure of the Kubernetes cluster deployment and provider.\nOften, node pools are similarly configured classes of nodes, such as nodes with the same amount of allocated memory and CPU.\nImplementors often label these nodes with a consistent set of labels.\n\nOn Google Kubernetes Engine (GKE), all node pools have the label `cloud.google.com/gke-nodepool` with a value of the name used during configuration.\nOn Microsoft Azure Kubernetes Service (AKS), you can create node pools with a specific set of labels. Other managed cluster services may have similar labeling schemes.\n\nYou can use the `nodeSelector` section to request a specific node pool by label values. For example, on GKE:\n\n```yaml\napiVersion: app.redislabs.com/v1\nkind: RedisEnterpriseCluster\nmetadata:\n  name: rec\nspec:\n  nodes: 3\n  nodeSelector:\n     cloud.google.com/gke-nodepool: 'high-memory'\n```\n\n## Using node taints\n\nYou can use multiple node taints with a set of tolerations to control Redis Enterprise cluster node pod scheduling.\nThe `podTolerations` property of the cluster specification holds the list of pod tolerations to apply.\nThe value is a list of [Kubernetes tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#concepts).\n\nFor example, if the cluster has a single node pool, node taints can control the allowed workloads for a node.\nYou can add taints to nodes n1, n2, and n3 to reserve them for the Redis Enterprise cluster:\n\n```sh\nkubectl taint nodes n1 db=rec:NoSchedule\nkubectl taint nodes n2 db=rec:NoSchedule\nkubectl taint nodes n3 db=rec:NoSchedule\n```\n\nThis prevents any pods from being scheduled onto these nodes unless the pods can tolerate the taint `db=rec`.\n\nYou can then add the toleration for this taint to the cluster
specification:\n\n```yaml\napiVersion: app.redislabs.com/v1\nkind: RedisEnterpriseCluster\nmetadata:\n  name: rec\nspec:\n  nodes: 3\n  podTolerations:\n  - key: db\n    operator: Equal\n    value: rec\n    effect: NoSchedule\n```\n\nA set of taints can also handle more complex use cases.\nFor example, a `role=test` or `role=dev` taint can be used to designate a node as dedicated for testing or development workloads via pod tolerations.\n\n## Using pod anti-affinity\n\nBy default, node pods from the same Redis Enterprise cluster are not allowed to be placed on the same node:\n\n```yaml\npodAntiAffinity:\n  requiredDuringSchedulingIgnoredDuringExecution:\n  - labelSelector:\n      matchLabels:\n        app: redis-enterprise\n        redis.io/cluster: rec\n        redis.io/role: node\n    topologyKey: kubernetes.io/hostname\n```\n\nEach pod has the three labels above, where the value of `redis.io/cluster` is the name of your cluster.\n\nYou can change this rule to restrict or expand the set of nodes that the Redis Enterprise cluster node pods can run on.\nFor example, you can delete the `redis.io/cluster` label so that even Redis Enterprise node pods from different clusters cannot be scheduled on the same Kubernetes node:\n\n```yaml\napiVersion: app.redislabs.com/v1\nkind: RedisEnterpriseCluster\nmetadata:\n  name: rec\nspec:\n  nodes: 3\n  podAntiAffinity:\n    requiredDuringSchedulingIgnoredDuringExecution:\n    - labelSelector:\n        matchLabels:\n          app: redis-enterprise\n          redis.io/role: node\n      topologyKey: kubernetes.io/hostname\n```\n\nOr you can prevent Redis Enterprise node pods from being scheduled alongside other workloads.\nFor example, if all database workloads have the label `local/role: database`, you\ncan use this label to avoid scheduling two databases on the same node:\n\n```yaml\napiVersion: app.redislabs.com/v1\nkind: RedisEnterpriseCluster\nmetadata:\n  name: rec\nspec:\n  nodes: 3\n  extraLabels:\n     local/role: database\n  podAntiAffinity:\n    requiredDuringSchedulingIgnoredDuringExecution:\n    - labelSelector:\n        matchLabels:\n          local/role: database\n          app: redis-enterprise\n          redis.io/cluster: rec\n          redis.io/role: node\n      topologyKey: kubernetes.io/hostname\n```\n\nIn this case, any pods that are deployed with the label `local/role: database` cannot be scheduled on the same node.\n\n\n## Using rack awareness\n\nYou can configure Redis Enterprise with rack-zone awareness to increase availability\nduring partitions or other rack (or region) related failures.\n\nWhen creating your rack-zone ID, there are some constraints to consider; see [rack-zone awareness]() for more information.\n\n\nRack-zone awareness is configured with a single property in the Redis Enterprise cluster CRD named `rackAwarenessNodeLabel`.\nThe value for this property is commonly the node label `topology.kubernetes.io/zone`, as documented in\n['Running in multiple zones'](https://kubernetes.io/docs/setup/best-practices/multiple-zones/#nodes-are-labeled).\n\nYou can check the value of this label on your nodes with the command:\n\n```sh\nkubectl get nodes -o custom-columns=\"name:metadata.name\",\"zone:metadata.labels.topology\\.kubernetes\\.io/zone\"\n\nname                                            zone\nip-10-0-x-a.eu-central-1.compute.internal    eu-central-1a\nip-10-0-x-b.eu-central-1.compute.internal    eu-central-1a\nip-10-0-x-c.eu-central-1.compute.internal    eu-central-1b\nip-10-0-x-d.eu-central-1.compute.internal    eu-central-1b\n```\n\n### Enabling the cluster role\n\nFor the operator to read the cluster node information, you must create a cluster role for the operator and then bind the role to the service account.\n\nHere's a cluster role:\n\n```yaml\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: redis-enterprise-operator\nrules:\n  # needed for rack awareness\n  - apiGroups: [\"\"]\n    resources: [\"nodes\"]\n    verbs: [\"list\",
\"get\", \"watch\"]\n```\n\nAnd here's how to apply the role:\n\n```sh\nkubectl apply -f https://raw.githubusercontent.com/RedisLabs/redis-enterprise-k8s-docs/master/rack_awareness/rack_aware_cluster_role.yaml\n```\n\nThe binding is typically to the `redis-enterprise-operator` service account, where `OPERATOR_NAMESPACE` is the namespace the operator is deployed in:\n\n```yaml\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: redis-enterprise-operator\nsubjects:\n- kind: ServiceAccount\n  namespace: OPERATOR_NAMESPACE\n  name: redis-enterprise-operator\nroleRef:\n  kind: ClusterRole\n  name: redis-enterprise-operator\n  apiGroup: rbac.authorization.k8s.io\n```\n\nYou can apply it by running:\n\n```sh\nkubectl apply -f https://raw.githubusercontent.com/RedisLabs/redis-enterprise-k8s-docs/master/rack_awareness/rack_aware_cluster_role_binding.yaml\n```\n\nOnce the cluster role and the binding have been applied, you can configure Redis Enterprise clusters to use rack awareness labels.\n\n### Configuring rack awareness\n\nYou can configure which node label to read the rack zone from by setting the `rackAwarenessNodeLabel` property:\n\n```yaml\napiVersion: app.redislabs.com/v1\nkind: RedisEnterpriseCluster\nmetadata:\n  name: example-redisenterprisecluster\nspec:\n  nodes: 3\n  rackAwarenessNodeLabel: topology.kubernetes.io/zone\n```\n\n\nWhen you use the `rackAwarenessNodeLabel` property, the operator changes the `topologyKey` of the default anti-affinity rule to that label name, unless you have also specified the `podAntiAffinity` property. If you use `rackAwarenessNodeLabel` and `podAntiAffinity` together, you must make sure that the `topologyKey` in your pod anti-affinity rule is set to the node label name.\n\n",
  "tags": ["docs","operate","kubernetes"],
  "last_updated": "2026-04-08T12:21:52-07:00"
}

