Redis Enterprise for Kubernetes Release Notes 6.2.10-34 (May 2022)
Support for REDB-specific upgrade policies, memcached-type REDBs, and RHEL 8 for OpenShift, as well as feature improvements and bug fixes.
Overview
The Redis Enterprise K8s 6.2.10-34 release supports Redis Enterprise Software release 6.2.10 and includes feature improvements and bug fixes.
The key new features, bug fixes, and known limitations are described below.
Do not upgrade to this 6.2.10-34 release if you are an OpenShift customer and also use modules. Instead, use the 6.2.10-45 release.
There was a change in 6.2.10-34 to a new RHEL 8 base image for the Redis Server image. Due to binary differences in modules between the two operating systems, you cannot directly update RHEL 7 clusters to RHEL 8 when those clusters host databases using modules.
Please contact support if you have further questions.
Images
This release includes the following container images:
- Redis Enterprise: `redislabs/redis:6.2.10-107` or `redislabs/redis:6.2.10-107.rhel8-openshift`
- Operator: `redislabs/operator:6.2.10-34`
- Services Rigger: `redislabs/k8s-controller:6.2.10-34` or `redislabs/services-manager:6.2.10-34` (on the Red Hat registry)
New features
- Support database upgrade policy (major/latest) for REDB resources (RED-71028)
- Support for memcached type databases for REDB (RED-70284)(RED-75269)
- Use RHEL8 base images for OpenShift deployments (RED-72374)
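As an illustration, the new REDB capabilities might be declared as follows (a sketch, not a definitive manifest; `mydb` and `mycache` are placeholder names, and the `redisVersion` and `type` fields are assumed to follow the REDB CRD schema for this release):

```yaml
# Database upgrade policy (RED-71028): pin upgrades to "major" or "latest"
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: mydb              # placeholder name
spec:
  memorySize: 100MB
  redisVersion: "major"   # or "latest"
---
# memcached-type database (RED-70284)
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: mycache           # placeholder name
spec:
  memorySize: 100MB
  type: memcached
```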
Feature improvements
- OpenShift 4.10 support (RED-73966)
- Allow setting host time zone on running containers (RED-56810)
- AKS 1.23 support (RED-73965)
- EKS 1.22 support (RED-73972)
Fixed bugs
- Outdated SCC YAML file (RED-72026) (RED-73341)
- Admission container startup failure (RED-72081)
- Admission container restarts due to race condition with config map creation (RED-72268)
- Incorrect REDB status report during cluster recovery (RED-72944)
- Invalid REDB spec not always rejected by admission controller (RED-73145)
Compatibility notes
Below is a table showing supported distributions at the time of this release. See Supported Kubernetes distributions for the current list of supported distributions.
| Kubernetes version | 1.19 | 1.20 | 1.21 | 1.22 | 1.23 |
|---|---|---|---|---|---|
| Community Kubernetes | deprecated | deprecated | supported | supported | supported* |
| Amazon EKS | | supported | supported | supported | |
| Azure AKS | | | supported | supported | supported* |
| Google GKE | supported | supported | supported | supported | |
| Rancher 2.6 | | supported | supported | supported | |

| OpenShift version | 4.6 | 4.7 | 4.8 | 4.9 | 4.10 |
|---|---|---|---|---|---|
| | deprecated | deprecated | supported | supported | supported* |

| VMware TKGI version | 1.10 | 1.11 | 1.12 | 1.13 |
|---|---|---|---|---|
| | | supported | supported | supported* |
* Support added in most recent release
Now supported
- OpenShift 4.10 is now supported
- kOps (Community Kubernetes) 1.23 is now supported
- AKS 1.23 is now supported
- EKS 1.22 is now supported
Deprecated
- OpenShift 4.6-4.7 are deprecated
- kOps (Community Kubernetes) 1.18-1.20 are deprecated
- GKE 1.19 is deprecated
- Rancher 2.6 with K8s 1.18 is deprecated
- AKS 1.20-1.21 are deprecated
- EKS 1.18-1.19 are deprecated
No longer supported
- Rancher version 2.5 (previously deprecated) is no longer supported (not supported by SUSE)
- OpenShift version 3.11 (previously deprecated) is no longer supported.
Known limitations
- Large clusters (RED-93025)
  On clusters with more than 9 REC nodes, a Kubernetes upgrade can render the Redis cluster unresponsive in some cases. A fix is available in the 6.4.2-5 release. Upgrade your operator version to 6.4.2-5 or later before upgrading your Kubernetes cluster.
- Warning: OpenShift customers using modules cannot use version 6.2.10-34
  Do not upgrade to this 6.2.10-34 release if you are an OpenShift customer and also use modules. There was a change in 6.2.10-34 to a new RHEL 8 base image for the Redis Server image. Due to binary differences in modules between the two operating systems, you cannot directly update RHEL 7 clusters to RHEL 8 when those clusters host databases using modules. This message will be updated as remediation plans and new steps become available. Please contact support if you have further questions.
- Long cluster names cause routes to be rejected (RED-25871)
  A cluster name longer than 20 characters results in a rejected route configuration because the host part of the domain name exceeds 63 characters. The workaround is to limit cluster names to 20 characters or fewer.
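  A minimal sketch of a compliant REC manifest (the cluster name `prod-redis` and the node count are hypothetical; the only requirement here is keeping the name at or under 20 characters):

  ```yaml
  apiVersion: app.redislabs.com/v1alpha1
  kind: RedisEnterpriseCluster
  metadata:
    name: prod-redis   # 10 characters, within the 20-character limit
  spec:
    nodes: 3
  ```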
- Cluster CR (REC) errors are not reported after invalid updates (RED-25542)
  A cluster CR specification error is not reported if two or more invalid CR resources are updated in sequence.
- An unreachable cluster has status running (RED-32805)
  When a cluster is in an unreachable state, the state is still `running` instead of being reported as an error.
- Readiness probe incorrect on failures (RED-39300)
  STS Readiness probe does not mark a node as "not ready" when running `rladmin status` on node failure.
- Role missing on replica sets (RED-39002)
  The `redis-enterprise-operator` role is missing permission on replica sets.
- Internal DNS and Kubernetes DNS may have conflicts (RED-37462)
  DNS conflicts are possible between the cluster `mdns_server` and the K8s DNS. This only impacts DNS resolution from within cluster nodes for Kubernetes DNS names.
- 5.4.10 negatively impacts 5.4.6 (RED-37233)
  Kubernetes-based 5.4.10 deployments seem to negatively impact existing 5.4.6 deployments that share a Kubernetes cluster.
- Node CPU usage is reported instead of pod CPU usage (RED-36884)
  In Kubernetes, the reported node CPU usage is the usage of the Kubernetes worker node hosting the REC pod.
- Clusters must be named "rec" in OLM-based deployments (RED-39825)
  In OLM-deployed operators, the deployment of the cluster fails if the name is not "rec". When the operator is deployed via the OLM, the security context constraints (SCC) are bound to a specific service account name ("rec"). The workaround is to name the cluster "rec".
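  For example, an OLM-deployed cluster manifest must use this exact name (a sketch; fields other than the name are illustrative):

  ```yaml
  apiVersion: app.redislabs.com/v1alpha1
  kind: RedisEnterpriseCluster
  metadata:
    name: rec   # required name for OLM deployments; the SCC is bound to this service account name
  spec:
    nodes: 3
  ```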
- REC clusters fail to start on Kubernetes clusters with unsynchronized clocks (RED-47254)
  When REC clusters are deployed on Kubernetes clusters with unsynchronized clocks, the REC cluster does not start correctly. The fix is to use NTP to synchronize the underlying K8s nodes.
- Deleting an OpenShift project with an REC deployed may hang (RED-47192)
  When an REC cluster is deployed in a project (namespace) and has REDB resources, the REDB resources must be deleted before the REC can be deleted; until they are, the project deletion will hang. The fix is to delete the REDB resources first, then the REC, and finally the project.
- REC extraLabels are not applied to PVCs on K8s versions 1.15 or older (RED-51921)
  In K8s 1.15 or older, the PVC labels come from the match selectors, not the PVC templates, so these versions cannot support PVC labels. If this feature is required, the only fix is to upgrade the K8s cluster to a newer version.
- Hashicorp Vault integration - no support for Gesher (RED-55080)
  There is no workaround at this time.
- REC might report error states on initial startup (RED-61707)
  There is no workaround at this time except to ignore the errors.
- PVC size issues when using decimal value in spec (RED-62132)
  The workaround for this issue is to make sure you use integer values for the PVC size.
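  For example, request `5Gi` rather than `4.5Gi` (a sketch; the field path is assumed to follow the REC `persistentSpec`, and the values shown are illustrative):

  ```yaml
  spec:
    persistentSpec:
      enabled: true
      volumeSize: 5Gi   # use an integer value; a decimal such as 4.5Gi triggers this issue
  ```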
- Following an old revision of the quick start guide causes issues creating an REDB due to an unrecognized memory field name (RED-69515)
  The workaround is to use the newer (current) revision of the quick start document available online.
- `autoUpgrade` set to true by the operator might cause unexpected BDB upgrades when `redisUpgradePolicy` is set to true (RED-72351)
  Contact support if your deployment is impacted.
- Procedure to update credentials might be problematic on OpenShift when accessing the cluster externally using routes (RS issue) (RED-73251) (RED-75329)
  To work around this, access the API from within the K8s cluster.