Support for Redis Enterprise Software 6.0.20-69, OpenShift 4.7, and Kubernetes 1.20. HashiCorp Vault integration for REC and REDB secrets.
The Redis Enterprise K8s 6.0.20-4 release is a major release on top of 6.0.8-20. It provides support for Redis Enterprise Software release 6.0.20-69 and includes several enhancements and bug fixes.
This release of the operator provides:

- Support for Redis Enterprise Software 6.0.20-69
- Support for OpenShift 4.7 and Kubernetes 1.20
- HashiCorp Vault integration for REC and REDB secrets
To upgrade your deployment to this latest release, see "Upgrade a Redis Enterprise cluster (REC) on Kubernetes".
This release includes the following container images:

- Redis Enterprise: `redislabs/redis:6.0.20-69`
- Operator: `redislabs/operator:6.0.20-4`
- Services Rigger: `redislabs/k8s-controller:6.0.20-4`
The 6.0.20-4 release does not appear in OLM (OperatorHub). Workaround: deploy the operator manually, as shown below. A future maintenance release will address this.
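As a sketch, a manual (non-OLM) deployment can use the operator bundle from the RedisLabs/redis-enterprise-k8s-docs repository; the `rec-ns` namespace below is an example value:

```sh
# Deploy the operator manually instead of through OLM
kubectl create namespace rec-ns
kubectl apply -n rec-ns -f https://raw.githubusercontent.com/RedisLabs/redis-enterprise-k8s-docs/master/bundle.yaml

# Verify the operator deployment is running
kubectl get deployment redis-enterprise-operator -n rec-ns
```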
In some cases, when the Redis Enterprise Cluster container in a Redis Enterprise Cluster (REC) pod is restarted, the REC node remains down. Workaround: restart the pod, while ensuring that a majority of REC nodes remain available.
When a pod's status is CrashLoopBackOff and cluster recovery is run, the process does not complete. Workaround: delete the crashing pods manually; the recovery process then continues.
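For example (the namespace and pod name are placeholders):

```sh
# Look for CrashLoopBackOff in the STATUS column
kubectl get pods -n rec-ns

# Delete each crashing pod manually so recovery can proceed
kubectl delete pod <crashing-pod-name> -n rec-ns
```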
A cluster name longer than 20 characters results in a rejected route configuration, because the host part of the domain name exceeds 63 characters. Workaround: limit cluster names to 20 characters or fewer.
A cluster CR specification error is not reported if two or more invalid CR resources are updated in sequence.
When a cluster is in an unreachable state, its reported state remains "running" instead of being reported as an error.
The StatefulSet readiness probe does not mark a node as "not ready" when running `rladmin status` on that node fails.
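As a workaround check, you can run `rladmin status` inside a REC pod yourself; the pod name below is illustrative, and `redis-enterprise-node` is the REC container name:

```sh
# Manually check cluster node status, since the readiness probe
# does not surface rladmin failures
kubectl exec -it rec-0 -c redis-enterprise-node -- rladmin status
```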
The redis-enterprise-operator role is missing permission on replica sets.
OpenShift 3.11 does not support DockerHub private registries. This is a known OpenShift issue.
DNS conflicts are possible between the cluster mdns_server and the K8s DNS. This only impacts DNS resolution from within cluster nodes for Kubernetes DNS names.
Kubernetes-based 5.4.10 deployments can negatively impact existing 5.4.6 deployments that share the same Kubernetes cluster.
In Kubernetes, the reported node CPU usage is the usage of the Kubernetes worker node hosting the REC pod, not that of the pod itself.
In OLM-deployed operators, cluster deployment fails if the cluster is not named "rec". When the operator is deployed via the OLM, the security context constraint (SCC) is bound to a specific service account name ("rec"). Workaround: name the cluster "rec", as in the sketch below.
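A minimal REC sketch that satisfies both naming constraints above (the namespace and node count are example values):

```sh
kubectl apply -f - <<EOF
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec          # must be "rec" under OLM; keep names to 20 characters or fewer
  namespace: rec-ns
spec:
  nodes: 3
EOF
```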
The master pod is not always labeled in Rancher.
When REC clusters are deployed on Kubernetes clusters with unsynchronized clocks, the REC cluster does not start correctly. The fix is to use NTP to synchronize the clocks of the underlying K8s worker nodes.
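For example, on systemd-based worker nodes you can verify synchronization with:

```sh
# Verify the clock is synchronized on each Kubernetes worker node
# (assumes systemd-based nodes with timedatectl available)
timedatectl status | grep "System clock synchronized"
```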
When a REC cluster is deployed in a project (namespace) that contains REDB resources, the REDB resources must be deleted before the REC can be deleted; until they are, project deletion hangs. The fix is to delete the REDB resources first, then the REC, and only then delete the project, as shown below.
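For example, assuming a project named `my-project` and a cluster named `my-rec` (both placeholders; `redb` and `rec` are the CRD short names):

```sh
# Delete the databases first, then the cluster, then the project/namespace
kubectl delete redb --all -n my-project
kubectl delete rec my-rec -n my-project
kubectl delete namespace my-project
```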
In K8s 1.15 or older, PVC labels come from the match selectors, not the PVC templates, so these versions cannot support PVC labels. If this feature is required, the only fix is to upgrade the K8s cluster to a newer version.