I've created a deployment template using Helm (v3.14.3) with support for setting initContainers. Recently I noticed that an initContainer I removed from values.yaml is still present in the cluster. I've tried various fixes, but I can't force Helm to remove it.
The way I deploy the chart is:
helm template site-wordpress ./web-chart \
-f ./values-prod.yaml \
--set image.tag=prod-61bdfc674d25c376f753849555ab74ce0b01a0dea617a185f8a7a5e33689445e
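For reference, the initContainers entries in values-prod.yaml follow the shape the template below consumes; roughly like this (the name and command here are placeholders, not the real values):
initContainers:
  - name: wait-for-db
    command: ["sh", "-c", "until nc -z db 3306; do sleep 2; done"]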
Can someone advise me on the issue? Here's the template:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "app.fullname" . }}
  labels:
    {{- include "app.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  {{- with .Values.strategy }}
  strategy:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "app.selectorLabels" . | nindent 6 }}
      app.kubernetes.io/component: app
  template:
    metadata:
      annotations:
        # This will change whenever initContainers config changes (including when removed)
        checksum/initcontainers: {{ .Values.initContainers | default list | toYaml | sha256sum }}
        {{- with .Values.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      labels:
        {{- include "app.labels" . | nindent 8 }}
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
        app.kubernetes.io/component: app
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "app.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- if .Values.initContainers }}
      initContainers:
        {{- range .Values.initContainers }}
        - name: {{ .name }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          {{- if .command }}
          command: {{ toJson .command }}
          {{- end }}
          {{- if .args }}
          args: {{ toJson .args }}
          {{- end }}
        {{- end }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- if .Values.command }}
          command: {{ toJson .Values.command }}
          {{- end }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
1 Answer
As others commented, the way this chart works is very generic, and allowing arbitrary initContainers from the values might not be the best idea.
That said, the issue you are seeing might be due to the new rollout failing, so the Pod never actually gets replaced, which would explain why you still see the initContainer. Can you confirm that when you deploy the Helm chart without any initContainers, it actually replaces the existing Deployment and Pod? You can check the kubectl events to verify there isn't any error, and also describe the Deployment:
kubectl get events --sort-by=.metadata.creationTimestamp
Replace <deployment_name> below:
kubectl describe deployment <deployment_name>
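You can also watch the rollout directly; if it never completes, the old Pod (including the old initContainer) keeps running:
kubectl rollout status deployment/<deployment_name>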
Also confirm that the initContainer you see is not coming from somewhere else (for example, some tools such as Istio inject initContainers into every Pod).
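One way to tell is to compare the initContainers on the live Pod with the ones in the Deployment spec (replace the placeholders with your Pod and Deployment names):
kubectl get pod <pod_name> -o jsonpath='{.spec.initContainers[*].name}'
kubectl get deployment <deployment_name> -o jsonpath='{.spec.template.spec.initContainers[*].name}'
If a container shows up on the Pod but not on the Deployment, it is being injected by something outside the chart.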
If this doesn't help, please share your values.yaml and the Deployment's YAML.
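For example, to capture the live Deployment spec, and (if the chart was actually installed as a release named site-wordpress) the manifest Helm last applied for it:
kubectl get deployment <deployment_name> -o yaml
helm get manifest site-wordpress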
Comments
What you accept in values is essentially an initContainers: fragment, except with many of the options missing. Can your chart take a stronger opinion on what images it might want to run (if any) as initContainers:?
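For example, instead of accepting a free-form list, the chart could expose a single opt-in initContainer whose image and behaviour are fixed by the chart; the names and values below are only illustrative:
# values.yaml (illustrative)
waitForDatabase:
  enabled: true
  host: db
  port: 3306

# templates/deployment.yaml (illustrative fragment)
{{- if .Values.waitForDatabase.enabled }}
initContainers:
  - name: wait-for-database
    image: busybox:1.36
    command: ["sh", "-c", "until nc -z {{ .Values.waitForDatabase.host }} {{ .Values.waitForDatabase.port }}; do sleep 2; done"]
{{- end }}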