
We are using ArgoCD ApplicationSet to deploy multiple applications. The project structure looks like this:

├── clusters
│   ├── first
│   │   ├── Chart.yaml
│   │   ├── requirements.yaml
│   │   └── values.yaml
│   └── second
│       ├── Chart.yaml
│       ├── requirements.yaml
│       └── values.yaml
├── clusters.yaml
└── README.md

ApplicationSet definition (clusters.yaml)

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: capi-clusters
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - git:
        repoURL: http://gitlab.farbeen.local/h.alipour/sre.git
        revision: HEAD
        directories:
          - path: clusters/*
  template:
    metadata:
      name: '{{.path.basename}}'
    spec:
      project: default
      source:
        repoURL: http://gitlab.farbeen.local/h.alipour/sre.git
        path: '{{.path.path}}'
        targetRevision: HEAD
      destination:
        server: https://kubernetes.default.svc
        namespace: default
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

Notes on the setup

  • Each application (first, second) is represented as a Helm chart.
  • The only difference between them is their respective values.yaml.
  • For completeness, here are the Helm chart definitions:

Chart.yaml

apiVersion: v2
name: first
version: 1.0.0
type: application
appVersion: "1.0"

requirements.yaml

dependencies:
  - name: capi-vsphere
    version: 0.1.0
    repository: http://repo.farbeen.local/repository/helm-local/
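Side note: since the chart declares apiVersion: v2, the Helm 3 convention is to list dependencies directly in Chart.yaml rather than in a separate requirements.yaml (the Helm 2 / apiVersion v1 layout). An equivalent Chart.yaml for the first chart would merge the two files, with the same fields as above:

apiVersion: v2
name: first
version: 1.0.0
type: application
appVersion: "1.0"
dependencies:
  - name: capi-vsphere
    version: 0.1.0
    repository: http://repo.farbeen.local/repository/helm-local/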

The problem

When ArgoCD deploys the applications, it ignores the local values.yaml files inside first and second. Instead, it only applies the default values bundled in the dependency chart (capi-vsphere) from the remote Helm repository.

We expect ArgoCD to use the custom values.yaml provided for each application, but this is not happening.
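For context, the values.yaml that was being ignored presumably looked something like the following, with the overrides at the top level of the file rather than scoped to the dependency (a reconstruction based on the accepted answer; the original file is not shown):

cluster:
  kubernetesVersion: v1.32.4
  clusterName: "lake-cluster"

Helm treats such top-level keys as values for the parent chart (first), which has no templates of its own, so the capi-vsphere subchart never receives them and falls back to its bundled defaults.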

We also used this as a reference.

asked Oct 5, 2025 at 6:35

1 Answer


I found the problem!
It was a silly mistake.

You have to override the dependency's values in first/values.yaml or second/values.yaml by nesting them under the dependency name, like the following:

capi-vsphere:   # <--- I missed this top-level key!
  cluster:
    kubernetesVersion: v1.32.4
    clusterNamespace: default
    clusterName: "lake-cluster"
    ccm: registry.k8s.io/cloud-pv-vsphere/cloud-provider-vsphere:v1.32.4
  kubevip:
    address: 172.16.10.144
    image: ghcr.io/kube-vip/kube-vip:v0.6.4
    imagePullPolicy: IfNotPresent
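This is standard Helm behavior, not ArgoCD-specific: a parent chart passes values into a subchart only under a top-level key matching the subchart's name, or via the special global: key (which the subchart's templates must read as .Values.global). A minimal sketch of the two mechanisms, where the registry key is a hypothetical example value:

capi-vsphere:            # delivered to the capi-vsphere subchart as .Values.cluster
  cluster:
    clusterName: "lake-cluster"
global:                  # visible to the parent and every subchart as .Values.global
  registry: repo.farbeen.local   # hypothetical, for illustration only

In practice the scoped form shown in the answer is the right fix here, since it requires no changes to the subchart's templates.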
answered Oct 5, 2025 at 8:20