Ensure ingress-dns pod is scheduled on the primary node #21132


Open

fbyrne wants to merge 1 commit into kubernetes:master from fbyrne:bugfix-17648

Conversation

@fbyrne
Contributor

@fbyrne fbyrne commented Jul 23, 2025

Reopen of PR #17649.

fixes #17648

Add nodeSelector for primary minikube node to ingress-dns pod template.
Add toleration for kubernetes master role.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jul 23, 2025
@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jul 23, 2025
Contributor

Hi @fbyrne. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Jul 23, 2025
Collaborator

Can one of the admins verify this patch?

Member

medyagh commented Jul 24, 2025

thank you @fbyrne, sorry that your other PR didn't get my attention. Do you mind explaining how this fixes it?
Can you post Before/After output for this PR?

Contributor Author

fbyrne commented Jul 25, 2025

Hi @medyagh, the fix does two things:

  1. Specify that the pod is scheduled on the minikube primary node, using the nodeSelector minikube.k8s.io/primary: 'true'.
  2. Specify that the pod tolerates the NoSchedule taint on the primary/master node. This is the same as is done for the ingress addon here and here.

I'll post the before/after now.
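The two additions can be sketched as pod-template YAML. This is a sketch reconstructed from the values quoted in this thread, not the literal addon manifest; the toleration key node-role.kubernetes.io/master is assumed from the "after" pod description further down.

```yaml
# Sketch only: reconstructed from the values discussed in this PR,
# not copied from the actual ingress-dns addon template.
spec:
  nodeSelector:
    minikube.k8s.io/primary: "true"       # schedule on the primary minikube node
  tolerations:
    - key: node-role.kubernetes.io/master # tolerate the control-plane NoSchedule taint
      effect: NoSchedule
```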

Contributor Author

fbyrne commented Jul 25, 2025

Notice that before the fix, the ingress-dns pod can be launched on an arbitrary node, as in the example below:

Node: before-bugfix-17648-m02/192.168.49.3
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

Notice that after the fix, the ingress-dns pod is launched on the primary/master node:

Node: after-bugfix-17648/192.168.49.2
Node-Selectors: minikube.k8s.io/primary=true
Tolerations: node-role.kubernetes.io/master:NoSchedule
 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
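A quick way to pull the scheduling-relevant fields out of a saved kubectl describe dump is a grep filter like the following (purely illustrative; the inline sample mimics the "after" output above rather than querying a live cluster):

```shell
# Illustrative only: filter the Node / Node-Selectors / Tolerations lines
# from a saved `kubectl describe pod` dump. The sample below mimics the
# "after" output; in practice you would redirect real describe output here.
cat <<'EOF' > /tmp/describe.txt
Node: after-bugfix-17648/192.168.49.2
Node-Selectors: minikube.k8s.io/primary=true
Tolerations: node-role.kubernetes.io/master:NoSchedule
EOF
grep -E '^(Node|Node-Selectors|Tolerations):' /tmp/describe.txt
```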

Full before output

minikube on  HEAD (7498245) via  v1.24.0 on ☁️ 
at 13:26:01 ❯ git log --oneline
7498245a9 (HEAD, upstream/master, upstream/HEAD, origin/master, origin/HEAD, master) replace spinner lib to upstream (#21115)
2629cf7b6 Build(deps): Bump k8s.io/component-base from 0.33.2 to 0.33.3 (#21114)
320095eee Update auto-generated docs and translations (#21121)
70fc56d67 Build(deps): Bump actions/setup-go from 5.2.0 to 5.5.0 (#21116)
9e4cd659c Update auto-generated docs and translations (#21120)
d13f7bbe6 Build(deps): Bump peter-evans/create-pull-request from 7.0.6 to 7.0.8 (#21118)
958ecac9d Refactor table rendering (#20893)
d850b6921 bump winget gihtub action (#21119)
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ 
at 13:27:38 ❯ make clean
rm -rf /home/fergus/src/github/fbyrne/minikube/out
rm -f pkg/minikube/assets/assets.go
rm -f pkg/minikube/translate/translations.go
rm -rf ./vendor
rm -rf /tmp/tmp.*.minikube_*
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ 
at 13:27:54 ❯ make
go build -tags "" -ldflags="-X k8s.io/minikube/pkg/version.version=v1.36.0 -X k8s.io/minikube/pkg/version.isoVersion=v1.36.0-1752940814-21089 -X k8s.io/minikube/pkg/version.gitCommitID="7498245a96aa3e9be815f5f676aad34e9171f6f7-dirty" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -o out/minikube k8s.io/minikube/cmd/minikube
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ 
at 13:28:10 ❯ out/minikube delete --purge --all
🔥 Deleting "minikube" in docker ...
🔥 Removing /home/fergus/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
💀 Successfully purged minikube directory located at - [/home/fergus/.minikube]
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ 
at 13:28:18 ❯ out/minikube -p before-bugfix-17648 -n 3 start
😄 [before-bugfix-17648] minikube v1.36.0 on Ubuntu 24.04
✨ Automatically selected the docker driver. Other choices: kvm2, ssh
📌 Using Docker driver with root privileges
👍 Starting "before-bugfix-17648" primary control-plane node in "before-bugfix-17648" cluster
🚜 Pulling base image v0.0.47-1752142599-21053 ...
💾 Downloading Kubernetes v1.33.2 preload ...
 > preloaded-images-k8s-v18-v1...: 348.11 MiB / 348.11 MiB 100.00% 24.14 M
 > gcr.io/k8s-minikube/kicbase...: 485.70 MiB / 485.70 MiB 100.00% 12.98 M
🔥 Creating docker container (CPUs=2, Memory=5166MB) ...
🐳 Preparing Kubernetes v1.33.2 on Docker 28.3.2 ...
 ▪ Generating certificates and keys ...
 ▪ Booting up control plane ...
 ▪ Configuring RBAC rules ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
 ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
👍 Starting "before-bugfix-17648-m02" worker node in "before-bugfix-17648" cluster
🚜 Pulling base image v0.0.47-1752142599-21053 ...
🔥 Creating docker container (CPUs=2, Memory=5166MB) ...
🌐 Found network options:
 ▪ NO_PROXY=192.168.49.2
🐳 Preparing Kubernetes v1.33.2 on Docker 28.3.2 ...
 ▪ env NO_PROXY=192.168.49.2
🔎 Verifying Kubernetes components...
👍 Starting "before-bugfix-17648-m03" worker node in "before-bugfix-17648" cluster
🚜 Pulling base image v0.0.47-1752142599-21053 ...
🔥 Creating docker container (CPUs=2, Memory=5166MB) ...
🌐 Found network options:
 ▪ NO_PROXY=192.168.49.2,192.168.49.3
🐳 Preparing Kubernetes v1.33.2 on Docker 28.3.2 ...
 ▪ env NO_PROXY=192.168.49.2
 ▪ env NO_PROXY=192.168.49.2,192.168.49.3
🔎 Verifying Kubernetes components...
🏄 Done! kubectl is now configured to use "before-bugfix-17648" cluster and "default" namespace by default
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ took 1m27s 
at 13:30:03 ❯ out/minikube -p before-bugfix-17648 addons enable ingress
💡 ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
 ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
 ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
 ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ took 9s 
at 13:30:32 ❯ out/minikube -p before-bugfix-17648 addons enable ingress-dns
💡 ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
 ▪ Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
🌟 The 'ingress-dns' addon is enabled
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ 
at 13:30:49 ❯ out/minikube -p before-bugfix-17648 kubectl -- describe pod kube-ingress-dns-minikube --namespace=kube-system
 > kubectl.sha256: 64 B / 64 B [-------------------------] 100.00% ? p/s 0s
 > kubectl: 57.34 MiB / 57.34 MiB [------------] 100.00% 38.74 MiB p/s 1.7s
Name: kube-ingress-dns-minikube
Namespace: kube-system
Priority: 0
Service Account: minikube-ingress-dns
Node: before-bugfix-17648-m02/192.168.49.3
Start Time: 2025年7月25日 13:30:49 +0100
Labels: app=minikube-ingress-dns
 app.kubernetes.io/part-of=kube-system
Annotations: <none>
Status: Running
IP: 192.168.49.3
IPs:
 IP: 192.168.49.3
Containers:
 minikube-ingress-dns:
 Container ID: docker://f95f54399187e940d012df5384d8a125f36a0de8d4c9b1cf9bcafaf71e111fb4
 Image: gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c
 Image ID: docker-pullable://gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c
 Port: 53/UDP
 Host Port: 53/UDP
 State: Running
 Started: 2025年7月25日 13:30:55 +0100
 Ready: True
 Restart Count: 0
 Environment:
 DNS_PORT: 53
 POD_IP: (v1:status.podIP)
 Mounts:
 /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kb447 (ro)
Conditions:
 Type Status
 PodReadyToStartContainers True 
 Initialized True 
 Ready True 
 ContainersReady True 
 PodScheduled True 
Volumes:
 kube-api-access-kb447:
 Type: Projected (a volume that contains injected data from multiple sources)
 TokenExpirationSeconds: 3607
 ConfigMapName: kube-root-ca.crt
 Optional: false
 DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal Scheduled 23s default-scheduler Successfully assigned kube-system/kube-ingress-dns-minikube to before-bugfix-17648-m02
 Normal Pulling 22s kubelet Pulling image "gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c"
 Normal Pulled 17s kubelet Successfully pulled image "gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c" in 5.586s (5.586s including waiting). Image size: 190556102 bytes.
 Normal Created 17s kubelet Created container: minikube-ingress-dns
 Normal Started 17s kubelet Started container minikube-ingress-dns
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ took 2s 
at 13:31:12 ❯ kubectl apply -f https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/ingress-dns/example/example.yaml
deployment.apps/hello-world-app created
ingress.networking.k8s.io/example-ingress created
service/hello-world-app created
service/hello-world-app created
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ 
at 13:31:56 ❯ out/minikube -p before-bugfix-17648 kubectl -- get po
NAME READY STATUS RESTARTS AGE
hello-world-app-7d9564db4-f6ffv 1/1 Running 0 19s
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ 
at 13:32:15 ❯ out/minikube -p before-bugfix-17648 kubectl -- get ing --all-namespaces
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
kube-system example-ingress nginx hello-john.test,hello-jane.test 80 35s
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ 
at 13:32:31 ❯ nslookup hello-john.test $(out/minikube -p before-bugfix-17648 ip)
;; communications error to 192.168.49.2#53: connection refused
;; communications error to 192.168.49.2#53: connection refused
;; communications error to 192.168.49.2#53: connection refused
;; no servers could be reached
minikube on  HEAD (7498245) [!] via  v1.24.0 on ☁️ 
at 13:32:51 ❯ nslookup hello-jane.test $(out/minikube -p before-bugfix-17648 ip)
;; communications error to 192.168.49.2#53: connection refused
;; communications error to 192.168.49.2#53: connection refused
;; communications error to 192.168.49.2#53: connection refused
;; no servers could be reached

Full after output

minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ 
at 13:36:37 ❯ git log --oneline -n 5
58bc2bd16 (HEAD -> bugfix-17648, origin/bugfix-17648) 17648 Fix to schedule the ingress-dns pod on the minikube primary node in multinode clusters
7498245a9 (upstream/master, upstream/HEAD, origin/master, origin/HEAD, master) replace spinner lib to upstream (#21115)
2629cf7b6 Build(deps): Bump k8s.io/component-base from 0.33.2 to 0.33.3 (#21114)
320095eee Update auto-generated docs and translations (#21121)
70fc56d67 Build(deps): Bump actions/setup-go from 5.2.0 to 5.5.0 (#21116)
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ 
at 13:36:40 ❯ make clean
rm -rf /home/fergus/src/github/fbyrne/minikube/out
rm -f pkg/minikube/assets/assets.go
rm -f pkg/minikube/translate/translations.go
rm -rf ./vendor
rm -rf /tmp/tmp.*.minikube_*
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ 
at 13:36:46 ❯ make
go build -tags "" -ldflags="-X k8s.io/minikube/pkg/version.version=v1.36.0 -X k8s.io/minikube/pkg/version.isoVersion=v1.36.0-1752940814-21089 -X k8s.io/minikube/pkg/version.gitCommitID="58bc2bd16d03f6a9f0bea0abc55166132e65bd2e-dirty" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -o out/minikube k8s.io/minikube/cmd/minikube
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ took 3s 
at 13:36:51 ❯ out/minikube delete --purge --all
🔥 Deleting "before-bugfix-17648" in docker ...
🔥 Removing /home/fergus/.minikube/machines/before-bugfix-17648 ...
🔥 Removing /home/fergus/.minikube/machines/before-bugfix-17648-m02 ...
🔥 Removing /home/fergus/.minikube/machines/before-bugfix-17648-m03 ...
💀 Removed all traces of the "before-bugfix-17648" cluster.
🔥 Successfully deleted all profiles
💀 Successfully purged minikube directory located at - [/home/fergus/.minikube]
📌 Kicbase images have not been deleted. To delete images run:
 ▪ docker rmi gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1752142599-21053
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ took 6s 
at 13:37:27 ❯ out/minikube -p after-bugfix-17648 -n 3 start
😄 [after-bugfix-17648] minikube v1.36.0 on Ubuntu 24.04
✨ Automatically selected the docker driver. Other choices: kvm2, ssh
📌 Using Docker driver with root privileges
👍 Starting "after-bugfix-17648" primary control-plane node in "after-bugfix-17648" cluster
🚜 Pulling base image v0.0.47-1752142599-21053 ...
💾 Downloading Kubernetes v1.33.2 preload ...
 > preloaded-images-k8s-v18-v1...: 348.11 MiB / 348.11 MiB 100.00% 34.66 M
🔥 Creating docker container (CPUs=2, Memory=5166MB) ...
🐳 Preparing Kubernetes v1.33.2 on Docker 28.3.2 ...
 ▪ Generating certificates and keys ...
 ▪ Booting up control plane ...
 ▪ Configuring RBAC rules ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
 ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
👍 Starting "after-bugfix-17648-m02" worker node in "after-bugfix-17648" cluster
🚜 Pulling base image v0.0.47-1752142599-21053 ...
🔥 Creating docker container (CPUs=2, Memory=5166MB) ...
🌐 Found network options:
 ▪ NO_PROXY=192.168.49.2
🐳 Preparing Kubernetes v1.33.2 on Docker 28.3.2 ...
 ▪ env NO_PROXY=192.168.49.2
🔎 Verifying Kubernetes components...
👍 Starting "after-bugfix-17648-m03" worker node in "after-bugfix-17648" cluster
🚜 Pulling base image v0.0.47-1752142599-21053 ...
🔥 Creating docker container (CPUs=2, Memory=5166MB) ...
🌐 Found network options:
 ▪ NO_PROXY=192.168.49.2,192.168.49.3
🐳 Preparing Kubernetes v1.33.2 on Docker 28.3.2 ...
 ▪ env NO_PROXY=192.168.49.2
 ▪ env NO_PROXY=192.168.49.2,192.168.49.3
🔎 Verifying Kubernetes components...
🏄 Done! kubectl is now configured to use "after-bugfix-17648" cluster and "default" namespace by default
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ took 52s 
at 13:38:38 ❯ out/minikube -p after-bugfix-17648 addons enable ingress
💡 ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
 ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
 ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.12.3
 ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.4
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ took 9s 
at 13:39:13 ❯ out/minikube -p after-bugfix-17648 addons enable ingress-dns
💡 ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
 ▪ Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
🌟 The 'ingress-dns' addon is enabled
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ 
at 13:43:58 ❯ out/minikube -p after-bugfix-17648 kubectl -- describe pod kube-ingress-dns-minikube --namespace=kube-system
Name: kube-ingress-dns-minikube
Namespace: kube-system
Priority: 0
Service Account: minikube-ingress-dns
Node: after-bugfix-17648/192.168.49.2
Start Time: 2025年7月25日 13:39:20 +0100
Labels: app=minikube-ingress-dns
 app.kubernetes.io/part-of=kube-system
Annotations: <none>
Status: Running
IP: 192.168.49.2
IPs:
 IP: 192.168.49.2
Containers:
 minikube-ingress-dns:
 Container ID: docker://be746a662af62eef900f0d4a1292213adf483c038c5b1dae7046baca380faa00
 Image: gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c
 Image ID: docker-pullable://gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c
 Port: 53/UDP
 Host Port: 53/UDP
 State: Running
 Started: 2025年7月25日 13:39:26 +0100
 Ready: True
 Restart Count: 0
 Environment:
 DNS_PORT: 53
 POD_IP: (v1:status.podIP)
 Mounts:
 /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dbpqx (ro)
Conditions:
 Type Status
 PodReadyToStartContainers True 
 Initialized True 
 Ready True 
 ContainersReady True 
 PodScheduled True 
Volumes:
 kube-api-access-dbpqx:
 Type: Projected (a volume that contains injected data from multiple sources)
 TokenExpirationSeconds: 3607
 ConfigMapName: kube-root-ca.crt
 Optional: false
 DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: minikube.k8s.io/primary=true
Tolerations: node-role.kubernetes.io/master:NoSchedule
 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal Scheduled 4m44s default-scheduler Successfully assigned kube-system/kube-ingress-dns-minikube to after-bugfix-17648
 Normal Pulling 4m44s kubelet Pulling image "gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c"
 Normal Pulled 4m38s kubelet Successfully pulled image "gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c" in 5.364s (5.364s including waiting). Image size: 190556102 bytes.
 Normal Created 4m38s kubelet Created container: minikube-ingress-dns
 Normal Started 4m38s kubelet Started container minikube-ingress-dns
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ took 2s 
at 13:39:53 ❯ out/minikube -p after-bugfix-17648 kubectl -- apply -f https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/ingress-dns/example/example.yaml
deployment.apps/hello-world-app created
ingress.networking.k8s.io/example-ingress created
service/hello-world-app created
service/hello-world-app created
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ 
at 13:40:11 ❯ out/minikube -p after-bugfix-17648 kubectl -- get po
NAME READY STATUS RESTARTS AGE
hello-world-app-7d9564db4-v7nf8 1/1 Running 0 21s
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ 
at 13:40:32 ❯ out/minikube -p after-bugfix-17648 kubectl -- get ing --all-namespaces
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
kube-system example-ingress nginx hello-john.test,hello-jane.test 192.168.49.2 80 45s
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ 
at 13:40:56 ❯ nslookup hello-john.test $(out/minikube -p after-bugfix-17648 ip)
Server: 192.168.49.2
Address: 192.168.49.2#53
Non-authoritative answer:
Name: hello-john.test
Address: 192.168.49.2
Name: hello-john.test
Address: 192.168.49.2
minikube on  bugfix-17648 [!] via  v1.24.0 on ☁️ 
at 13:41:12 ❯ nslookup hello-jane.test $(out/minikube -p after-bugfix-17648 ip)
Server: 192.168.49.2
Address: 192.168.49.2#53
Non-authoritative answer:
Name: hello-jane.test
Address: 192.168.49.2
Name: hello-jane.test
Address: 192.168.49.2

Contributor Author

fbyrne commented Jul 29, 2025

@medyagh bump on this

Contributor Author

fbyrne commented Jul 30, 2025

/cc @kubernetes/sig-cluster-lifecycle

Contributor

@fbyrne: GitHub didn't allow me to request PR reviews from the following users: fbyrne.

Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

/cc

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Contributor Author

fbyrne commented Aug 20, 2025

@medyagh bump on this, it's been 3 weeks.

Contributor Author

fbyrne commented Aug 22, 2025

@medyagh @prezha can you take a look at this please? It's the second time I've opened this PR, and it has been pending for a month now.
It's a small PR and shouldn't take long.

Contributor

prezha commented Sep 6, 2025

hey @fbyrne, really sorry for not responding sooner, i missed this one

@medyagh, i think the proposed fix is ok: it would ensure that the ingress-dns pod is scheduled on the primary node for multi-node clusters, and thus ensure it's reachable using the primary $(minikube ip) address

i remember i had to do something similar in #13439

thanks for your pr, @fbyrne !

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 6, 2025
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fbyrne, prezha

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [prezha]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 6, 2025
@prezha prezha changed the title from "17648 Fix to schedule the ingress-dns pod on the minikube primary nod..." to "Ensure ingress-dns pod is scheduled on the primary node" Sep 6, 2025
Contributor

prezha commented Sep 6, 2025

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 6, 2025

kvm2 driver with docker runtime

┌────────────────┬──────────┬────────────────────────┐
│ COMMAND │ MINIKUBE │ MINIKUBE ( PR 21132 ) │
├────────────────┼──────────┼────────────────────────┤
│ minikube start │ 48.5s │ 48.6s │
│ enable ingress │ 15.6s │ 15.6s │
└────────────────┴──────────┴────────────────────────┘

Times for minikube ingress: 15.7s 15.7s 15.3s 15.3s 16.2s
Times for minikube (PR 21132) ingress: 15.2s 15.8s 15.8s 15.7s 15.8s

Times for minikube start: 47.4s 51.8s 47.7s 45.5s 50.2s
Times for minikube (PR 21132) start: 47.5s 48.8s 48.2s 50.1s 48.4s

docker driver with docker runtime

┌────────────────┬──────────┬────────────────────────┐
│ COMMAND │ MINIKUBE │ MINIKUBE ( PR 21132 ) │
├────────────────┼──────────┼────────────────────────┤
│ minikube start │ 23.7s │ 24.4s │
│ enable ingress │ 13.5s │ 13.6s │
└────────────────┴──────────┴────────────────────────┘

Times for minikube start: 22.5s 23.6s 25.7s 22.9s 23.7s
Times for minikube (PR 21132) start: 27.4s 23.4s 22.5s 23.2s 25.5s

Times for minikube ingress: 13.6s 13.6s 13.6s 13.1s 13.6s
Times for minikube (PR 21132) ingress: 13.6s 13.6s 13.6s 13.6s 13.6s

docker driver with containerd runtime

┌────────────────┬──────────┬────────────────────────┐
│ COMMAND │ MINIKUBE │ MINIKUBE ( PR 21132 ) │
├────────────────┼──────────┼────────────────────────┤
│ minikube start │ 23.0s │ 22.1s │
│ enable ingress │ 31.1s │ 27.3s │
└────────────────┴──────────┴────────────────────────┘

Times for minikube (PR 21132) start: 24.1s 23.9s 21.1s 19.8s 21.3s
Times for minikube start: 23.9s 22.7s 21.7s 21.3s 25.3s

Times for minikube ingress: 24.1s 28.1s 40.1s 40.1s 23.1s
Times for minikube (PR 21132) ingress: 24.1s 24.1s 40.1s 24.1s 24.1s


Here are the top 10 failed tests with the lowest flake rate in each environment.

Environment Test Name Flake Rate
Docker_Linux_containerd_arm64 (1 failed) TestAddons/serial/Volcano(gopogh) 0.00% (chart)
Docker_Linux_crio (6 failed) TestFunctional/parallel/ServiceCmdConnect(gopogh) 0.00% (chart)
Docker_Linux_crio (6 failed) TestFunctional/parallel/ServiceCmd/DeployApp(gopogh) 0.00% (chart)
Docker_Linux_crio (6 failed) TestFunctional/parallel/ServiceCmd/HTTPS(gopogh) 0.00% (chart)
Docker_Linux_crio (6 failed) TestFunctional/parallel/ServiceCmd/Format(gopogh) 0.00% (chart)
Docker_Linux_crio (6 failed) TestFunctional/parallel/ServiceCmd/URL(gopogh) 0.00% (chart)
Docker_Linux_containerd (1 failed) TestFunctional/parallel/MountCmd/specific-port(gopogh) 2.04% (chart)

Besides the following environments also have failed tests:

To see the flake rates of all tests by environment, click here.


Thanks @prezha ,

I think the test failures above are just some random failures.

What's the process for this?

Contributor

@fbyrne: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-minikube-integration 58bc2bd link true /test pull-minikube-integration
integration-docker-docker-linux-x86-64 58bc2bd link true /test integration-docker-docker-linux-x86-64
integration-docker-crio-linux-x86-64 58bc2bd link true /test integration-docker-crio-linux-x86-64
integration-kvm-docker-linux-x86-64 58bc2bd link true /test integration-kvm-docker-linux-x86-64
integration-none-docker-linux-x86-64 58bc2bd link true /test integration-none-docker-linux-x86-64
integration-kvm-containerd-linux-x86-64 58bc2bd link true /test integration-kvm-containerd-linux-x86-64
integration-docker-containerd-linux-x86-64 58bc2bd link true /test integration-docker-containerd-linux-x86-64
integration-kvm-crio-linux-x86-64 58bc2bd link true /test integration-kvm-crio-linux-x86-64

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Reviewers

@medyagh medyagh Awaiting requested review from medyagh

@prezha prezha Awaiting requested review from prezha

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. lgtm "Looks good to me", indicates that a PR is ready to be merged. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files.


Development

Successfully merging this pull request may close these issues.

Ingress-dns addon not working as expected for multinode clusters
