A proof of concept that uses K3s to create a Kubernetes cluster spanning nodes from different public cloud providers (AWS, GCP, Azure), connected via Kilo.
What does this setup make possible?
- Automatic failover, even across availability zones and clouds
- A cloud-agnostic setup that lets you mix and match the services and offerings that fit best
```sh
# create RSA key
ssh-keygen -b 4096 -t rsa -f ~/.ssh/cloud-key
```
Copy the contents of the public key `~/.ssh/cloud-key.pub` into `.auto.tfvars` as `public_ssh_key` (see `.auto.tfvars.example`). Terraform picks up this file automatically.
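As a minimal sketch, the value can also be written into `.auto.tfvars` straight from the shell (the variable name matches `.auto.tfvars.example`; the heredoc is just one convenient way to do it):

```sh
# append the public key to .auto.tfvars as the public_ssh_key variable
cat >> .auto.tfvars <<EOF
public_ssh_key = "$(cat ~/.ssh/cloud-key.pub)"
EOF
```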
You can also override the value as follows:
- using the CLI `-var` option: `terraform apply -var="public_ssh_key=..."`
- using an environment variable: `export TF_VAR_public_ssh_key="..."`
```sh
# init, plan, and apply infrastructure
# use `-target=module.gcp_us_central1` to target specific modules
terraform init
terraform plan
terraform apply

# show resources and details
terraform output
terraform state list
terraform state show module.aws_us_east_1.aws_instance.node

# destroy infrastructure
terraform destroy
```
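Once the instances are up, you can SSH into them with the key created earlier to carry out the manual setup steps listed below. This is only a sketch: the output name is hypothetical (run `terraform output` to see the real ones), and the login user depends on the image (the official Debian 11 AWS AMI uses `admin`, for instance).

```sh
# look up a node's public IP; the output name here is hypothetical
NODE_IP=$(terraform output -raw aws_us_east_1_public_ip)

# connect with the key pair generated earlier
ssh -i ~/.ssh/cloud-key admin@"$NODE_IP"
```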
- ~~Ensure all nodes use Debian 11~~
- ~~Open port UDP 51820 for WireGuard (inbound and outbound)~~
- ~~Install WireGuard on all nodes (docs)~~ (see the node preparation sketch below)
- ~~Configure the WireGuard network interface on all nodes (docs)~~
- ~~Install K3s on all nodes (Conceptual Overview, Quick Start)~~ (see the K3s/Kilo sketch below)
- ~~Specify the topology (annotating location and optionally region)~~
- ~~Deploy Kilo on all nodes~~
- ~~Figure out how to join the Azure node~~
- Deploy traefik/whoami services to test connectivity (see the connectivity test sketch below)
- Look into cloud-init for cloud instance initialisation
- Enable cgroups v2 on the Azure node (see the cgroups sketch below)
- Annotate `location` and `force-endpoint` in order to make Kilo aware of the topology (see the K3s/Kilo sketch below)
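For reference, a minimal sketch of the node preparation covered by the checked items above, assuming Debian 11; remember that UDP 51820 has to be open inbound and outbound on every node, and see the WireGuard docs for the actual interface configuration:

```sh
# install WireGuard from the Debian 11 repositories
sudo apt update
sudo apt install -y wireguard

# confirm the kernel module and the userspace tools are available
sudo modprobe wireguard
wg --version
```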
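The K3s/Kilo steps might look roughly like the sketch below. The k3s flags and the manifest URL follow my reading of the Kilo docs for k3s and may have changed; node names, locations, and endpoint addresses are placeholders, while `kilo.squat.ai/location` and `kilo.squat.ai/force-endpoint` are the documented Kilo annotations.

```sh
# install K3s on the first (server) node; Kilo replaces flannel as the CNI,
# so flannel and the built-in network policy controller are disabled
curl -sfL https://get.k3s.io | sh -s - --flannel-backend=none --disable-network-policy

# join the other nodes as agents (server URL and token are placeholders)
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -

# deploy Kilo (check the Kilo repository for the current manifest path)
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-k3s.yaml

# annotate each node with its logical location and, where needed, a fixed
# WireGuard endpoint (node name, location, and address are examples)
kubectl annotate node aws-node-1 kilo.squat.ai/location="aws-us-east-1"
kubectl annotate node aws-node-1 kilo.squat.ai/force-endpoint="203.0.113.10:51820"
```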
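A quick connectivity check with traefik/whoami could look like this sketch; the deployment and service names are arbitrary, and the idea is simply to reach pods scheduled on nodes in different clouds:

```sh
# run whoami and scale it up so pods land on multiple nodes
kubectl create deployment whoami --image=traefik/whoami
kubectl scale deployment whoami --replicas=3
kubectl expose deployment whoami --port=80

# call the service from inside the cluster; repeat to see responses
# from pods running in different clouds
kubectl run whoami-test --rm -it --restart=Never --image=busybox -- wget -qO- http://whoami
```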
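For the cgroups v2 item, a hedged sketch assuming the Azure node boots via GRUB with systemd (Debian 11 already defaults to cgroup v2, so this should only matter if that node runs an older image):

```sh
# switch systemd to the unified cgroup hierarchy (cgroup v2) via the kernel cmdline
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
sudo update-grub
sudo reboot

# after the reboot this should print "cgroup2fs"
stat -fc %T /sys/fs/cgroup/
```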