Create a hybrid cluster on Compute Engine VMs
This page shows you how to set up a hybrid cluster in high availability (HA) mode using Virtual Machines (VMs) running on Compute Engine.
You can quickly try out Google Distributed Cloud (software only) without having to prepare any hardware. Completing the steps on this page gives you a working test environment that runs on Compute Engine.
To try Google Distributed Cloud software-only on Compute Engine VMs, complete the following steps:
- Create six VMs in Compute Engine
- Create a `vxlan` network with L2 connectivity between all VMs
- Install prerequisites for Google Distributed Cloud
- Deploy a hybrid cluster
- Verify your cluster
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get 300ドル in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

  Roles required to select or create a project:

  - Select a project: Selecting a project doesn't require a specific IAM role; you can select any project that you've been granted a role on.
  - Create a project: To create a project, you need the Project Creator role (`roles/resourcemanager.projectCreator`), which contains the `resourcemanager.projects.create` permission. Learn how to grant roles.

- Verify that billing is enabled for your Google Cloud project.
- Make a note of the project ID because you need it to set an environment variable that is used in the scripts and commands on this page. If you selected an existing project, make sure that you are either a project owner or editor.
- On your Linux workstation, make sure that you have installed the latest Google Cloud CLI, the command-line tool for interacting with Google Cloud. If you already have the gcloud CLI installed, update its components by running the following command:

  ```shell
  gcloud components update
  ```

  Depending on how the gcloud CLI was installed, you might see the following message:

  "You cannot perform this action because the Google Cloud CLI component manager is disabled for this installation. You can run the following command to achieve the same result for this installation:"

  Follow the instructions to copy and paste the command to update the components.
The steps in this guide are taken from the installation script in the
anthos-samples
repository. The
FAQ section
has more information on how to customize this script to work with some popular
variations.
Create six VMs in Compute Engine
Complete these steps to create the following VMs:
- One VM for the admin workstation. An admin workstation hosts command-line interface (CLI) tools and configuration files to provision clusters during installation, and CLI tools for interacting with provisioned clusters post-installation. The admin workstation has SSH access to all the other nodes in the cluster.
- Three VMs for the three control plane nodes needed to run the Google Distributed Cloud control plane.
- Two VMs for the two worker nodes needed to run workloads on the Google Distributed Cloud cluster.
Set up environment variables:

```shell
export PROJECT_ID=PROJECT_ID
export ZONE=ZONE
export CLUSTER_NAME=CLUSTER_NAME
export BMCTL_VERSION=1.33.200-gke.70
```

For the `ZONE`, you can use `us-central1-a` or any of the other Compute Engine zones.

Run the following commands to log in with your Google Account and set your project as the default:

```shell
gcloud auth login
gcloud config set project $PROJECT_ID
gcloud config set compute/zone $ZONE
```

Create the `baremetal-gcr` service account and key:

```shell
gcloud iam service-accounts create baremetal-gcr

gcloud iam service-accounts keys create bm-gcr.json \
    --iam-account=baremetal-gcr@"${PROJECT_ID}".iam.gserviceaccount.com
```

Enable Google Cloud APIs and services:

```shell
gcloud services enable \
    anthos.googleapis.com \
    anthosaudit.googleapis.com \
    anthosgke.googleapis.com \
    cloudresourcemanager.googleapis.com \
    connectgateway.googleapis.com \
    container.googleapis.com \
    gkeconnect.googleapis.com \
    gkehub.googleapis.com \
    serviceusage.googleapis.com \
    stackdriver.googleapis.com \
    monitoring.googleapis.com \
    logging.googleapis.com \
    opsconfigmonitoring.googleapis.com \
    compute.googleapis.com \
    gkeonprem.googleapis.com \
    iam.googleapis.com \
    kubernetesmetadata.googleapis.com
```

Give the `baremetal-gcr` service account additional permissions to avoid needing multiple service accounts for different APIs and services:

```shell
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:baremetal-gcr@$PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/gkehub.connect" \
    --no-user-output-enabled

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:baremetal-gcr@$PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/gkehub.admin" \
    --no-user-output-enabled

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:baremetal-gcr@$PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/logging.logWriter" \
    --no-user-output-enabled

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:baremetal-gcr@$PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/monitoring.metricWriter" \
    --no-user-output-enabled

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:baremetal-gcr@$PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/monitoring.dashboardEditor" \
    --no-user-output-enabled

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:baremetal-gcr@$PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/stackdriver.resourceMetadata.writer" \
    --no-user-output-enabled

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:baremetal-gcr@$PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/opsconfigmonitoring.resourceMetadata.writer" \
    --no-user-output-enabled

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:baremetal-gcr@$PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/kubernetesmetadata.publisher" \
    --no-user-output-enabled

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:baremetal-gcr@$PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/monitoring.viewer" \
    --no-user-output-enabled

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:baremetal-gcr@$PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/serviceusage.serviceUsageViewer" \
    --no-user-output-enabled
```

Create the variables and arrays needed for all the commands on this page:

```shell
MACHINE_TYPE=n1-standard-8
VM_PREFIX=abm
VM_WS=$VM_PREFIX-ws
VM_CP1=$VM_PREFIX-cp1
VM_CP2=$VM_PREFIX-cp2
VM_CP3=$VM_PREFIX-cp3
VM_W1=$VM_PREFIX-w1
VM_W2=$VM_PREFIX-w2
declare -a VMs=("$VM_WS" "$VM_CP1" "$VM_CP2" "$VM_CP3" "$VM_W1" "$VM_W2")
declare -a IPs=()
```

Use the following loop to create six VMs:

```shell
for vm in "${VMs[@]}"
do
  gcloud compute instances create "$vm" \
    --image-family=ubuntu-2204-lts --image-project=ubuntu-os-cloud \
    --zone="${ZONE}" \
    --boot-disk-size 200G \
    --boot-disk-type pd-ssd \
    --can-ip-forward \
    --network default \
    --tags http-server,https-server \
    --min-cpu-platform "Intel Haswell" \
    --enable-nested-virtualization \
    --scopes cloud-platform \
    --machine-type "$MACHINE_TYPE" \
    --metadata "cluster_id=${CLUSTER_NAME},bmctl_version=${BMCTL_VERSION}"
  IP=$(gcloud compute instances describe "$vm" --zone "${ZONE}" \
    --format='get(networkInterfaces[0].networkIP)')
  IPs+=("$IP")
done
```

This command creates VM instances with the following names:
- `abm-ws`: The VM for the admin workstation.
- `abm-cp1`, `abm-cp2`, `abm-cp3`: The VMs for the control plane nodes.
- `abm-w1`, `abm-w2`: The VMs for the nodes that run workloads.
Use the following loop to verify that SSH is ready on all VMs:
```shell
for vm in "${VMs[@]}"
do
  while ! gcloud compute ssh root@"$vm" --zone "${ZONE}" --command "printf 'SSH to $vm succeeded\n'"
  do
    printf "Trying to SSH into %s failed. Sleeping for 5 seconds. zzzZZzzZZ" "$vm"
    sleep 5
  done
done
```
Create a vxlan network with L2 connectivity between VMs
Use the standard vxlan functionality of Linux to create a network that
connects all the VMs with L2 connectivity.
The following command contains two loops that perform the following actions:
- Use SSH to access each VM.
- Update and install needed packages.
- Execute the required commands to configure the network with `vxlan`.

```shell
i=2 # We start from 10.200.0.2/24
for vm in "${VMs[@]}"
do
  gcloud compute ssh root@"$vm" --zone "${ZONE}" << EOF
    apt-get -qq update > /dev/null
    apt-get -qq install -y jq > /dev/null
    set -x
    ip link add vxlan0 type vxlan id 42 dev ens4 dstport 0
    current_ip=\$(ip --json a show dev ens4 | jq '.[0].addr_info[0].local' -r)
    printf "VM IP address is: \$current_ip"
    for ip in ${IPs[@]}; do
      if [ "\$ip" != "\$current_ip" ]; then
        bridge fdb append to 00:00:00:00:00:00 dst \$ip dev vxlan0
      fi
    done
    ip addr add 10.200.0.$i/24 dev vxlan0
    ip link set up dev vxlan0
EOF
  i=$((i+1))
done
```
You now have L2 connectivity within the 10.200.0.0/24 network. The VMs have the following IP addresses:
- Admin workstation VM: 10.200.0.2
- VMs running the control plane nodes:
- 10.200.0.3
- 10.200.0.4
- 10.200.0.5
- VMs running the worker nodes:
- 10.200.0.6
- 10.200.0.7
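The address plan above falls straight out of the `vxlan` setup loop, whose counter starts at `i=2`. The following local sketch reproduces just that mapping; it makes no network changes and calls no cloud APIs:

```shell
# Reproduce the vxlan address plan: the setup loop starts at i=2 and
# assigns 10.200.0.$i to each VM in creation order.
i=2
for vm in abm-ws abm-cp1 abm-cp2 abm-cp3 abm-w1 abm-w2; do
  echo "$vm -> 10.200.0.$i"
  i=$((i + 1))
done
```

Once the real loop has run, you can confirm L2 reachability from the admin workstation with, for example, `ping 10.200.0.3`.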
Install prerequisites for Google Distributed Cloud
You need to install the following tools on the admin workstation before installing Google Distributed Cloud:
- `bmctl`
- `kubectl`
- Docker
To install the tools and prepare for Google Distributed Cloud installation:
Run the following commands to download the service account key to the admin workstation and install the required tools:
```shell
gcloud compute ssh root@$VM_WS --zone "${ZONE}" << EOF
set -x

export PROJECT_ID=\$(gcloud config get-value project)
BMCTL_VERSION=\$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/bmctl_version -H "Metadata-Flavor: Google")
export BMCTL_VERSION

gcloud iam service-accounts keys create bm-gcr.json \
  --iam-account=baremetal-gcr@\${PROJECT_ID}.iam.gserviceaccount.com

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
mv kubectl /usr/local/sbin/

mkdir baremetal && cd baremetal
gsutil cp gs://anthos-baremetal-release/bmctl/$BMCTL_VERSION/linux-amd64/bmctl .
chmod a+x bmctl
mv bmctl /usr/local/sbin/

cd ~
printf "Installing docker"
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
EOF
```

Run the following commands to ensure that `root@10.200.0.x` works. The commands perform these tasks:

- Generate a new SSH key on the admin workstation.
- Add the public key to all the other VMs in the deployment.
```shell
gcloud compute ssh root@$VM_WS --zone "${ZONE}" << EOF
set -x
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
sed 's/ssh-rsa/root:ssh-rsa/' ~/.ssh/id_rsa.pub > ssh-metadata
for vm in ${VMs[@]}
do
  gcloud compute instances add-metadata \$vm --zone ${ZONE} --metadata-from-file ssh-keys=ssh-metadata
done
EOF
```
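The `sed` substitution in the key-distribution commands exists because Compute Engine's `ssh-keys` metadata expects entries in `USERNAME:KEY` form. Here is a minimal local sketch of that transform, using placeholder key material rather than a real public key:

```shell
# Compute Engine ssh-keys metadata entries take the form "USERNAME:KEY",
# so the public-key line is prefixed with "root:" before being uploaded.
# The key material below is a placeholder, not a real key.
pubkey='ssh-rsa AAAAB3NzaC1yc2E-placeholder root@abm-ws'
echo "$pubkey" | sed 's/ssh-rsa/root:ssh-rsa/'
```

The result, `root:ssh-rsa ...`, is the per-user format that `gcloud compute instances add-metadata` uploads for each VM.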
Deploy a hybrid cluster
The following code block contains all commands and configurations needed to complete the following tasks:
- Create the configuration file for the needed hybrid cluster.
- Run the preflight checks.
- Deploy the cluster.
```shell
gcloud compute ssh root@$VM_WS --zone "${ZONE}" << EOF
set -x
export PROJECT_ID=$(gcloud config get-value project)
CLUSTER_NAME=\$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster_id -H "Metadata-Flavor: Google")
BMCTL_VERSION=\$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/bmctl_version -H "Metadata-Flavor: Google")
export CLUSTER_NAME
export BMCTL_VERSION
bmctl create config -c \$CLUSTER_NAME
cat > bmctl-workspace/\$CLUSTER_NAME/\$CLUSTER_NAME.yaml << EOB
---
gcrKeyPath: /root/bm-gcr.json
sshPrivateKeyPath: /root/.ssh/id_rsa
gkeConnectAgentServiceAccountKeyPath: /root/bm-gcr.json
gkeConnectRegisterServiceAccountKeyPath: /root/bm-gcr.json
cloudOperationsServiceAccountKeyPath: /root/bm-gcr.json
---
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-\$CLUSTER_NAME
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: \$CLUSTER_NAME
  namespace: cluster-\$CLUSTER_NAME
spec:
  type: hybrid
  anthosBareMetalVersion: \$BMCTL_VERSION
  gkeConnect:
    projectID: \$PROJECT_ID
  controlPlane:
    nodePoolSpec:
      clusterName: \$CLUSTER_NAME
      nodes:
      - address: 10.200.0.3
      - address: 10.200.0.4
      - address: 10.200.0.5
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 172.26.232.0/24
  loadBalancer:
    mode: bundled
    ports:
      controlPlaneLBPort: 443
    vips:
      controlPlaneVIP: 10.200.0.49
      ingressVIP: 10.200.0.50
    addressPools:
    - name: pool1
      addresses:
      - 10.200.0.50-10.200.0.70
  clusterOperations:
    # might need to be this location
    location: us-central1
    projectID: \$PROJECT_ID
  storage:
    lvpNodeMounts:
      path: /mnt/localpv-disk
      storageClassName: node-disk
    lvpShare:
      numPVUnderSharedPath: 5
      path: /mnt/localpv-share
      storageClassName: local-shared
  nodeConfig:
    podDensity:
      maxPodsPerNode: 250
---
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: node-pool-1
  namespace: cluster-\$CLUSTER_NAME
spec:
  clusterName: \$CLUSTER_NAME
  nodes:
  - address: 10.200.0.6
  - address: 10.200.0.7
EOB

bmctl create cluster -c \$CLUSTER_NAME
EOF
```

Verify your cluster
You can find your cluster's kubeconfig file on the admin workstation in the
bmctl-workspace directory of the root account. To verify your deployment,
complete the following steps.
Use SSH to access the admin workstation as root:
```shell
gcloud compute ssh root@abm-ws --zone ${ZONE}
```

You can ignore any messages about updating the VM and still complete this tutorial. If you plan to keep the VMs as a test environment, you might want to update the OS or upgrade to the next release as described in the Ubuntu documentation.
Set the `KUBECONFIG` environment variable with the path to the cluster's configuration file to run `kubectl` commands on the cluster.

```shell
export clusterid=CLUSTER_NAME
export KUBECONFIG=$HOME/bmctl-workspace/$clusterid/$clusterid-kubeconfig
kubectl get nodes
```

Set the current context in an environment variable:
```shell
export CONTEXT="$(kubectl config current-context)"
```

Run the following gcloud CLI command. This command:
- Grants your user account the Kubernetes `clusterrole/cluster-admin` role on the cluster.
- Configures the cluster so that you can run `kubectl` commands on your local computer without having to SSH to the admin workstation.
Replace `GOOGLE_ACCOUNT_EMAIL` with the email address that is associated with your Google Cloud account. For example: `--users=alex@example.com`.

```shell
gcloud container fleet memberships generate-gateway-rbac \
    --membership=CLUSTER_NAME \
    --role=clusterrole/cluster-admin \
    --users=GOOGLE_ACCOUNT_EMAIL \
    --project=PROJECT_ID \
    --kubeconfig=$KUBECONFIG \
    --context=$CONTEXT \
    --apply
```

The output of this command is similar to the following, which is truncated for readability:

```
Validating input arguments.
Specified Cluster Role is: clusterrole/cluster-admin
Generated RBAC policy is:
--------------------------------------------
...
Applying the generate RBAC policy to cluster with kubeconfig: /root/bmctl-workspace/CLUSTER_NAME/CLUSTER_NAME-kubeconfig, context: CLUSTER_NAME-admin@CLUSTER_NAME
Writing RBAC policy for user: GOOGLE_ACCOUNT_EMAIL to cluster.
Successfully applied the RBAC policy to cluster.
```
When you are finished exploring, enter `exit` to log out of the admin workstation.
Get the `kubeconfig` entry that can access the cluster through the Connect gateway.

```shell
gcloud container fleet memberships get-credentials CLUSTER_NAME
```

The output is similar to the following:

```
Starting to build Gateway kubeconfig...
Current project_id: PROJECT_ID
A new kubeconfig entry "connectgateway_PROJECT_ID_global_CLUSTER_NAME" has been generated and set as the current context.
```

You can now run `kubectl` commands through the Connect gateway:

```shell
kubectl get nodes
kubectl get namespaces
```
Log in to your cluster from Google Cloud console
To observe your workloads on your cluster in the Google Cloud console, you need to log in to the cluster. Before you log in to the console for the first time, you need to configure an authentication method. The easiest authentication method to configure is Google identity. This authentication method lets you log in using the email address associated with your Google Cloud account.
The gcloud container fleet memberships generate-gateway-rbac command that
you ran in the previous section configures the cluster so that you can log in
with your Google identity.
1. In the Google Cloud console, go to the GKE Clusters page.
2. Click Actions next to the registered cluster, then click Login.
3. Select Use your Google identity to log in.
4. Click Login.
Clean up
Connect to the admin workstation to reset the cluster VMs to their state prior to installation and unregister the cluster from your Google Cloud project:
```shell
gcloud compute ssh root@abm-ws --zone ${ZONE} << EOF
set -x
export clusterid=CLUSTER_NAME
bmctl reset -c \$clusterid
EOF
```

List all VMs that have `abm` in their name:

```shell
gcloud compute instances list | grep 'abm'
```

Verify that you're fine with deleting all VMs that contain `abm` in the name. After you've verified, you can delete the `abm` VMs by running the following command:

```shell
gcloud compute instances list --format="value(name)" | grep 'abm' | xargs gcloud \
    --quiet compute instances delete
```
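Because the delete command is driven entirely by a `grep` filter, it can help to preview the filter logic before pointing it at live output. Here is a harmless local sketch, run against a sample name list instead of real `gcloud` output:

```shell
# Dry run of the selection filter used above: only names containing
# "abm" pass through to the delete command. Sample data only.
printf '%s\n' abm-ws abm-cp1 my-other-vm abm-w1 | grep 'abm'
```

Only `abm-ws`, `abm-cp1`, and `abm-w1` survive the filter; `my-other-vm` is excluded, which is the safety property you rely on when piping into `xargs ... delete`.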