Deploying a highly available MySQL 5.6 cluster with DRBD on Compute Engine
This tutorial walks you through the process of deploying a MySQL 5.6 database to Google Cloud by using Distributed Replicated Block Device (DRBD) and Compute Engine. DRBD is a distributed replicated storage system for the Linux platform.
This tutorial is useful if you are a sysadmin, developer, engineer, database admin, or DevOps engineer. You might want to manage your own MySQL instance instead of using the managed service for several reasons, including:
- You're using cross-region instances of MySQL.
- You need to set parameters that are not available in the managed version of MySQL.
- You want to optimize performance in ways that are not settable in the managed version.
DRBD provides replication at the block device level. That means you don't have to configure replication in MySQL itself, and you get immediate DRBD benefits—for example, support for read load balancing and secure connections.
The tutorial uses Compute Engine, MySQL 5.6, DRBD, Pacemaker, and Corosync.
No advanced knowledge of these tools is required, although this document does reference advanced capabilities like MySQL clustering, DRBD configuration, and Linux resource management.
Architecture
Pacemaker is a cluster resource manager. Corosync is the cluster communication and membership package that Pacemaker uses. In this tutorial, you use DRBD to replicate the MySQL disk from the primary instance to the standby instance. To let clients connect to the MySQL cluster, you also deploy an internal load balancer.
You deploy a Pacemaker-managed cluster of three compute instances. You install MySQL on two of the instances, which serve as your primary and standby instances. The third instance serves as a quorum device.
In a cluster, each node votes to determine which node should be active—that is, the one that runs MySQL. In a two-node cluster, a single vote is enough to determine the active node, which can lead to split-brain issues or downtime. Split-brain occurs when both nodes take control, because only one vote is needed. Downtime occurs when the node that shuts down is the one configured to always be the primary in case of connectivity loss. If the two nodes lose connectivity with each other, there's a risk that more than one cluster node assumes it's the active node.
Adding a quorum device prevents this situation. A quorum device serves as an
arbiter, where its only job is to cast a vote. This way, in a situation where
the database1 and database2 instances cannot communicate, this quorum device
node can communicate with one of the two instances and a majority can still be
reached.
The following diagram shows the architecture of the system described here.
Architecture showing a MySQL 5.6 database deployed to Google Cloud by using DRBD and Compute Engine
Objectives
- Create the cluster instances.
- Install MySQL and DRBD on two of the instances.
- Configure DRBD replication.
- Install Pacemaker on the instances.
- Configure Pacemaker clustering on the instances.
- Create an instance and configure it as a quorum device.
- Test failover.
Costs
Use the pricing calculator to generate a cost estimate based on your projected usage.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get 300ドル in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
  Roles required to select or create a project
  - Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
  - Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
- Verify that billing is enabled for your Google Cloud project.
- Enable the Compute Engine API.
  Roles required to enable APIs
  To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
In this tutorial, you enter commands using Cloud Shell unless otherwise noted.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Getting set up
In this section, you set up a service account, create environment variables, and reserve IP addresses.
Set up a service account for the cluster instances
Open Cloud Shell:
Create the service account:
gcloud iam service-accounts create mysql-instance \
    --display-name "mysql-instance"

Attach the roles needed for this tutorial to the service account:

gcloud projects add-iam-policy-binding ${DEVSHELL_PROJECT_ID} \
    --member=serviceAccount:mysql-instance@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com \
    --role=roles/compute.instanceAdmin.v1

gcloud projects add-iam-policy-binding ${DEVSHELL_PROJECT_ID} \
    --member=serviceAccount:mysql-instance@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com \
    --role=roles/compute.viewer

gcloud projects add-iam-policy-binding ${DEVSHELL_PROJECT_ID} \
    --member=serviceAccount:mysql-instance@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com \
    --role=roles/iam.serviceAccountUser
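If you want to confirm that the bindings were applied, you can list the roles granted to the service account. This check is optional and not part of the original steps:

# Lists the roles bound to the mysql-instance service account.
gcloud projects get-iam-policy ${DEVSHELL_PROJECT_ID} \
    --flatten="bindings[].members" \
    --filter="bindings.members:mysql-instance@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com" \
    --format="table(bindings.role)"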
Create Cloud Shell environment variables
Create a file with the required environment variables for this tutorial:
cat <<EOF > ~/.mysqldrbdrc
# Cluster instance names
DATABASE1_INSTANCE_NAME=database1
DATABASE2_INSTANCE_NAME=database2
QUORUM_INSTANCE_NAME=qdevice
CLIENT_INSTANCE_NAME=mysql-client

# Cluster IP addresses
DATABASE1_INSTANCE_IP="10.140.0.2"
DATABASE2_INSTANCE_IP="10.140.0.3"
QUORUM_INSTANCE_IP="10.140.0.4"
ILB_IP="10.140.0.6"

# Cluster zones and region
DATABASE1_INSTANCE_ZONE="asia-east1-a"
DATABASE2_INSTANCE_ZONE="asia-east1-b"
QUORUM_INSTANCE_ZONE="asia-east1-c"
CLIENT_INSTANCE_ZONE="asia-east1-c"
CLUSTER_REGION="asia-east1"
EOF

Load the environment variables in the current session and set Cloud Shell to automatically load the variables on future sign-ins:

source ~/.mysqldrbdrc
grep -q -F "source ~/.mysqldrbdrc" ~/.bashrc || echo "source ~/.mysqldrbdrc" >> ~/.bashrc
Reserve IP addresses
In Cloud Shell, reserve an internal IP address for each of the three cluster nodes:
gcloud compute addresses create ${DATABASE1_INSTANCE_NAME} ${DATABASE2_INSTANCE_NAME} ${QUORUM_INSTANCE_NAME} \ --region=${CLUSTER_REGION} \ --addresses "${DATABASE1_INSTANCE_IP},${DATABASE2_INSTANCE_IP},${QUORUM_INSTANCE_IP}" \ --subnet=default
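To confirm that the three addresses were reserved, you can optionally list the reserved addresses in the cluster region (this check isn't part of the original steps):

gcloud compute addresses list \
    --filter="region:${CLUSTER_REGION}" \
    --format="table(name, address, status)"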
Creating the Compute Engine instances
In the following steps, the cluster instances use Debian 9, and the client instance uses Ubuntu 16.04 LTS.
In Cloud Shell, create a MySQL instance named
database1in zoneasia-east1-a:gcloud compute instances create ${DATABASE1_INSTANCE_NAME} \ --zone=${DATABASE1_INSTANCE_ZONE} \ --machine-type=n1-standard-2 \ --network-tier=PREMIUM \ --maintenance-policy=MIGRATE \ --image-family=debian-9 \ --image-project=debian-cloud \ --boot-disk-size=50GB \ --boot-disk-type=pd-standard \ --boot-disk-device-name=${DATABASE1_INSTANCE_NAME} \ --create-disk=mode=rw,size=300,type=pd-standard,name=disk-1 \ --private-network-ip=${DATABASE1_INSTANCE_NAME} \ --tags=mysql --service-account=mysql-instance@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com \ --scopes="https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly" \ --metadata="DATABASE1_INSTANCE_IP=${DATABASE1_INSTANCE_IP},DATABASE2_INSTANCE_IP=${DATABASE2_INSTANCE_IP},DATABASE1_INSTANCE_NAME=${DATABASE1_INSTANCE_NAME},DATABASE2_INSTANCE_NAME=${DATABASE2_INSTANCE_NAME},QUORUM_INSTANCE_NAME=${QUORUM_INSTANCE_NAME},DATABASE1_INSTANCE_ZONE=${DATABASE1_INSTANCE_ZONE},DATABASE2_INSTANCE_ZONE=${DATABASE2_INSTANCE_ZONE}"Create a MySQL instance named
database2in zoneasia-east1-b:gcloud compute instances create ${DATABASE2_INSTANCE_NAME} \ --zone=${DATABASE2_INSTANCE_ZONE} \ --machine-type=n1-standard-2 \ --network-tier=PREMIUM \ --maintenance-policy=MIGRATE \ --image-family=debian-9 \ --image-project=debian-cloud \ --boot-disk-size=50GB \ --boot-disk-type=pd-standard \ --boot-disk-device-name=${DATABASE2_INSTANCE_NAME} \ --create-disk=mode=rw,size=300,type=pd-standard,name=disk-2 \ --private-network-ip=${DATABASE2_INSTANCE_NAME} \ --tags=mysql \ --service-account=mysql-instance@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com \ --scopes="https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly" \ --metadata="DATABASE1_INSTANCE_IP=${DATABASE1_INSTANCE_IP},DATABASE2_INSTANCE_IP=${DATABASE2_INSTANCE_IP},DATABASE1_INSTANCE_NAME=${DATABASE1_INSTANCE_NAME},DATABASE2_INSTANCE_NAME=${DATABASE2_INSTANCE_NAME},QUORUM_INSTANCE_NAME=${QUORUM_INSTANCE_NAME},DATABASE1_INSTANCE_ZONE=${DATABASE1_INSTANCE_ZONE},DATABASE2_INSTANCE_ZONE=${DATABASE2_INSTANCE_ZONE}"Create a quorum node for use by Pacemaker in zone
asia-east1-c:gcloud compute instances create ${QUORUM_INSTANCE_NAME} \ --zone=${QUORUM_INSTANCE_ZONE} \ --machine-type=n1-standard-1 \ --network-tier=PREMIUM \ --maintenance-policy=MIGRATE \ --image-family=debian-9 \ --image-project=debian-cloud \ --boot-disk-size=10GB \ --boot-disk-type=pd-standard \ --boot-disk-device-name=${QUORUM_INSTANCE_NAME} \ --private-network-ip=${QUORUM_INSTANCE_NAME}Create a MySQL client instance:
gcloud compute instances create ${CLIENT_INSTANCE_NAME} \ --image-family=ubuntu-1604-lts \ --image-project=ubuntu-os-cloud \ --tags=mysql-client \ --zone=${CLIENT_INSTANCE_ZONE} \ --boot-disk-size=10GB \ --metadata="ILB_IP=${ILB_IP}"
Installing and configuring DRBD
In this section, you install and configure the DRBD packages on the database1
and database2 instances, and then initiate DRBD replication from database1
to database2.
Configure DRBD on database1
In the Google Cloud console, go to the VM instances page:
In the database1 instance row, click SSH to connect to the instance.

Create a file to retrieve and store instance metadata in environment variables:

sudo bash -c cat <<EOF > ~/.varsrc
DATABASE1_INSTANCE_IP=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE1_INSTANCE_IP" -H "Metadata-Flavor: Google")
DATABASE2_INSTANCE_IP=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE2_INSTANCE_IP" -H "Metadata-Flavor: Google")
DATABASE1_INSTANCE_NAME=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE1_INSTANCE_NAME" -H "Metadata-Flavor: Google")
DATABASE2_INSTANCE_NAME=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE2_INSTANCE_NAME" -H "Metadata-Flavor: Google")
DATABASE2_INSTANCE_ZONE=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE2_INSTANCE_ZONE" -H "Metadata-Flavor: Google")
DATABASE1_INSTANCE_ZONE=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE1_INSTANCE_ZONE" -H "Metadata-Flavor: Google")
QUORUM_INSTANCE_NAME=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/QUORUM_INSTANCE_NAME" -H "Metadata-Flavor: Google")
EOF

Load the metadata variables from the file:

source ~/.varsrc

Format the data disk:

sudo bash -c "mkfs.ext4 -m 0 -F -E \
lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb"

For a detailed description of mkfs.ext4 options, see the mkfs.ext4 manpage.

Install DRBD:

sudo apt -y install drbd8-utils

Create the DRBD configuration file:

sudo bash -c 'cat <<EOF > /etc/drbd.d/global_common.conf
global {
 usage-count no;
}
common {
 protocol C;
}
EOF'

Create a DRBD resource file:

sudo bash -c "cat <<EOF > /etc/drbd.d/r0.res
resource r0 {
 meta-disk internal;
 device /dev/drbd0;
 net {
 allow-two-primaries no;
 after-sb-0pri discard-zero-changes;
 after-sb-1pri discard-secondary;
 after-sb-2pri disconnect;
 rr-conflict disconnect;
 }
 on database1 {
 disk /dev/sdb;
 address 10.140.0.2:7789;
 }
 on database2 {
 disk /dev/sdb;
 address 10.140.0.3:7789;
 }
}
EOF"

Load the DRBD kernel module:

sudo modprobe drbd

Clear the contents of the /dev/sdb disk:

sudo dd if=/dev/zero of=/dev/sdb bs=1k count=1024

Create the DRBD resource r0:

sudo drbdadm create-md r0

Bring up DRBD:

sudo drbdadm up r0

Disable DRBD when the system starts, letting the cluster resource management software bring up all necessary services in order:

sudo update-rc.d drbd disable
Configure DRBD on database2
You now install and configure the DRBD packages on the database2 instance.
- Connect to the
database2instance through SSH. Create a
.varsrcfile to retrieve and store instance metadata in environment variables:sudobash-ccat<<EOF > ~/.varsrc DATABASE1_INSTANCE_IP=$(curl-s"http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE1_INSTANCE_IP"-H"Metadata-Flavor: Google") DATABASE2_INSTANCE_IP=$(curl-s"http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE2_INSTANCE_IP"-H"Metadata-Flavor: Google") DATABASE1_INSTANCE_NAME=$(curl-s"http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE1_INSTANCE_NAME"-H"Metadata-Flavor: Google") DATABASE2_INSTANCE_NAME=$(curl-s"http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE2_INSTANCE_NAME"-H"Metadata-Flavor: Google") DATABASE2_INSTANCE_ZONE=$(curl-s"http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE2_INSTANCE_ZONE"-H"Metadata-Flavor: Google") DATABASE1_INSTANCE_ZONE=$(curl-s"http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE1_INSTANCE_ZONE"-H"Metadata-Flavor: Google") QUORUM_INSTANCE_NAME=$(curl-s"http://metadata.google.internal/computeMetadata/v1/instance/attributes/QUORUM_INSTANCE_NAME"-H"Metadata-Flavor: Google") EOFLoad the metadata variables from the file:
source~/.varsrcFormat the data disk:
sudobash-c"mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb"Install the DRBD packages:
sudoapt-yinstalldrbd8-utilsCreate the DRBD configuration file:
sudobash-c'cat <<EOF > /etc/drbd.d/global_common.conf global { usage-count no; } common { protocol C; } EOF'Create a DRBD resource file:
sudobash-c"cat <<EOF > /etc/drbd.d/r0.res resource r0 { meta-disk internal; device /dev/drbd0; net { allow-two-primaries no; after-sb-0pri discard-zero-changes; after-sb-1pri discard-secondary; after-sb-2pri disconnect; rr-conflict disconnect; } on ${DATABASE1_INSTANCE_NAME} { disk /dev/sdb; address ${DATABASE1_INSTANCE_IP}:7789; } on ${DATABASE2_INSTANCE_NAME} { disk /dev/sdb; address ${DATABASE2_INSTANCE_IP}:7789; } } EOF"Load the DRBD kernel module:
sudomodprobedrbdClear out the
/dev/sdbdisk:sudoddif=/dev/zeroof=/dev/sdbbs=1kcount=1024Create the DRBD resource
r0:sudodrbdadmcreate-mdr0Bring up DRBD:
sudodrbdadmupr0Disable DRBD when the system starts, letting the cluster resource management software bring up all necessary services in order:
sudoupdate-rc.ddrbddisable
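At this point, both nodes should see each other, but neither is primary yet. As an optional check before you promote database1, you can inspect the DRBD state on either node; the connection state should read Connected, with both roles shown as Secondary and the data marked Inconsistent, because the initial synchronization hasn't run yet:

sudo cat /proc/drbd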
Initiate DRBD replication from database1 to database2
- Connect to the database1 instance through SSH.

Overwrite all r0 resources on the primary node:

sudo drbdadm -- --overwrite-data-of-peer primary r0
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/drbd0

Verify the status of DRBD:

sudo cat /proc/drbd | grep ============

The output looks like this:

[===================>] sync'ed:100.0% (208/307188)M

Mount /dev/drbd0 to /srv:

sudo mount -o discard,defaults /dev/drbd0 /srv
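The initial synchronization of the 300 GB volume continues in the background and can take a while. If you want to watch its progress before moving on (optional, and assuming the watch utility is available on the instance), you can run:

watch -n10 'sudo cat /proc/drbd'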
Installing MySQL and Pacemaker
In this section, you install MySQL and Pacemaker on each instance.
Install MySQL on database1
- Connect to the
database1instance through SSH. Update the APT repository with the MySQL 5.6 package definitions:
sudobash-c'cat <<EOF > /etc/apt/sources.list.d/mysql.list deb http://repo.mysql.com/apt/debian/ stretch mysql-5.6\ndeb-src http://repo.mysql.com/apt/debian/ stretch mysql-5.6 EOF'Add the GPG keys to the APT
repository:

wget -O /tmp/RPM-GPG-KEY-mysql https://repo.mysql.com/RPM-GPG-KEY-mysql
sudo apt-key add /tmp/RPM-GPG-KEY-mysql

Update the package list:
sudoaptupdateInstall the MySQL server:
sudoapt-yinstallmysql-serverWhen prompted for a password, enter
DRBDha2.Stop the MySQL server:
sudo/etc/init.d/mysqlstopCreate the MySQL configuration file:
sudobash-c'cat <<EOF > /etc/mysql/mysql.conf.d/my.cnf [mysqld] bind-address = 0.0.0.0 # You may want to listen at localhost at the beginning datadir = /var/lib/mysql tmpdir = /srv/tmp user = mysql EOF'Create a temporary directory for the MySQL server (configured in
mysql.conf):sudomkdir/srv/tmp sudochmod1777/srv/tmpMove all MySQL data into the DRBD directory
/srv/mysql:sudomv/var/lib/mysql/srv/mysqlLink
/var/lib/mysqlto/srv/mysqlunder the DRBD replicated storage volume:sudoln-s/srv/mysql/var/lib/mysqlChange the
/srv/mysqlowner to amysqlprocess:sudochown-Rmysql:mysql/srv/mysqlRemove
InnoDBinitial data to make sure the disk is as clean as possible:sudobash-c"cd /srv/mysql && rm ibdata1 && rm ib_logfile*"InnoDB is a storage engine for the MySQL database management system.
Start MySQL:
sudo/etc/init.d/mysqlstartGrant access to root user for remote connections in order to test the deployment later:
mysql-uroot-pDRBDha2-e"GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'DRBDha2' WITH GRANT OPTION;"Disable automatic MySQL startup, which cluster resource management takes care of:
sudoupdate-rc.d-fmysqldisable
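Before moving on, you might want to confirm that the MySQL data directory now lives on the DRBD-replicated volume (an optional check, not part of the original steps). The symlink should point to /srv/mysql, and /srv should be mounted on /dev/drbd0:

ls -ld /var/lib/mysql
df -h /srv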
Install Pacemaker on database1
Load the metadata variables from the
.varsrcfile that you created earlier:source~/.varsrcStop the MySQL server:
sudo/etc/init.d/mysqlstopInstall Pacemaker:
sudoapt-yinstallpcsEnable
pcsd,corosync, andpacemakerat system start on the primary instance:sudoupdate-rc.d-fpcsdenable sudoupdate-rc.d-fcorosyncenable sudoupdate-rc.d-fpacemakerenableConfigure
corosyncto start beforepacemaker:sudoupdate-rc.dcorosyncdefaults2020 sudoupdate-rc.dpacemakerdefaults3010Set the cluster user password to
haCLUSTER3for authentication:sudobash-c"echo hacluster:haCLUSTER3 | chpasswd"Run the
corosync-keygenscript to generate a 128-bit cluster authorization key and write it into/etc/corosync/authkey:sudocorosync-keygen-lCopy
authkeyto thedatabase2instance. When prompted for a passphrase, pressEnter:sudochmod444/etc/corosync/authkey gcloudbetacomputescp/etc/corosync/authkey${DATABASE2_INSTANCE_NAME}:~/authkey--zone=${DATABASE2_INSTANCE_ZONE}--internal-ip sudochmod400/etc/corosync/authkeyCreate a Corosync cluster configuration file:
sudobash-c"cat <<EOF > /etc/corosync/corosync.conf totem { version: 2 cluster_name: mysql_cluster transport: udpu interface { ringnumber: 0 Bindnetaddr: ${DATABASE1_INSTANCE_IP} broadcast: yes mcastport: 5405 } } quorum { provider: corosync_votequorum two_node: 1 } nodelist { node { ring0_addr: ${DATABASE1_INSTANCE_NAME} name: ${DATABASE1_INSTANCE_NAME} nodeid: 1 } node { ring0_addr: ${DATABASE2_INSTANCE_NAME} name: ${DATABASE2_INSTANCE_NAME} nodeid: 2 } } logging { to_logfile: yes logfile: /var/log/corosync/corosync.log timestamp: on } EOF"The
totemsection configures the Totem protocol for reliable communication. Corosync uses this communication to control cluster membership, and it specifies how the cluster members should communicate with each other.The important settings in the setup are as follows:
- transport: Specifies unicast mode (udpu).
- Bindnetaddr: Specifies the network address to which Corosync binds.
- nodelist: Defines the nodes in the cluster, and how they can be reached—in this case, the database1 and database2 nodes.
- quorum/two_node: By default, in a two-node cluster, no node will acquire a quorum. You can override this by specifying the value "1" for two_node in the quorum section.
This setup lets you configure the cluster and prepare it for later when you add a third node that will be a quorum device.
Create the service directory for
corosync:sudomkdir-p/etc/corosync/service.dConfigure
corosyncto be aware of Pacemaker:sudobash-c'cat <<EOF > /etc/corosync/service.d/pcmk service { name: pacemaker ver: 1 } EOF'Enable the
corosyncservice by default:sudobash-c'cat <<EOF > /etc/default/corosync # Path to corosync.conf COROSYNC_MAIN_CONFIG_FILE=/etc/corosync/corosync.conf # Path to authfile COROSYNC_TOTEM_AUTHKEY_FILE=/etc/corosync/authkey # Enable service by default START=yes EOF'Restart the
corosyncandpacemakerservices:sudoservicecorosyncrestart sudoservicepacemakerrestartInstall the Corosync quorum device package:
sudoapt-yinstallcorosync-qdeviceInstall a shell script to handle DRBD failure events:
sudobash-c'cat << 'EOF' > /var/lib/pacemaker/drbd_cleanup.sh #!/bin/sh if [ -z \$CRM_alert_version ]; then echo "\0ドル must be run by Pacemaker version 1.1.15 or later" exit 0 fi tstamp="\$CRM_alert_timestamp: " case \$CRM_alert_kind in resource) if [ \${CRM_alert_interval} = "0" ]; then CRM_alert_interval="" else CRM_alert_interval=" (\${CRM_alert_interval})" fi if [ \${CRM_alert_target_rc} = "0" ]; then CRM_alert_target_rc="" else CRM_alert_target_rc=" (target: \${CRM_alert_target_rc})" fi case \${CRM_alert_desc} in Cancelled) ;; *) echo "\${tstamp}Resource operation "\${CRM_alert_task}\${CRM_alert_interval}" for "\${CRM_alert_rsc}" on "\${CRM_alert_node}": \${CRM_alert_desc}\${CRM_alert_target_rc}" >> "\${CRM_alert_recipient}" if [ "\${CRM_alert_task}" = "stop" ] && [ "\${CRM_alert_desc}" = "Timed Out" ]; then echo "Executing recovering..." >> "\${CRM_alert_recipient}" pcs resource cleanup \${CRM_alert_rsc} fi ;; esac ;; *) echo "\${tstamp}Unhandled \$CRM_alert_kind alert" >> "\${CRM_alert_recipient}" env | grep CRM_alert >> "\${CRM_alert_recipient}" ;; esac EOF' sudochmod0755/var/lib/pacemaker/drbd_cleanup.sh sudotouch/var/log/pacemaker_drbd_file.log sudochownhacluster:haclient/var/log/pacemaker_drbd_file.log
Install MySQL on database2
- Connect to the
database2instance through SSH. Update the APT repository with the MySQL 5.6 package:
sudobash-c'cat <<EOF > /etc/apt/sources.list.d/mysql.list deb http://repo.mysql.com/apt/debian/ stretch mysql-5.6\ndeb-src http://repo.mysql.com/apt/debian/ stretch mysql-5.6 EOF'Add the GPG keys to the APT repository:
wget-O/tmp/RPM-GPG-KEY-mysqlhttps://repo.mysql.com/RPM-GPG-KEY-mysql sudoapt-keyadd/tmp/RPM-GPG-KEY-mysqlUpdate the package list:
sudoaptupdateInstall the MySQL server:
sudoapt-yinstallmysql-serverWhen prompted for a password, enter
DRBDha2.Stop the MySQL server:
sudo/etc/init.d/mysqlstopCreate the MySQL configuration file:
sudobash-c'cat <<EOF > /etc/mysql/mysql.conf.d/my.cnf [mysqld] bind-address = 0.0.0.0 # You may want to listen at localhost at the beginning datadir = /var/lib/mysql tmpdir = /srv/tmp user = mysql EOF'Remove data under
/var/lib/mysqland add a symbolic link to the mount point target for the replicated DRBD volume. The DRBD volume (/dev/drbd0) will be mounted at/srvonly when a failover occurs.sudorm-rf/var/lib/mysql sudoln-s/srv/mysql/var/lib/mysqlDisable automatic MySQL startup, which cluster resource management takes care of:
sudoupdate-rc.d-fmysqldisable
Install Pacemaker on database2
Load the metadata variables from the
.varsrcfile:source~/.varsrcInstall Pacemaker:
sudoapt-yinstallpcsMove the Corosync
authkeyfile that you copied before to/etc/corosync/:sudomv~/authkey/etc/corosync/ sudochownroot:/etc/corosync/authkey sudochmod400/etc/corosync/authkeyEnable
pcsd,corosync, andpacemakerat system start on the standby instance:sudoupdate-rc.d-fpcsdenable sudoupdate-rc.d-fcorosyncenable sudoupdate-rc.d-fpacemakerenableConfigure
corosyncto start beforepacemaker:sudoupdate-rc.dcorosyncdefaults2020 sudoupdate-rc.dpacemakerdefaults3010Set the cluster user password for authentication. The password is the same one (
haCLUSTER3) you used for thedatabase1instance.sudobash-c"echo hacluster:haCLUSTER3 | chpasswd"Create the
corosyncconfiguration file:sudobash-c"cat <<EOF > /etc/corosync/corosync.conf totem { version: 2 cluster_name: mysql_cluster transport: udpu interface { ringnumber: 0 Bindnetaddr: ${DATABASE2_INSTANCE_IP} broadcast: yes mcastport: 5405 } } quorum { provider: corosync_votequorum two_node: 1 } nodelist { node { ring0_addr: ${DATABASE1_INSTANCE_NAME} name: ${DATABASE1_INSTANCE_NAME} nodeid: 1 } node { ring0_addr: ${DATABASE2_INSTANCE_NAME} name: ${DATABASE2_INSTANCE_NAME} nodeid: 2 } } logging { to_logfile: yes logfile: /var/log/corosync/corosync.log timestamp: on } EOF"Create the Corosync service directory:
sudomkdir/etc/corosync/service.dConfigure
corosyncto be aware of Pacemaker:sudobash-c'cat <<EOF > /etc/corosync/service.d/pcmk service { name: pacemaker ver: 1 } EOF'Enable the
corosyncservice by default:sudobash-c'cat <<EOF > /etc/default/corosync # Path to corosync.conf COROSYNC_MAIN_CONFIG_FILE=/etc/corosync/corosync.conf # Path to authfile COROSYNC_TOTEM_AUTHKEY_FILE=/etc/corosync/authkey # Enable service by default START=yes EOF'Restart the
corosyncandpacemakerservices:sudoservicecorosyncrestart sudoservicepacemakerrestartInstall the Corosync quorum device package:
sudoapt-yinstallcorosync-qdeviceInstall a shell script to handle DRBD failure events:
sudobash-c'cat << 'EOF' > /var/lib/pacemaker/drbd_cleanup.sh #!/bin/sh if [ -z \$CRM_alert_version ]; then echo "\0ドル must be run by Pacemaker version 1.1.15 or later" exit 0 fi tstamp="\$CRM_alert_timestamp: " case \$CRM_alert_kind in resource) if [ \${CRM_alert_interval} = "0" ]; then CRM_alert_interval="" else CRM_alert_interval=" (\${CRM_alert_interval})" fi if [ \${CRM_alert_target_rc} = "0" ]; then CRM_alert_target_rc="" else CRM_alert_target_rc=" (target: \${CRM_alert_target_rc})" fi case \${CRM_alert_desc} in Cancelled) ;; *) echo "\${tstamp}Resource operation "\${CRM_alert_task}\${CRM_alert_interval}" for "\${CRM_alert_rsc}" on "\${CRM_alert_node}": \${CRM_alert_desc}\${CRM_alert_target_rc}" >> "\${CRM_alert_recipient}" if [ "\${CRM_alert_task}" = "stop" ] && [ "\${CRM_alert_desc}" = "Timed Out" ]; then echo "Executing recovering..." >> "\${CRM_alert_recipient}" pcs resource cleanup \${CRM_alert_rsc} fi ;; esac ;; *) echo "\${tstamp}Unhandled \$CRM_alert_kind alert" >> "\${CRM_alert_recipient}" env | grep CRM_alert >> "\${CRM_alert_recipient}" ;; esac EOF' sudochmod0755/var/lib/pacemaker/drbd_cleanup.sh sudotouch/var/log/pacemaker_drbd_file.log sudochownhacluster:haclient/var/log/pacemaker_drbd_file.logCheck the Corosync cluster status:
sudocorosync-cmapctl|grep"members...ip"The output looks like this:
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(10.140.0.2) runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(10.140.0.3)
Starting the cluster
- Connect to the database2 instance through SSH.

Load the metadata variables from the .varsrc file:

source ~/.varsrc

Authenticate against the cluster nodes:

sudo pcs cluster auth --name mysql_cluster ${DATABASE1_INSTANCE_NAME} ${DATABASE2_INSTANCE_NAME} -u hacluster -p haCLUSTER3

Start the cluster:

sudo pcs cluster start --all

Verify the cluster status:

sudo pcs status

The output looks like this:
Cluster name: mysql_cluster WARNING: no stonith devices and stonith-enabled is not false Stack: corosync Current DC: database2 (version 1.1.16-94ff4df) - partition with quorum Last updated: Sat Nov 3 07:24:53 2018 Last change: Sat Nov 3 07:17:17 2018 by hacluster via crmd on database2 2 nodes configured 0 resources configured Online: [ database1 database2 ] No resources Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled
Configuring Pacemaker to manage cluster resources
Next, you configure Pacemaker with the DRBD, disk, MySQL, and quorum resources.
- Connect to the database1 instance through SSH.

Use the Pacemaker pcs utility to queue several changes into a file and later push those changes to the Cluster Information Base (CIB) atomically:

sudo pcs cluster cib clust_cfg

Disable STONITH, because you'll deploy the quorum device later:

sudo pcs -f clust_cfg property set stonith-enabled=false

Disable the quorum-related settings. You'll set up the quorum device node later.

sudo pcs -f clust_cfg property set no-quorum-policy=stop

Prevent Pacemaker from moving back resources after a recovery:

sudo pcs -f clust_cfg resource defaults resource-stickiness=200

Create the DRBD resource in the cluster:

sudo pcs -f clust_cfg resource create mysql_drbd ocf:linbit:drbd \
    drbd_resource=r0 \
    op monitor role=Master interval=110 timeout=30 \
    op monitor role=Slave interval=120 timeout=30 \
    op start timeout=120 \
    op stop timeout=60

Make sure that only one primary role is assigned to the DRBD resource:

sudo pcs -f clust_cfg resource master primary_mysql mysql_drbd \
    master-max=1 master-node-max=1 \
    clone-max=2 clone-node-max=1 \
    notify=true

Create the file system resource to mount the DRBD disk:

sudo pcs -f clust_cfg resource create mystore_FS Filesystem \
    device="/dev/drbd0" \
    directory="/srv" \
    fstype="ext4"

Configure the cluster to colocate the DRBD resource with the disk resource on the same VM:

sudo pcs -f clust_cfg constraint colocation add mystore_FS with primary_mysql INFINITY with-rsc-role=Master

Configure the cluster to bring up the disk resource only after the DRBD primary is promoted:

sudo pcs -f clust_cfg constraint order promote primary_mysql then start mystore_FS

Create a MySQL service:

sudo pcs -f clust_cfg resource create mysql_service ocf:heartbeat:mysql \
    binary="/usr/bin/mysqld_safe" \
    config="/etc/mysql/my.cnf" \
    datadir="/var/lib/mysql" \
    pid="/var/run/mysqld/mysql.pid" \
    socket="/var/run/mysqld/mysql.sock" \
    additional_parameters="--bind-address=0.0.0.0" \
    op start timeout=60s \
    op stop timeout=60s \
    op monitor interval=20s timeout=30s

Configure the cluster to colocate the MySQL resource with the disk resource on the same VM:

sudo pcs -f clust_cfg constraint colocation add mysql_service with mystore_FS INFINITY

Ensure that the DRBD file system precedes the MySQL service in the startup order:

sudo pcs -f clust_cfg constraint order mystore_FS then mysql_service

Create the alert agent, and add the path to the log file as its recipient:

sudo pcs -f clust_cfg alert create id=drbd_cleanup_file description="Monitor DRBD events and perform post cleanup" path=/var/lib/pacemaker/drbd_cleanup.sh
sudo pcs -f clust_cfg alert recipient add drbd_cleanup_file id=logfile value=/var/log/pacemaker_drbd_file.log

Commit the changes to the cluster:

sudo pcs cluster cib-push clust_cfg

Verify that all the resources are online:

sudo pcs status

The output looks like this:
Online: [ database1 database2 ] Full list of resources: Master/Slave Set: primary_mysql [mysql_drbd] Masters: [ database1 ] Slaves: [ database2 ] mystore_FS (ocf::heartbeat:Filesystem): Started database1 mysql_service (ocf::heartbeat:mysql): Started database1
Configuring a quorum device
- Connect to the
qdeviceinstance through SSH. Install
pcsandcorosync-qnetd:sudoaptupdate && sudoapt-yinstallpcscorosync-qnetdStart the Pacemaker or Corosync configuration system daemon (
pcsd) service and enable it on system start:sudoservicepcsdstart sudoupdate-rc.dpcsdenableSet the cluster user password (
haCLUSTER3) for authentication:sudobash-c"echo hacluster:haCLUSTER3 | chpasswd"Check the quorum device status:
sudopcsqdevicestatusnet--fullThe output looks like this:
QNetd address: *:5403 TLS: Supported (client certificate required) Connected clients: 0 Connected clusters: 0 Maximum send/receive size: 32768/32768 bytes
Configure the quorum device settings on database1
- Connect to the
database1node through SSH. Load the metadata variables from the
.varsrcfile:source~/.varsrcAuthenticate the quorum device node for the cluster:
sudopcsclusterauth--namemysql_cluster${QUORUM_INSTANCE_NAME}-uhacluster-phaCLUSTER3Add the quorum device to the cluster. Use the
ffsplitalgorithm, which ensures that the active node will be decided based on 50% of the votes or more:sudopcsquorumdeviceaddmodelnethost=${QUORUM_INSTANCE_NAME}algorithm=ffsplitAdd the quorum setting to
corosync.conf:sudobash-c"cat <<EOF > /etc/corosync/corosync.conf totem { version: 2 cluster_name: mysql_cluster transport: udpu interface { ringnumber: 0 Bindnetaddr: ${DATABASE1_INSTANCE_IP} broadcast: yes mcastport: 5405 } } quorum { provider: corosync_votequorum device { votes: 1 model: net net { tls: on host: ${QUORUM_INSTANCE_NAME} algorithm: ffsplit } } } nodelist { node { ring0_addr: ${DATABASE1_INSTANCE_NAME} name: ${DATABASE1_INSTANCE_NAME} nodeid: 1 } node { ring0_addr: ${DATABASE2_INSTANCE_NAME} name: ${DATABASE2_INSTANCE_NAME} nodeid: 2 } } logging { to_logfile: yes logfile: /var/log/corosync/corosync.log timestamp: on } EOF"Restart the
corosyncservice to reload the new quorum device setting:sudoservicecorosyncrestartStart the
corosyncquorum device daemon and bring it up at system start:sudoservicecorosync-qdevicestart sudoupdate-rc.dcorosync-qdevicedefaults
Configure the quorum device settings on database2
- Connect to the
database2node through SSH. Load the metadata variables from the
.varsrcfile:source~/.varsrcAdd a quorum setting to
corosync.conf:sudobash-c"cat <<EOF > /etc/corosync/corosync.conf totem { version: 2 cluster_name: mysql_cluster transport: udpu interface { ringnumber: 0 Bindnetaddr: ${DATABASE2_INSTANCE_IP} broadcast: yes mcastport: 5405 } } quorum { provider: corosync_votequorum device { votes: 1 model: net net { tls: on host: ${QUORUM_INSTANCE_NAME} algorithm: ffsplit } } } nodelist { node { ring0_addr: ${DATABASE1_INSTANCE_NAME} name: ${DATABASE1_INSTANCE_NAME} nodeid: 1 } node { ring0_addr: ${DATABASE2_INSTANCE_NAME} name: ${DATABASE2_INSTANCE_NAME} nodeid: 2 } } logging { to_logfile: yes logfile: /var/log/corosync/corosync.log timestamp: on } EOF"Restart the
corosyncservice to reload the new quorum device setting:sudoservicecorosyncrestartStart the Corosync quorum device daemon and configure it to bring it up at system start:
sudoservicecorosync-qdevicestart sudoupdate-rc.dcorosync-qdevicedefaults
Verifying the cluster status
The next step is to verify that the cluster resources are online.
- Connect to the
database1instance through SSH. Verify the cluster status:
sudopcsstatusThe output looks like this:
Cluster name: mysql_cluster Stack: corosync Current DC: database1 (version 1.1.16-94ff4df) - partition with quorum Last updated: Sun Nov 4 01:49:18 2018 Last change: Sat Nov 3 15:48:21 2018 by root via cibadmin on database1 2 nodes configured 4 resources configured Online: [ database1 database2 ] Full list of resources: Master/Slave Set: primary_mysql [mysql_drbd] Masters: [ database1 ] Slaves: [ database2 ] mystore_FS (ocf::heartbeat:Filesystem): Started database1 mysql_service (ocf::heartbeat:mysql): Started database1 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled
Show the quorum status:
sudopcsquorumstatusThe output looks like this:
Quorum information ------------------ Date: Sun Nov 4 01:48:25 2018 Quorum provider: corosync_votequorum Nodes: 2 Node ID: 1 Ring ID: 1/24 Quorate: Yes Votequorum information ---------------------- Expected votes: 3 Highest expected: 3 Total votes: 3 Quorum: 2 Flags: Quorate Qdevice Membership information ---------------------- Nodeid Votes Qdevice Name 1 1 A,V,NMW database1 (local) 2 1 A,V,NMW database2 0 1 Qdevice
Show the quorum device status:
sudopcsquorumdevicestatusThe output looks like this:
Qdevice information ------------------- Model: Net Node ID: 1 Configured node list: 0 Node ID = 1 1 Node ID = 2 Membership node list: 1, 2 Qdevice-net information ---------------------- Cluster name: mysql_cluster QNetd host: qdevice:5403 Algorithm: Fifty-Fifty split Tie-breaker: Node with lowest node ID State: Connected
Configuring an internal load balancer as the cluster IP
Open Cloud Shell:
Create an unmanaged instance group and add the
database1instance to it:gcloud compute instance-groups unmanaged create ${DATABASE1_INSTANCE_NAME}-instance-group \ --zone=${DATABASE1_INSTANCE_ZONE} \ --description="${DATABASE1_INSTANCE_NAME} unmanaged instance group" gcloud compute instance-groups unmanaged add-instances ${DATABASE1_INSTANCE_NAME}-instance-group \ --zone=${DATABASE1_INSTANCE_ZONE} \ --instances=${DATABASE1_INSTANCE_NAME}Create an unmanaged instance group and add the
database2instance to it:gcloud compute instance-groups unmanaged create ${DATABASE2_INSTANCE_NAME}-instance-group \ --zone=${DATABASE2_INSTANCE_ZONE} \ --description="${DATABASE2_INSTANCE_NAME} unmanaged instance group" gcloud compute instance-groups unmanaged add-instances ${DATABASE2_INSTANCE_NAME}-instance-group \ --zone=${DATABASE2_INSTANCE_ZONE} \ --instances=${DATABASE2_INSTANCE_NAME}Create a health check for
port 3306:gcloud compute health-checks create tcp mysql-backend-healthcheck \ --port 3306Create a regional internal backend service:
gcloud compute backend-services create mysql-ilb \ --load-balancing-scheme internal \ --region ${CLUSTER_REGION} \ --health-checks mysql-backend-healthcheck \ --protocol tcpAdd the two instance groups as backends to the backend service:
gcloud compute backend-services add-backend mysql-ilb \ --instance-group ${DATABASE1_INSTANCE_NAME}-instance-group \ --instance-group-zone ${DATABASE1_INSTANCE_ZONE} \ --region ${CLUSTER_REGION} gcloud compute backend-services add-backend mysql-ilb \ --instance-group ${DATABASE2_INSTANCE_NAME}-instance-group \ --instance-group-zone ${DATABASE2_INSTANCE_ZONE} \ --region ${CLUSTER_REGION}Create a forwarding rule for the load balancer:
gcloud compute forwarding-rules create mysql-ilb-forwarding-rule \ --load-balancing-scheme internal \ --ports 3306 \ --network default \ --subnet default \ --region ${CLUSTER_REGION} \ --address ${ILB_IP} \ --backend-service mysql-ilbCreate a firewall rule to allow an internal load balancer health check:
gcloud compute firewall-rules create allow-ilb-healthcheck \ --direction=INGRESS --network=default \ --action=ALLOW --rules=tcp:3306 \ --source-ranges=130.211.0.0/22,35.191.0.0/16 --target-tags=mysqlTo check the status of your load balancer, go to the Load balancing page in the Google Cloud console.
Click mysql-ilb.
Because the cluster allows only one instance to run MySQL at any given time, only one instance is healthy from the internal load balancer's perspective.
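If you prefer to check backend health from Cloud Shell instead of the console, the following command (optional, assuming the environment variables from earlier are loaded) shows which backend is currently passing the health check:

gcloud compute backend-services get-health mysql-ilb \
    --region ${CLUSTER_REGION}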
Connecting to the cluster from the MySQL client
- Connect to the mysql-client instance through SSH.

Update the package definitions:

sudo apt-get update

Install the MySQL client:

sudo apt-get install -y mysql-client

Create a script file that creates and populates a table with sample data:

cat <<EOF > db_creation.sql
CREATE DATABASE source_db;
use source_db;
CREATE TABLE source_table
(
    id BIGINT NOT NULL AUTO_INCREMENT,
    timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    event_data float DEFAULT NULL,
    PRIMARY KEY (id)
);
DELIMITER $$
CREATE PROCEDURE simulate_data()
BEGIN
DECLARE i INT DEFAULT 0;
WHILE i < 100 DO
    INSERT INTO source_table (event_data) VALUES (ROUND(RAND()*15000,2));
    SET i = i + 1;
END WHILE;
END$$
DELIMITER ;
CALL simulate_data()
EOF

Create the table:

ILB_IP=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/ILB_IP" -H "Metadata-Flavor: Google")
mysql -u root -pDRBDha2 "-h${ILB_IP}" < db_creation.sql
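As an optional sanity check, you can confirm that the stored procedure inserted the expected 100 rows:

mysql -u root -pDRBDha2 "-h${ILB_IP}" -e "SELECT COUNT(*) FROM source_db.source_table;"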
Testing the cluster
To test the HA capability of the deployed cluster, you can perform the following tests:
- Shut down the database1 instance to test whether the primary database can fail over to the database2 instance.
- Start the database1 instance to see if database1 can successfully rejoin the cluster.
- Shut down the database2 instance to test whether the primary database can fail over to the database1 instance.
- Start the database2 instance to see whether database2 can successfully rejoin the cluster and whether the database1 instance still keeps the primary role.
- Create a network partition between database1 and database2 to simulate a split-brain issue.
Open Cloud Shell:
Stop the
database1instance:gcloud compute instances stop ${DATABASE1_INSTANCE_NAME} \ --zone=${DATABASE1_INSTANCE_ZONE}Check the status of the cluster:
gcloud compute ssh ${DATABASE2_INSTANCE_NAME} \ --zone=${DATABASE2_INSTANCE_ZONE} \ --command="sudo pcs status"The output looks like the following. Verify that the configuration changes you made took place:
2 nodes configured 4 resources configured Online: [ database2 ] OFFLINE: [ database1 ] Full list of resources: Master/Slave Set: primary_mysql [mysql_drbd] Masters: [ database2 ] Stopped: [ database1 ] mystore_FS (ocf::heartbeat:Filesystem): Started database2 mysql_service (ocf::heartbeat:mysql): Started database2 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled
Start the
database1instance:gcloud compute instances start ${DATABASE1_INSTANCE_NAME} \ --zone=${DATABASE1_INSTANCE_ZONE}Check the status of the cluster:
gcloud compute ssh ${DATABASE1_INSTANCE_NAME} \ --zone=${DATABASE1_INSTANCE_ZONE} \ --command="sudo pcs status"The output looks like this:
2 nodes configured 4 resources configured Online: [ database1 database2 ] Full list of resources: Master/Slave Set: primary_mysql [mysql_drbd] Masters: [ database2 ] Slaves: [ database1 ] mystore_FS (ocf::heartbeat:Filesystem): Started database2 mysql_service (ocf::heartbeat:mysql): Started database2 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled
Stop the
database2instance:gcloud compute instances stop ${DATABASE2_INSTANCE_NAME} \ --zone=${DATABASE2_INSTANCE_ZONE}Check the status of the cluster:
gcloud compute ssh ${DATABASE1_INSTANCE_NAME} \ --zone=${DATABASE1_INSTANCE_ZONE} \ --command="sudo pcs status"The output looks like this:
2 nodes configured 4 resources configured Online: [ database1 ] OFFLINE: [ database2 ] Full list of resources: Master/Slave Set: primary_mysql [mysql_drbd] Masters: [ database1 ] Stopped: [ database2 ] mystore_FS (ocf::heartbeat:Filesystem): Started database1 mysql_service (ocf::heartbeat:mysql): Started database1 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled
Start the
database2instance:gcloud compute instances start ${DATABASE2_INSTANCE_NAME} \ --zone=${DATABASE2_INSTANCE_ZONE}Check the status of the cluster:
gcloud compute ssh ${DATABASE1_INSTANCE_NAME} \ --zone=${DATABASE1_INSTANCE_ZONE} \ --command="sudo pcs status"The output looks like this:
2 nodes configured 4 resources configured Online: [ database1 database2 ] Full list of resources: Master/Slave Set: primary_mysql [mysql_drbd] Masters: [ database1 ] Slaves: [ database2 ] mystore_FS (ocf::heartbeat:Filesystem): Started database1 mysql_service (ocf::heartbeat:mysql): Started database1 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled
Create a network partition between
database1anddatabase2:gcloud compute firewall-rules create block-comms \ --description="no MySQL communications" \ --action=DENY \ --rules=all \ --source-tags=mysql \ --target-tags=mysql \ --priority=800After a couple of minutes, check the status of the cluster. Note how
database1 keeps its primary role because, under a network partition, the tie-breaker favors the node with the lowest node ID. In the meantime, the database2 MySQL service is stopped. This quorum mechanism avoids the split-brain issue when the network partition occurs.

gcloud compute ssh ${DATABASE1_INSTANCE_NAME} \
    --zone=${DATABASE1_INSTANCE_ZONE} \
    --command="sudo pcs status"

The output looks like this:
2 nodes configured 4 resources configured Online: [ database1 ] OFFLINE: [ database2 ] Full list of resources: Master/Slave Set: primary_mysql [mysql_drbd] Masters: [ database1 ] Stopped: [ database2 ] mystore_FS (ocf::heartbeat:Filesystem): Started database1 mysql_service (ocf::heartbeat:mysql): Started database1
Delete the network firewall rule to remove the network partition. (Press
Ywhen prompted.)gcloud compute firewall-rules delete block-commsVerify that the cluster status is back to normal:
gcloud compute ssh ${DATABASE1_INSTANCE_NAME} \ --zone=${DATABASE1_INSTANCE_ZONE} \ --command="sudo pcs status"The output looks like this:
2 nodes configured 4 resources configured Online: [ database1 database2 ] Full list of resources: Master/Slave Set: primary_mysql [mysql_drbd] Masters: [ database1 ] Slaves: [ database2 ] mystore_FS (ocf::heartbeat:Filesystem): Started database1 mysql_service (ocf::heartbeat:mysql): Started database1
Connect to the mysql-client instance through SSH.

In your shell, query the table you created previously:

ILB_IP=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/ILB_IP" -H "Metadata-Flavor: Google")
mysql -u root "-h${ILB_IP}" -pDRBDha2 -e "select * from source_db.source_table LIMIT 10"

The output should list 10 records of the following form, verifying the data consistency in the cluster:
+----+---------------------+------------+ | id | timestamp | event_data | +----+---------------------+------------+ | 1 | 2018年11月27日 21:00:09 | 1279.06 | | 2 | 2018年11月27日 21:00:09 | 4292.64 | | 3 | 2018年11月27日 21:00:09 | 2626.01 | | 4 | 2018年11月27日 21:00:09 | 252.13 | | 5 | 2018年11月27日 21:00:09 | 8382.64 | | 6 | 2018年11月27日 21:00:09 | 11156.8 | | 7 | 2018年11月27日 21:00:09 | 636.1 | | 8 | 2018年11月27日 21:00:09 | 14710.1 | | 9 | 2018年11月27日 21:00:09 | 11642.1 | | 10 | 2018年11月27日 21:00:09 | 14080.3 | +----+---------------------+------------+
Failover sequence
If the primary node in the cluster goes down, the failover sequence looks like this:
- Both the quorum device and the standby node lose connectivity with the primary node.
- The quorum device votes for the standby node, and the standby node votes for itself.
- Quorum is acquired by the standby node.
- The standby node is promoted to primary.
- The new primary node does the following:
- Promotes DRBD to primary
- Mounts the MySQL data disk from DRBD
- Starts MySQL
- Becomes healthy for the load balancer
- The load balancer starts sending traffic to the new primary node.
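If you want to watch a failover from the client's point of view, the following sketch (run on the mysql-client instance, and assuming the ILB_IP variable is set as in the earlier client steps) polls the cluster through the internal load balancer every two seconds. During a failover, you should see a short window of failed connections before the new primary starts answering:

# Poll the cluster IP; @@hostname shows which node is currently serving traffic.
while true; do
  mysql -u root -pDRBDha2 "-h${ILB_IP}" -e "SELECT @@hostname, NOW();" 2>/dev/null \
    || echo "$(date +%T) connection failed (failover in progress?)"
  sleep 2
done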
Clean up
Delete the project
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
What's next
- Read more about DRBD.
- Read more about Pacemaker.
- Read more about Corosync Cluster Engine.
- For more advanced MySQL server 5.6 settings, see the MySQL server administration manual.
- If you want to set up remote access to MySQL, see How to set up remote access to MySQL on Compute Engine.
- For more information about MySQL, see the official MySQL documentation.
- Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.