Manage Persistent Disk Asynchronous Replication
This document describes how to start and stop Asynchronous Replication.
Asynchronous Replication is useful for low-RPO, low-RTO disaster recovery. To learn more about asynchronous replication, see About Asynchronous Replication.
Limitations
- A primary disk can only replicate to one secondary disk at a time.
- After replication stops, you can't resume replication to the same disk. You must create a new secondary disk and restart replication.
- Secondary disks can't be attached, deleted, or snapshotted while replication is in progress.
- If you use a regional disk as a secondary disk and a zonal outage occurs in one of the secondary disk's zones, replication from the primary disk to the secondary disk fails.
Before you begin
- If you need to align replication across multiple disks, create a consistency group.
- Create a primary disk.
- Create a secondary disk.
- If you haven't already, set up authentication.
Authentication verifies your identity for access to Google Cloud services and APIs. To run
code or samples from a local development environment, you can authenticate to
Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
- Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

  gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
- Set a default region and zone.
Terraform
To use the Terraform samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.
Install the Google Cloud CLI.
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
If you're using a local shell, then create local authentication credentials for your user account:
gcloud auth application-default login
You don't need to do this if you're using Cloud Shell.
If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.
For more information, see Set up authentication for a local development environment.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI.
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Start replication
Start replication using the Google Cloud console, Google Cloud CLI, REST, or Terraform.
Console
In the Google Cloud console, go to the Asynchronous replication page.
Click the name of the secondary disk that you want to start replication to.
Click Start replication. The Start replication window opens.
Click Start replication.
gcloud
Start replication using the gcloud compute disks start-async-replication command:

gcloud compute disks start-async-replication PRIMARY_DISK_NAME \
    --PRIMARY_LOCATION_FLAG=PRIMARY_LOCATION \
    --secondary-disk=SECONDARY_DISK_NAME \
    --SECONDARY_LOCATION_FLAG=SECONDARY_LOCATION \
    --secondary-disk-project=SECONDARY_PROJECT
Replace the following:
- PRIMARY_DISK_NAME: the name of the primary disk.
- PRIMARY_LOCATION_FLAG: the location flag for the primary disk. For regional disks, use --region. For zonal disks, use --zone.
- PRIMARY_LOCATION: the primary disk's region or zone. For regional disks, use the region. For zonal disks, use the zone.
- SECONDARY_DISK_NAME: the name of the secondary disk.
- SECONDARY_LOCATION_FLAG: the location flag for the secondary disk. For regional disks, use --secondary-disk-region. For zonal disks, use --secondary-disk-zone.
- SECONDARY_LOCATION: the secondary disk's region or zone. For regional disks, use the region. For zonal disks, use the zone.
- SECONDARY_PROJECT: the project that contains the secondary disk.
Go
import (
	"context"
	"fmt"
	"io"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
)

// startReplication starts disk replication in a project for a given zone.
func startReplication(
	w io.Writer,
	projectID, zone, diskName, primaryDiskName, primaryZone string,
) error {
	// projectID := "your_project_id"
	// zone := "europe-west4-b"
	// diskName := "your_disk_name"
	// primaryDiskName := "your_disk_name2"
	// primaryZone := "europe-west2-b"
	ctx := context.Background()
	disksClient, err := compute.NewDisksRESTClient(ctx)
	if err != nil {
		return fmt.Errorf("NewDisksRESTClient: %w", err)
	}
	defer disksClient.Close()

	secondaryFullDiskName := fmt.Sprintf("projects/%s/zones/%s/disks/%s", projectID, zone, diskName)

	req := &computepb.StartAsyncReplicationDiskRequest{
		Project: projectID,
		Zone:    primaryZone,
		Disk:    primaryDiskName,
		DisksStartAsyncReplicationRequestResource: &computepb.DisksStartAsyncReplicationRequest{
			AsyncSecondaryDisk: &secondaryFullDiskName,
		},
	}

	op, err := disksClient.StartAsyncReplication(ctx, req)
	if err != nil {
		return fmt.Errorf("unable to start replication: %w", err)
	}

	if err = op.Wait(ctx); err != nil {
		return fmt.Errorf("unable to wait for the operation: %w", err)
	}

	fmt.Fprintf(w, "Replication started\n")

	return nil
}
Java
import com.google.cloud.compute.v1.DisksClient;
import com.google.cloud.compute.v1.DisksStartAsyncReplicationRequest;
import com.google.cloud.compute.v1.Operation;
import com.google.cloud.compute.v1.Operation.Status;
import com.google.cloud.compute.v1.StartAsyncReplicationDiskRequest;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class StartZonalDiskReplication {

  public static void main(String[] args)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // The project that contains the primary disk.
    String projectId = "YOUR_PROJECT_ID";
    // Name of the primary disk.
    String primaryDiskName = "PRIMARY_DISK_NAME";
    // Name of the secondary disk.
    String secondaryDiskName = "SECONDARY_DISK_NAME";
    // Name of the zone in which your primary disk is located.
    // Learn more about zones and regions:
    // https://cloud.google.com/compute/docs/disks/async-pd/about#supported_region_pairs
    String primaryDiskLocation = "us-central1-a";
    // Name of the zone in which your secondary disk is located.
    String secondaryDiskLocation = "us-east1-b";

    startZonalDiskAsyncReplication(projectId, primaryDiskName, primaryDiskLocation,
        secondaryDiskName, secondaryDiskLocation);
  }

  // Starts asynchronous replication for the specified zonal disk.
  public static Status startZonalDiskAsyncReplication(String projectId, String primaryDiskName,
      String primaryDiskLocation, String secondaryDiskName, String secondaryDiskLocation)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    String secondaryDiskPath = String.format("projects/%s/zones/%s/disks/%s",
        projectId, secondaryDiskLocation, secondaryDiskName);

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (DisksClient disksClient = DisksClient.create()) {
      DisksStartAsyncReplicationRequest diskRequest =
          DisksStartAsyncReplicationRequest.newBuilder()
              .setAsyncSecondaryDisk(secondaryDiskPath)
              .build();

      StartAsyncReplicationDiskRequest request =
          StartAsyncReplicationDiskRequest.newBuilder()
              .setDisk(primaryDiskName)
              .setDisksStartAsyncReplicationRequestResource(diskRequest)
              .setProject(projectId)
              .setZone(primaryDiskLocation)
              .build();

      Operation response =
          disksClient.startAsyncReplicationAsync(request).get(1, TimeUnit.MINUTES);

      if (response.hasError()) {
        throw new Error("Error starting replication! " + response.getError());
      }
      return response.getStatus();
    }
  }
}
Node.js
// Import the Compute library
const computeLib = require('@google-cloud/compute');
const compute = computeLib.protos.google.cloud.compute.v1;

// Instantiate a disksClient
const disksClient = new computeLib.DisksClient();
// Instantiate a zoneOperationsClient
const zoneOperationsClient = new computeLib.ZoneOperationsClient();

/**
 * TODO(developer): Update/uncomment these variables before running the sample.
 */
// The project of the secondary disk.
const secondaryProjectId = await disksClient.getProjectId();
// The zone of the secondary disk.
// secondaryLocation = 'us-central1-a';
// The name of the secondary disk.
// secondaryDiskName = 'secondary-disk-name';
// The project of the primary disk.
const primaryProjectId = await disksClient.getProjectId();
// The zone of the primary disk.
// primaryLocation = 'us-central1-a';
// The name of the primary disk.
// primaryDiskName = 'primary-disk-name';

// Start replication
async function callStartReplication() {
  const [response] = await disksClient.startAsyncReplication({
    // The request is made against the primary disk.
    project: primaryProjectId,
    zone: primaryLocation,
    disk: primaryDiskName,
    disksStartAsyncReplicationRequestResource:
      new compute.DisksStartAsyncReplicationRequest({
        asyncSecondaryDisk: `projects/${secondaryProjectId}/zones/${secondaryLocation}/disks/${secondaryDiskName}`,
      }),
  });

  let operation = response.latestResponse;

  // Wait for the operation to complete.
  while (operation.status !== 'DONE') {
    [operation] = await zoneOperationsClient.wait({
      operation: operation.name,
      project: primaryProjectId,
      zone: operation.zone.split('/').pop(),
    });
  }

  console.log(
    `Data replication from primary disk: ${primaryDiskName} to secondary disk: ${secondaryDiskName} started.`
  );
}

await callStartReplication();
Python
from __future__ import annotations

import sys
from typing import Any

from google.api_core.extended_operation import ExtendedOperation
from google.cloud import compute_v1


def wait_for_extended_operation(
    operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
    """
    Waits for the extended (long-running) operation to complete.

    If the operation is successful, it will return its result.
    If the operation ends with an error, an exception will be raised.
    If there were any warnings during the execution of the operation
    they will be printed to sys.stderr.

    Args:
        operation: a long-running operation you want to wait on.
        verbose_name: (optional) a more verbose name of the operation,
            used only during error and warning reporting.
        timeout: how long (in seconds) to wait for operation to finish.
            If None, wait indefinitely.

    Returns:
        Whatever the operation.result() returns.

    Raises:
        This method will raise the exception received from `operation.exception()`
        or RuntimeError if there is no exception set, but there is an `error_code`
        set for the `operation`.

        In case of an operation taking longer than `timeout` seconds to complete,
        a `concurrent.futures.TimeoutError` will be raised.
    """
    result = operation.result(timeout=timeout)

    if operation.error_code:
        print(
            f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
            file=sys.stderr,
            flush=True,
        )
        print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
        raise operation.exception() or RuntimeError(operation.error_message)

    if operation.warnings:
        print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
        for warning in operation.warnings:
            print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)

    return result


def start_disk_replication(
    project_id: str,
    primary_disk_location: str,
    primary_disk_name: str,
    secondary_disk_location: str,
    secondary_disk_name: str,
) -> bool:
    """Starts the asynchronous replication of a primary disk to a secondary disk.

    Args:
        project_id (str): The ID of the Google Cloud project.
        primary_disk_location (str): The location of the primary disk, either a zone or a region.
        primary_disk_name (str): The name of the primary disk.
        secondary_disk_location (str): The location of the secondary disk, either a zone or a region.
        secondary_disk_name (str): The name of the secondary disk.

    Returns:
        bool: True if the replication was successfully started.
    """
    # Check if the primary disk location is a region or a zone.
    if primary_disk_location[-1].isdigit():
        region_client = compute_v1.RegionDisksClient()
        request_resource = compute_v1.RegionDisksStartAsyncReplicationRequest(
            async_secondary_disk=f"projects/{project_id}/regions/{secondary_disk_location}/disks/{secondary_disk_name}"
        )
        operation = region_client.start_async_replication(
            project=project_id,
            region=primary_disk_location,
            disk=primary_disk_name,
            region_disks_start_async_replication_request_resource=request_resource,
        )
    else:
        client = compute_v1.DisksClient()
        request_resource = compute_v1.DisksStartAsyncReplicationRequest(
            async_secondary_disk=f"zones/{secondary_disk_location}/disks/{secondary_disk_name}"
        )
        operation = client.start_async_replication(
            project=project_id,
            zone=primary_disk_location,
            disk=primary_disk_name,
            disks_start_async_replication_request_resource=request_resource,
        )
    wait_for_extended_operation(operation, verbose_name="replication operation")
    print(f"Replication for disk {primary_disk_name} started.")
    return True
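As a usage sketch, you might call the function above as follows. The project, zones, and disk names here are hypothetical placeholders, not values from this page:

# Hypothetical example invocation of the sample above; replace the placeholder
# project, zone, and disk names with your own values.
start_disk_replication(
    project_id="my-project",
    primary_disk_location="us-central1-a",
    primary_disk_name="primary-disk-1",
    secondary_disk_location="us-east1-b",
    secondary_disk_name="secondary-disk-1",
)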
REST
Start replication using one of the following methods:
Start replication for zonal disks using the disks.startAsyncReplication method:

POST https://compute.googleapis.com/compute/v1/projects/PRIMARY_DISK_PROJECT/zones/PRIMARY_LOCATION/disks/PRIMARY_DISK_NAME/startAsyncReplication

{
  "asyncSecondaryDisk": "projects/SECONDARY_DISK_PROJECT/SECONDARY_LOCATION_PARAMETER/SECONDARY_LOCATION/disks/SECONDARY_DISK_NAME"
}

Start replication for regional disks using the regionDisks.startAsyncReplication method:

POST https://compute.googleapis.com/compute/v1/projects/PRIMARY_DISK_PROJECT/regions/PRIMARY_LOCATION/disks/PRIMARY_DISK_NAME/startAsyncReplication

{
  "asyncSecondaryDisk": "projects/SECONDARY_DISK_PROJECT/SECONDARY_LOCATION_PARAMETER/SECONDARY_LOCATION/disks/SECONDARY_DISK_NAME"
}
Replace the following:
- PRIMARY_DISK_PROJECT: the project that contains the primary disk.
- PRIMARY_LOCATION: the primary disk's region or zone. For regional disks, use the region. For zonal disks, use the zone.
- PRIMARY_DISK_NAME: the name of the primary disk.
- SECONDARY_DISK_PROJECT: the project that contains the secondary disk.
- SECONDARY_LOCATION_PARAMETER: the location parameter for the secondary disk. For regional disks, use regions. For zonal disks, use zones.
- SECONDARY_LOCATION: the secondary disk's region or zone. For regional disks, use the region. For zonal disks, use the zone.
- SECONDARY_DISK_NAME: the name of the secondary disk.
Terraform
To start replication between primary and secondary disks, use the compute_disk_async_replication resource.

resource "google_compute_disk_async_replication" "default" {
  primary_disk = google_compute_disk.primary_disk.id
  secondary_disk {
    disk = google_compute_disk.secondary_disk.id
  }
}
To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.
Stop replication
You can stop replication for a single primary or secondary disk, or for all disks in a consistency group. If you stop replication for a single disk in a consistency group, the replication time for that disk becomes out of sync with the other disks in the consistency group.
You typically stop replication as part of failover and failback. After you stop replication, you can't restart it to the same secondary disk. To replicate the primary disk again, you must create a new secondary disk and start replication to that disk.
When you stop replication on a disk, the disk's replication state changes to STOPPED. The replication state of the other disk in the replication pair (the corresponding primary or secondary disk) updates to STOPPED at a later time. If you want to avoid this time gap and update the other disk's replication state to STOPPED immediately, you must manually stop replication on the other disk as well. Stopping replication on both disks doesn't change the time at which replication stops; it only affects the disks' replication states.
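The following Python sketch illustrates that last point for a zonal disk pair, using the same compute_v1 client as the Python samples later in this section. The project, zones, and disk names are hypothetical placeholders, not values from this page:

from google.cloud import compute_v1

# Minimal sketch: stop replication on both disks of a replication pair so that
# both disks report the STOPPED state immediately. All names are hypothetical
# placeholders; replace them with your own values.
client = compute_v1.DisksClient()

# Stop replication on the zonal primary disk and wait for the operation.
client.stop_async_replication(
    project="my-project", zone="us-central1-a", disk="primary-disk-1"
).result()

# Also stop replication on the zonal secondary disk so that its replication
# state updates to STOPPED right away instead of at a later time.
client.stop_async_replication(
    project="my-project", zone="us-east1-b", disk="secondary-disk-1"
).result()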
Stop replication for a single disk
Stop replication for a single disk using the Google Cloud console, the Google Cloud CLI, or REST.
Console
Stop replication by doing the following:
In the Google Cloud console, go to the Asynchronous replication page.
Click the name of the primary or secondary disk for which you want to stop replication. The Manage disk page opens.
Click Terminate replication. The Terminate replication window opens.
Click Terminate replication.
gcloud
Stop replication using the gcloud compute disks stop-async-replication command:

gcloud compute disks stop-async-replication DISK_NAME \
    --LOCATION_FLAG=LOCATION
Replace the following:
- DISK_NAME: the name of the disk.
- LOCATION_FLAG: the location flag for the disk. For a regional disk, use --region. For a zonal disk, use --zone.
- LOCATION: the disk's region or zone. For regional disks, use the region. For zonal disks, use the zone.
Go
import (
	"context"
	"fmt"
	"io"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
)

// stopReplication stops primary disk replication in a project for a given zone.
func stopReplication(
	w io.Writer,
	projectID, primaryDiskName, primaryZone string,
) error {
	// projectID := "your_project_id"
	// primaryDiskName := "your_disk_name2"
	// primaryZone := "europe-west2-b"
	ctx := context.Background()
	disksClient, err := compute.NewDisksRESTClient(ctx)
	if err != nil {
		return fmt.Errorf("NewDisksRESTClient: %w", err)
	}
	defer disksClient.Close()

	req := &computepb.StopAsyncReplicationDiskRequest{
		Project: projectID,
		Zone:    primaryZone,
		Disk:    primaryDiskName,
	}

	op, err := disksClient.StopAsyncReplication(ctx, req)
	if err != nil {
		return fmt.Errorf("unable to stop replication: %w", err)
	}

	if err = op.Wait(ctx); err != nil {
		return fmt.Errorf("unable to wait for the operation: %w", err)
	}

	fmt.Fprintf(w, "Replication stopped\n")

	return nil
}
Java
import com.google.cloud.compute.v1.DisksClient;
import com.google.cloud.compute.v1.Operation;
import com.google.cloud.compute.v1.Operation.Status;
import com.google.cloud.compute.v1.StopAsyncReplicationDiskRequest;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class StopZonalDiskReplication {

  public static void main(String[] args)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // The project that contains the secondary disk.
    String projectId = "YOUR_PROJECT_ID";
    // Name of the region or zone in which your secondary disk is located.
    String secondaryDiskLocation = "us-east1-b";
    // Name of the secondary disk.
    String secondaryDiskName = "SECONDARY_DISK_NAME";

    stopZonalDiskAsyncReplication(projectId, secondaryDiskLocation, secondaryDiskName);
  }

  // Stops asynchronous replication for the specified disk.
  public static Status stopZonalDiskAsyncReplication(
      String project, String secondaryDiskLocation, String secondaryDiskName)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (DisksClient disksClient = DisksClient.create()) {
      StopAsyncReplicationDiskRequest stopReplicationDiskRequest =
          StopAsyncReplicationDiskRequest.newBuilder()
              .setProject(project)
              .setDisk(secondaryDiskName)
              .setZone(secondaryDiskLocation)
              .build();

      Operation response = disksClient.stopAsyncReplicationAsync(stopReplicationDiskRequest)
          .get(1, TimeUnit.MINUTES);

      if (response.hasError()) {
        throw new Error("Error stopping replication! " + response.getError());
      }
      return response.getStatus();
    }
  }
}
Node.js
// Import the Compute library
const computeLib = require('@google-cloud/compute');

// Instantiate a disksClient
const disksClient = new computeLib.DisksClient();
// Instantiate a zoneOperationsClient
const zoneOperationsClient = new computeLib.ZoneOperationsClient();

/**
 * TODO(developer): Update/uncomment these variables before running the sample.
 */
// The project that contains the primary disk.
const primaryProjectId = await disksClient.getProjectId();
// The zone of the primary disk.
// primaryLocation = 'us-central1-a';
// The name of the primary disk.
// primaryDiskName = 'primary-disk-name';

// Stop replication
async function callStopReplication() {
  const [response] = await disksClient.stopAsyncReplication({
    project: primaryProjectId,
    zone: primaryLocation,
    disk: primaryDiskName,
  });

  let operation = response.latestResponse;

  // Wait for the operation to complete.
  while (operation.status !== 'DONE') {
    [operation] = await zoneOperationsClient.wait({
      operation: operation.name,
      project: primaryProjectId,
      zone: operation.zone.split('/').pop(),
    });
  }

  console.log(`Replication for primary disk: ${primaryDiskName} stopped.`);
}

await callStopReplication();
Python
from __future__ import annotations

import sys
from typing import Any

from google.api_core.extended_operation import ExtendedOperation
from google.cloud import compute_v1


def wait_for_extended_operation(
    operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
    """
    Waits for the extended (long-running) operation to complete.

    If the operation is successful, it will return its result.
    If the operation ends with an error, an exception will be raised.
    If there were any warnings during the execution of the operation
    they will be printed to sys.stderr.

    Args:
        operation: a long-running operation you want to wait on.
        verbose_name: (optional) a more verbose name of the operation,
            used only during error and warning reporting.
        timeout: how long (in seconds) to wait for operation to finish.
            If None, wait indefinitely.

    Returns:
        Whatever the operation.result() returns.

    Raises:
        This method will raise the exception received from `operation.exception()`
        or RuntimeError if there is no exception set, but there is an `error_code`
        set for the `operation`.

        In case of an operation taking longer than `timeout` seconds to complete,
        a `concurrent.futures.TimeoutError` will be raised.
    """
    result = operation.result(timeout=timeout)

    if operation.error_code:
        print(
            f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
            file=sys.stderr,
            flush=True,
        )
        print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
        raise operation.exception() or RuntimeError(operation.error_message)

    if operation.warnings:
        print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
        for warning in operation.warnings:
            print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)

    return result


def stop_disk_replication(
    project_id: str, primary_disk_location: str, primary_disk_name: str
) -> bool:
    """
    Stops the asynchronous replication of a disk.

    Args:
        project_id (str): The ID of the Google Cloud project.
        primary_disk_location (str): The location of the primary disk, either a zone or a region.
        primary_disk_name (str): The name of the primary disk.

    Returns:
        bool: True if the replication was successfully stopped.
    """
    # Check if the primary disk is in a region or a zone.
    if primary_disk_location[-1].isdigit():
        region_client = compute_v1.RegionDisksClient()
        operation = region_client.stop_async_replication(
            project=project_id, region=primary_disk_location, disk=primary_disk_name
        )
    else:
        zone_client = compute_v1.DisksClient()
        operation = zone_client.stop_async_replication(
            project=project_id, zone=primary_disk_location, disk=primary_disk_name
        )
    wait_for_extended_operation(operation, verbose_name="replication operation")
    print(f"Replication for disk {primary_disk_name} stopped.")
    return True
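As a usage sketch, you might call the function above as follows. The project, zone, and disk name are hypothetical placeholders:

# Hypothetical example invocation of the sample above; replace the placeholder
# values with your own project, location, and disk name.
stop_disk_replication(
    project_id="my-project",
    primary_disk_location="us-central1-a",
    primary_disk_name="primary-disk-1",
)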
REST
Stop replication using one of the following methods:
Stop replication for zonal disks using the disks.stopAsyncReplication method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT/zones/LOCATION/disks/DISK_NAME/stopAsyncReplication

{ }

Stop replication for regional disks using the regionDisks.stopAsyncReplication method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT/regions/LOCATION/disks/DISK_NAME/stopAsyncReplication

{ }
Replace the following:
- PROJECT: the project that contains the disk.
- DISK_NAME: the name of the disk.
- LOCATION: the zone or region of the disk. For zonal disks, use the zone. For regional disks, use the region.
Terraform
To stop the replication on primary and secondary disks, remove the compute_disk_async_replication resource.
Stop replication for a consistency group
Stop replication for all disks in a consistency group using the Google Cloud console, the Google Cloud CLI, or REST.
Console
Stop replication for all disks in a consistency group by doing the following:
In the Google Cloud console, go to the Asynchronous replication page.
Click the Consistency groups tab.
Click the name of the consistency group for which you want to stop replication. The Manage consistency group page opens.
Click Terminate replication. The Terminate replication window opens.
Click Terminate replication.
gcloud
Stop replication for all disks in a consistency group using the gcloud compute disks stop-group-async-replication command:

gcloud compute disks stop-group-async-replication CONSISTENCY_GROUP \
    --LOCATION_FLAG=LOCATION
Replace the following:
- CONSISTENCY_GROUP: the URL of the consistency group. For example, projects/PROJECT/regions/REGION/resourcePolicies/CONSISTENCY_GROUP_NAME.
- LOCATION_FLAG: the location flag for the disks in the consistency group. For regional disks, use --region. For zonal disks, use --zone.
- LOCATION: the region or zone of the disks in the consistency group. For regional disks, use the region. For zonal disks, use the zone.
Go
import (
	"context"
	"fmt"
	"io"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
	"google.golang.org/protobuf/proto"
)

// stopReplicationConsistencyGroup stops replication for a consistency group for a project in a given region.
func stopReplicationConsistencyGroup(w io.Writer, projectID, region, groupName string) error {
	// projectID := "your_project_id"
	// region := "europe-west4"
	// groupName := "your_group_name"
	ctx := context.Background()
	disksClient, err := compute.NewRegionDisksRESTClient(ctx)
	if err != nil {
		return fmt.Errorf("NewRegionDisksRESTClient: %w", err)
	}
	defer disksClient.Close()

	consistencyGroupUrl := fmt.Sprintf("projects/%s/regions/%s/resourcePolicies/%s", projectID, region, groupName)

	req := &computepb.StopGroupAsyncReplicationRegionDiskRequest{
		Project: projectID,
		DisksStopGroupAsyncReplicationResourceResource: &computepb.DisksStopGroupAsyncReplicationResource{
			ResourcePolicy: proto.String(consistencyGroupUrl),
		},
		Region: region,
	}

	op, err := disksClient.StopGroupAsyncReplication(ctx, req)
	if err != nil {
		return fmt.Errorf("unable to stop replication: %w", err)
	}

	if err = op.Wait(ctx); err != nil {
		return fmt.Errorf("unable to wait for the operation: %w", err)
	}

	fmt.Fprintf(w, "Group stopped replicating\n")

	return nil
}
Java
import com.google.cloud.compute.v1.DisksClient;
import com.google.cloud.compute.v1.DisksStopGroupAsyncReplicationResource;
import com.google.cloud.compute.v1.Operation;
import com.google.cloud.compute.v1.Operation.Status;
import com.google.cloud.compute.v1.StopGroupAsyncReplicationDiskRequest;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class StopZonalDiskReplicationConsistencyGroup {

  public static void main(String[] args)
      throws IOException, InterruptedException, ExecutionException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Cloud project that contains the disk.
    String project = "YOUR_PROJECT_ID";
    // Zone of the disk.
    String zone = "us-central1-a";
    // Name of the consistency group.
    String consistencyGroupName = "CONSISTENCY_GROUP";

    stopZonalDiskReplicationConsistencyGroup(project, zone, consistencyGroupName);
  }

  // Stops replication of a consistency group for a project in a given zone.
  public static Status stopZonalDiskReplicationConsistencyGroup(
      String project, String zone, String consistencyGroupName)
      throws IOException, InterruptedException, ExecutionException, TimeoutException {
    String region = zone.substring(0, zone.lastIndexOf('-'));
    String resourcePolicy = String.format("projects/%s/regions/%s/resourcePolicies/%s",
        project, region, consistencyGroupName);

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (DisksClient disksClient = DisksClient.create()) {
      StopGroupAsyncReplicationDiskRequest request =
          StopGroupAsyncReplicationDiskRequest.newBuilder()
              .setProject(project)
              .setZone(zone)
              .setDisksStopGroupAsyncReplicationResourceResource(
                  DisksStopGroupAsyncReplicationResource.newBuilder()
                      .setResourcePolicy(resourcePolicy).build())
              .build();

      Operation response = disksClient.stopGroupAsyncReplicationAsync(request)
          .get(3, TimeUnit.MINUTES);

      if (response.hasError()) {
        throw new Error("Error stopping disk replication! " + response.getError());
      }
      return response.getStatus();
    }
  }
}
Node.js
// Import the Compute library
const computeLib = require('@google-cloud/compute');
const compute = computeLib.protos.google.cloud.compute.v1;

// If disks are regional, use RegionDisksClient and RegionOperationsClient.
// TODO(developer): Uncomment disksClient and zoneOperationsClient before running the sample.
// Instantiate a disksClient
// disksClient = new computeLib.DisksClient();
// Instantiate a zoneOperationsClient
// zoneOperationsClient = new computeLib.ZoneOperationsClient();

/**
 * TODO(developer): Update/uncomment these variables before running the sample.
 */
// The project that contains the consistency group.
const projectId = await disksClient.getProjectId();
// If you use RegionDisksClient, define a region; if DisksClient, define a zone.
// The zone or region of the disks.
const disksLocation = 'europe-central2-a';
// The name of the consistency group.
const consistencyGroupName = 'consistency-group-1';
// The region of the consistency group.
const consistencyGroupLocation = 'europe-central2';

async function callStopReplication() {
  const [response] = await disksClient.stopGroupAsyncReplication({
    project: projectId,
    // If you use RegionDisksClient, pass region as an argument instead of zone.
    zone: disksLocation,
    disksStopGroupAsyncReplicationResourceResource:
      new compute.DisksStopGroupAsyncReplicationResource({
        resourcePolicy: [
          `https://www.googleapis.com/compute/v1/projects/${projectId}/regions/${consistencyGroupLocation}/resourcePolicies/${consistencyGroupName}`,
        ],
      }),
  });

  let operation = response.latestResponse;

  // Wait for the operation to complete.
  while (operation.status !== 'DONE') {
    [operation] = await zoneOperationsClient.wait({
      operation: operation.name,
      project: projectId,
      // If you use RegionDisksClient, pass region as an argument instead of zone.
      zone: operation.zone.split('/').pop(),
    });
  }

  const message = `Replication stopped for consistency group: ${consistencyGroupName}.`;
  console.log(message);
  return message;
}

return await callStopReplication();
Python
from __future__ import annotations

import sys
from typing import Any

from google.api_core.extended_operation import ExtendedOperation
from google.cloud import compute_v1


def wait_for_extended_operation(
    operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
    """
    Waits for the extended (long-running) operation to complete.

    If the operation is successful, it will return its result.
    If the operation ends with an error, an exception will be raised.
    If there were any warnings during the execution of the operation
    they will be printed to sys.stderr.

    Args:
        operation: a long-running operation you want to wait on.
        verbose_name: (optional) a more verbose name of the operation,
            used only during error and warning reporting.
        timeout: how long (in seconds) to wait for operation to finish.
            If None, wait indefinitely.

    Returns:
        Whatever the operation.result() returns.

    Raises:
        This method will raise the exception received from `operation.exception()`
        or RuntimeError if there is no exception set, but there is an `error_code`
        set for the `operation`.

        In case of an operation taking longer than `timeout` seconds to complete,
        a `concurrent.futures.TimeoutError` will be raised.
    """
    result = operation.result(timeout=timeout)

    if operation.error_code:
        print(
            f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
            file=sys.stderr,
            flush=True,
        )
        print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
        raise operation.exception() or RuntimeError(operation.error_message)

    if operation.warnings:
        print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
        for warning in operation.warnings:
            print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)

    return result


def stop_replication_consistency_group(project_id, location, consistency_group_name):
    """
    Stops the asynchronous replication for a consistency group.

    Args:
        project_id (str): The ID of the Google Cloud project.
        location (str): The region where the consistency group is located.
        consistency_group_name (str): The name of the consistency group.

    Returns:
        bool: True if the replication was successfully stopped.
    """
    consistency_group = compute_v1.DisksStopGroupAsyncReplicationResource(
        resource_policy=f"regions/{location}/resourcePolicies/{consistency_group_name}"
    )
    region_client = compute_v1.RegionDisksClient()
    operation = region_client.stop_group_async_replication(
        project=project_id,
        region=location,
        disks_stop_group_async_replication_resource_resource=consistency_group,
    )
    wait_for_extended_operation(operation, "Stopping replication for consistency group")

    return True
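As a usage sketch, you might call the function above as follows. The project, region, and consistency group name are hypothetical placeholders:

# Hypothetical example invocation of the sample above; replace the placeholder
# values with your own project, region, and consistency group name.
stop_replication_consistency_group(
    project_id="my-project",
    location="us-central1",
    consistency_group_name="my-consistency-group",
)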
REST
Stop replication for all disks in a consistency group using one of the following methods:
Stop replication for zonal disks using the disks.stopGroupAsyncReplication method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT/zones/LOCATION/disks/stopGroupAsyncReplication

{
  "resourcePolicy": "CONSISTENCY_GROUP"
}

Stop replication for regional disks using the regionDisks.stopGroupAsyncReplication method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT/regions/LOCATION/disks/stopGroupAsyncReplication

{
  "resourcePolicy": "CONSISTENCY_GROUP"
}
Replace the following:
- PROJECT: the project that contains the disks.
- LOCATION: the zone or region of the disks. For zonal disks, use the zone. For regional disks, use the region.
- CONSISTENCY_GROUP: the URL of the consistency group. For example, projects/PROJECT/regions/REGION/resourcePolicies/CONSISTENCY_GROUP_NAME.
What's next
- Learn how to failover and failback.
- Learn how to monitor Asynchronous Replication performance.