Compose objects
This page shows you how to compose Cloud Storage objects into a single object. A compose request takes between 1 and 32 objects and creates a new composite object. The composite object is a concatenation of the source objects in the order they were specified in the request.
Note the following when composing objects:
- The source objects are unaffected by the composition process. If they are meant to be temporary, you must delete them once you've successfully completed the composition.
- Because other storage classes are subject to early deletion fees, you should always use Standard storage for temporary objects.
- Frequent deletions associated with compose can increase your storage bill significantly if your bucket has data protection features enabled. Consider disabling soft delete on buckets with high rates of compose operations. With Object Versioning, specify the object versions when deleting source objects to permanently delete them and prevent them from becoming noncurrent objects, as shown in the sketch following this list.
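To illustrate the last point, here is a minimal sketch that uses the Python client library to compose two objects and then permanently delete the versioned source objects by specifying their generations. The bucket and object names are hypothetical placeholders, and the sketch assumes Object Versioning is enabled on the bucket:

from google.cloud import storage

# Hypothetical placeholders used only for illustration.
BUCKET_NAME = "your-bucket-name"
SOURCE_NAMES = ["source-object-1", "source-object-2"]
COMPOSITE_NAME = "composite-object"

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# Compose the source objects into a single composite object.
destination = bucket.blob(COMPOSITE_NAME)
destination.compose([bucket.blob(name) for name in SOURCE_NAMES])

# Delete each source object by its specific generation so the deletion is
# permanent and the source doesn't linger as a noncurrent version.
for name in SOURCE_NAMES:
    blob = bucket.get_blob(name)  # fetches current metadata, including the generation
    if blob is not None:
        bucket.delete_blob(name, generation=blob.generation)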
Required roles
To get the permissions that you need to compose objects, ask your
administrator to grant you the Storage Object User (roles/storage.objectUser)
IAM role on the bucket. This predefined role contains the following permissions, which are
required to compose objects:
- storage.objects.create
- storage.objects.delete - This permission is only required if you want to give the object you compose the same name as an object that already exists in the bucket.
- storage.objects.get
- storage.objects.list - This permission is only required if you want to use wildcards to compose objects with a common prefix without having to list each object separately in your Google Cloud CLI command.
If you want to set a retention configuration for the object
you compose, you'll also need the storage.objects.setRetention permission. To
get this permission, ask your administrator to grant you the Storage Object
Admin (roles/storage.objectAdmin) role instead of the Storage Object User
(roles/storage.objectUser) role.
You can also get these permissions with other predefined roles or custom roles.
For information about granting roles on buckets, see Set and manage IAM policies on buckets.
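If you manage bucket access programmatically, the following is a minimal, hedged sketch of granting the Storage Object User role on a bucket with the Python client library; the bucket name and principal are hypothetical placeholders, and granting roles through the Google Cloud console or gcloud CLI works equally well:

from google.cloud import storage

# Hypothetical placeholders used only for illustration.
BUCKET_NAME = "your-bucket-name"
MEMBER = "user:example-user@example.com"

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# Read the current policy (version 3 supports bindings with IAM Conditions),
# append a binding for the role, and write the policy back.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectUser", "members": {MEMBER}}
)
bucket.set_iam_policy(policy)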
Create a composite object
Command line
Use the gcloud storage objects compose command:
gcloud storage objects compose gs://BUCKET_NAME/SOURCE_OBJECT_1 gs://BUCKET_NAME/SOURCE_OBJECT_2 gs://BUCKET_NAME/COMPOSITE_OBJECT_NAME
Where:
- BUCKET_NAME is the name of the bucket that contains the source objects.
- SOURCE_OBJECT_1 and SOURCE_OBJECT_2 are the names of the source objects to use in the object composition.
- COMPOSITE_OBJECT_NAME is the name you are giving to the result of the object composition.
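For example, a hypothetical invocation that combines two objects named log-shard-1 and log-shard-2 in the bucket my-bucket might look like the following (all names are placeholders):

gcloud storage objects compose gs://my-bucket/log-shard-1 gs://my-bucket/log-shard-2 gs://my-bucket/combined-log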
Client libraries
For more information, see the Cloud Storage C++, C#, Go, Java, Node.js, PHP, Python, and Ruby API reference documentation.
To authenticate to Cloud Storage, set up Application Default Credentials.
For more information, see Set up authentication for client libraries.
C++
namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& destination_object_name,
   std::vector<gcs::ComposeSourceObject> const& compose_objects) {
  StatusOr<gcs::ObjectMetadata> composed_object = client.ComposeObject(
      bucket_name, compose_objects, destination_object_name);
  if (!composed_object) throw std::move(composed_object).status();
  std::cout << "Composed new object " << composed_object->name()
            << " in bucket " << composed_object->bucket()
            << "\nFull metadata: " << *composed_object << "\n";
}
C#
using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;
using System.Collections.Generic;

public class ComposeObjectSample
{
    public void ComposeObject(
        string bucketName = "your-bucket-name",
        string firstObjectName = "your-first-object-name",
        string secondObjectName = "your-second-object-name",
        string targetObjectName = "new-composite-object-name")
    {
        var storage = StorageClient.Create();
        var sourceObjects = new List<ComposeRequest.SourceObjectsData>
        {
            new ComposeRequest.SourceObjectsData { Name = firstObjectName },
            new ComposeRequest.SourceObjectsData { Name = secondObjectName }
        };
        // You could add as many sourceObjects as you want here, up to the max of 32.
        storage.Service.Objects.Compose(new ComposeRequest
        {
            SourceObjects = sourceObjects,
            Destination = new Google.Apis.Storage.v1.Data.Object { ContentType = "text/plain" }
        }, bucketName, targetObjectName).Execute();
        Console.WriteLine($"New composite file {targetObjectName} was created in bucket {bucketName}" +
            $" by combining {firstObjectName} and {secondObjectName}.");
    }
}
Go
import (
	"context"
	"fmt"
	"io"
	"time"

	"cloud.google.com/go/storage"
)

// composeFile composes source objects to create a composite object.
func composeFile(w io.Writer, bucket, object1, object2, toObject string) error {
	// bucket := "bucket-name"
	// object1 := "object-name-1"
	// object2 := "object-name-2"
	// toObject := "object-name-3"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*10)
	defer cancel()

	src1 := client.Bucket(bucket).Object(object1)
	src2 := client.Bucket(bucket).Object(object2)
	dst := client.Bucket(bucket).Object(toObject)

	// ComposerFrom takes varargs, so you can put as many objects here
	// as you want.
	_, err = dst.ComposerFrom(src1, src2).Run(ctx)
	if err != nil {
		return fmt.Errorf("ComposerFrom: %w", err)
	}
	fmt.Fprintf(w, "New composite object %v was created by combining %v and %v\n", toObject, object1, object2)
	return nil
}
Java
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class ComposeObject {
  public static void composeObject(
      String bucketName,
      String firstObjectName,
      String secondObjectName,
      String targetObjectName,
      String projectId) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    // The ID of the first GCS object to compose
    // String firstObjectName = "your-first-object-name";

    // The ID of the second GCS object to compose
    // String secondObjectName = "your-second-object-name";

    // The ID to give the new composite object
    // String targetObjectName = "new-composite-object-name";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();

    // Optional: set a generation-match precondition to avoid potential race
    // conditions and data corruptions. The request returns a 412 error if the
    // preconditions are not met.
    Storage.BlobTargetOption precondition;
    if (storage.get(bucketName, targetObjectName) == null) {
      // For a target object that does not yet exist, set the DoesNotExist precondition.
      // This will cause the request to fail if the object is created before the request runs.
      precondition = Storage.BlobTargetOption.doesNotExist();
    } else {
      // If the destination already exists in your bucket, instead set a generation-match
      // precondition. This will cause the request to fail if the existing object's generation
      // changes before the request runs.
      precondition =
          Storage.BlobTargetOption.generationMatch(
              storage.get(bucketName, targetObjectName).getGeneration());
    }

    Storage.ComposeRequest composeRequest =
        Storage.ComposeRequest.newBuilder()
            // addSource takes varargs, so you can put as many objects here as you want, up to the
            // max of 32
            .addSource(firstObjectName, secondObjectName)
            .setTarget(BlobInfo.newBuilder(bucketName, targetObjectName).build())
            .setTargetOptions(precondition)
            .build();

    Blob compositeObject = storage.compose(composeRequest);

    System.out.println(
        "New composite object "
            + compositeObject.getName()
            + " was created by combining "
            + firstObjectName
            + " and "
            + secondObjectName);
  }
}
Node.js
/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of the first GCS file to compose
// const firstFileName = 'your-first-file-name';

// The ID of the second GCS file to compose
// const secondFileName = 'your-second-file-name';

// The ID to give the new composite file
// const destinationFileName = 'new-composite-file-name';

// The generation-match precondition for the destination object
// const destinationGenerationMatchPrecondition = 0;

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function composeFile() {
  const bucket = storage.bucket(bucketName);
  const sources = [firstFileName, secondFileName];

  // Optional:
  // Set a generation-match precondition to avoid potential race conditions
  // and data corruptions. The request to compose is aborted if the object's
  // generation number does not match your precondition. For a destination
  // object that does not yet exist, set the ifGenerationMatch precondition to 0.
  // If the destination object already exists in your bucket, set instead a
  // generation-match precondition using its generation number.
  const combineOptions = {
    ifGenerationMatch: destinationGenerationMatchPrecondition,
  };

  await bucket.combine(sources, destinationFileName, combineOptions);

  console.log(
    `New composite file ${destinationFileName} was created by combining ${firstFileName} and ${secondFileName}`
  );
}

composeFile().catch(console.error);
PHP
use Google\Cloud\Storage\StorageClient;
/**
* Compose two objects into a single target object.
*
* @param string $bucketName The name of your Cloud Storage bucket.
* (e.g. 'my-bucket')
* @param string $firstObjectName The name of the first GCS object to compose.
* (e.g. 'my-object-1')
* @param string $secondObjectName The name of the second GCS object to compose.
* (e.g. 'my-object-2')
* @param string $targetObjectName The name of the object to be created.
* (e.g. 'composed-my-object-1-my-object-2')
*/
function compose_file(string $bucketName, string $firstObjectName, string $secondObjectName, string $targetObjectName): void
{
$storage = new StorageClient();
$bucket = $storage->bucket($bucketName);
// In this example, we are composing only two objects, but Cloud Storage supports
// composition of up to 32 objects.
$objectsToCompose = [$firstObjectName, $secondObjectName];
$targetObject = $bucket->compose($objectsToCompose, $targetObjectName, [
'destination' => [
'contentType' => 'application/octet-stream'
]
]);
if ($targetObject->exists()) {
printf(
'New composite object %s was created by combining %s and %s',
$targetObject->name(),
$firstObjectName,
$secondObjectName
);
}
}
Python
from google.cloud import storage


def compose_file(bucket_name, first_blob_name, second_blob_name, destination_blob_name):
    """Concatenate source blobs into destination blob."""
    # bucket_name = "your-bucket-name"
    # first_blob_name = "first-object-name"
    # second_blob_name = "second-blob-name"
    # destination_blob_name = "destination-object-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    destination = bucket.blob(destination_blob_name)
    destination.content_type = "text/plain"

    # Note sources is a list of Blob instances, up to the max of 32 instances per request
    sources = [bucket.blob(first_blob_name), bucket.blob(second_blob_name)]

    # Optional: set a generation-match precondition to avoid potential race conditions
    # and data corruptions. The request to compose is aborted if the object's
    # generation number does not match your precondition. For a destination
    # object that does not yet exist, set the if_generation_match precondition to 0.
    # If the destination object already exists in your bucket, set instead a
    # generation-match precondition using its generation number.
    # There is also an `if_source_generation_match` parameter, which is not used in this example.
    destination_generation_match_precondition = 0

    destination.compose(sources, if_generation_match=destination_generation_match_precondition)
    print(
        "New composite object {} in the bucket {} was created by combining {} and {}".format(
            destination_blob_name, bucket_name, first_blob_name, second_blob_name
        )
    )
    return destination
Ruby
def compose_file bucket_name:, first_file_name:, second_file_name:, destination_file_name:
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  # The ID of the first GCS object to compose
  # first_file_name = "your-first-file-name"

  # The ID of the second GCS object to compose
  # second_file_name = "your-second-file-name"

  # The ID to give the new composite object
  # destination_file_name = "new-composite-file-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name, skip_lookup: true

  destination = bucket.compose [first_file_name, second_file_name], destination_file_name do |f|
    f.content_type = "text/plain"
  end

  puts "Composed new file #{destination.name} in the bucket #{bucket_name} " \
       "by combining #{first_file_name} and #{second_file_name}"
end
REST APIs
JSON API
Have gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Create a JSON file that contains the following information:

{
  "sourceObjects": [
    {
      "name": "SOURCE_OBJECT_1"
    },
    {
      "name": "SOURCE_OBJECT_2"
    }
  ],
  "destination": {
    "contentType": "COMPOSITE_OBJECT_CONTENT_TYPE"
  }
}
Where:
- SOURCE_OBJECT_1 and SOURCE_OBJECT_2 are the names of the source objects to use in the object composition.
- COMPOSITE_OBJECT_CONTENT_TYPE is the Content-Type of the resulting composite object.
Use cURL to call the JSON API with a POST Object request:

curl -X POST --data-binary @JSON_FILE_NAME \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/o/COMPOSITE_OBJECT_NAME/compose"
Where:
- JSON_FILE_NAME is the name of the file you created in the previous step.
- BUCKET_NAME is the name of the bucket that contains the source objects.
- COMPOSITE_OBJECT_NAME is the name you are giving to the result of the object composition.
If successful, the response is an object resource for the resulting composite object.
XML API
Have gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Create an XML file that contains the following information:

<ComposeRequest>
  <Component>
    <Name>SOURCE_OBJECT_1</Name>
  </Component>
  <Component>
    <Name>SOURCE_OBJECT_2</Name>
  </Component>
</ComposeRequest>
Where:
- SOURCE_OBJECT_1 and SOURCE_OBJECT_2 are the names of the source objects to use in the object composition.
Use cURL to call the XML API with a PUT Object request that includes the compose query string parameter:

curl -X PUT --data-binary @XML_FILE_NAME \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: COMPOSITE_OBJECT_CONTENT_TYPE" \
  "https://storage.googleapis.com/BUCKET_NAME/COMPOSITE_OBJECT_NAME?compose"
Where:
- XML_FILE_NAME is the name of the file you created in the previous step.
- COMPOSITE_OBJECT_CONTENT_TYPE is the Content-Type of the resulting composite object.
- BUCKET_NAME is the name of the bucket that contains the source objects.
- COMPOSITE_OBJECT_NAME is the name you are giving to the result of the object composition.
If successful, an empty response body is returned.
What's next
- Learn more about object composition.
- Learn how to use request preconditions to prevent race conditions.