Update dataset properties
This document describes how to update dataset properties in BigQuery. After you create a dataset, you can update the following dataset properties:
- Billing model
- Default expiration time for new tables
- Default partition expiration for new partitioned tables
- Default rounding mode for new tables
- Description
- Labels
- Time travel windows
Before you begin
Grant Identity and Access Management (IAM) roles that give users the necessary permissions to perform each task in this document.
Required permissions
To update dataset properties, you need the following IAM permissions:
- bigquery.datasets.update
- bigquery.datasets.setIamPolicy (only required when updating dataset access controls in the Google Cloud console)
The roles/bigquery.dataOwner predefined IAM role includes the permissions that you need to update dataset properties.
Additionally, if you have the bigquery.datasets.create permission, you can update properties of the datasets that you create.
For more information on IAM roles and permissions in BigQuery, see Predefined roles and permissions.
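For example, you can grant the roles/bigquery.dataOwner role on a project by using the Google Cloud CLI. This is a minimal sketch; PROJECT_ID and USER_EMAIL are placeholders:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/bigquery.dataOwner"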
Update dataset descriptions
You can update a dataset's description in the following ways:
- Using the Google Cloud console.
- Using the bq command-line tool's bq update command.
- Calling the datasets.patch API method.
- Using the client libraries.
To update a dataset's description:
Console
In the left pane, click Explorer.
If you don't see the left pane, click Expand left pane to open the pane.
In the Explorer pane, expand your project, click Datasets, and then click a dataset.
In the Details pane, click Edit details to edit the description text.
In the Edit detail dialog that appears, do the following:
- In the Description field, enter a description or edit the existing description.
- To save the new description text, click Save.
SQL
To update a dataset's description, use the ALTER SCHEMA SET OPTIONS statement to set the description option.
The following example sets the description on a dataset named mydataset:
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER SCHEMA mydataset
SET OPTIONS (
    description = 'Description of mydataset');
Click Run.
For more information about how to run queries, see Run an interactive query.
bq
Issue the bq update command with the --description flag. If you are updating a dataset in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.
bq update \
--description "string" \
project_id:dataset
Replace the following:
- string: the text that describes the dataset, in quotes
- project_id: your project ID
- dataset: the name of the dataset that you're updating
Examples:
Enter the following command to change the description of mydataset to "Description of mydataset." mydataset is in your default project.
bq update --description "Description of mydataset" mydataset
Enter the following command to change the description of mydataset to "Description of mydataset." The dataset is in myotherproject, not your default project.
bq update \
--description "Description of mydataset" \
myotherproject:mydataset
API
Call datasets.patch and update the description property in the dataset resource.
Because the datasets.update method replaces the entire dataset resource, the datasets.patch method is preferred.
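As an illustrative sketch of such a request, the following curl call patches only the description field; PROJECT_ID and DATASET_NAME are placeholders, and the description value is an example:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -X PATCH \
  https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/datasets/DATASET_NAME \
  -d '{"description": "Description of mydataset"}'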
Go
Before trying this sample, follow the Go setup instructions in the
BigQuery quickstart using
client libraries.
For more information, see the
BigQuery Go API
reference documentation.
To authenticate to BigQuery, set up Application Default Credentials.
For more information, see
Set up authentication for client libraries.
import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// updateDatasetDescription demonstrates how the Description metadata of a dataset can
// be read and modified.
func updateDatasetDescription(projectID, datasetID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	ds := client.Dataset(datasetID)
	meta, err := ds.Metadata(ctx)
	if err != nil {
		return err
	}
	update := bigquery.DatasetMetadataToUpdate{
		Description: "Updated Description.",
	}
	if _, err = ds.Update(ctx, update, meta.ETag); err != nil {
		return err
	}
	return nil
}
Java
Before trying this sample, follow the Java setup instructions in the
BigQuery quickstart using
client libraries.
For more information, see the
BigQuery Java API
reference documentation.
To authenticate to BigQuery, set up Application Default Credentials.
For more information, see
Set up authentication for client libraries.
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;

public class UpdateDatasetDescription {

  public static void runUpdateDatasetDescription() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String newDescription = "this is the new dataset description";
    updateDatasetDescription(datasetName, newDescription);
  }

  public static void updateDatasetDescription(String datasetName, String newDescription) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      Dataset dataset = bigquery.getDataset(datasetName);
      bigquery.update(dataset.toBuilder().setDescription(newDescription).build());
      System.out.println("Dataset description updated successfully to " + newDescription);
    } catch (BigQueryException e) {
      System.out.println("Dataset description was not updated \n" + e.toString());
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the
BigQuery quickstart using
client libraries.
For more information, see the
BigQuery Node.js API
reference documentation.
To authenticate to BigQuery, set up Application Default Credentials.
For more information, see
Set up authentication for client libraries.
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function updateDatasetDescription() {
  // Updates a dataset's description.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";

  // Retrieve current dataset metadata
  const dataset = bigquery.dataset(datasetId);
  const [metadata] = await dataset.getMetadata();

  // Set new dataset description
  const description = 'New dataset description.';
  metadata.description = description;
  const [apiResponse] = await dataset.setMetadata(metadata);
  const newDescription = apiResponse.description;

  console.log(`${datasetId} description: ${newDescription}`);
}
Python
Before trying this sample, follow the Python setup instructions in the
BigQuery quickstart using
client libraries.
For more information, see the
BigQuery Python API
reference documentation.
To authenticate to BigQuery, set up Application Default Credentials.
For more information, see
Set up authentication for client libraries.
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset to fetch.
# dataset_id = 'your-project.your_dataset'

dataset = client.get_dataset(dataset_id)  # Make an API request.
dataset.description = "Updated description."
dataset = client.update_dataset(dataset, ["description"])  # Make an API request.

full_dataset_id = "{}.{}".format(dataset.project, dataset.dataset_id)
print(
    "Updated dataset '{}' with description '{}'.".format(
        full_dataset_id, dataset.description
    )
)
Update default table expiration times
You can update a dataset's default table expiration time in the following ways:
- Using the Google Cloud console.
- Using the bq command-line tool's bq update command.
- Calling the datasets.patch API method.
- Using the client libraries.
You can set a default table expiration time at the dataset level, or you can set a table's expiration time when the table is created. If you set the expiration when the table is created, the dataset's default table expiration is ignored. If you don't set a default table expiration at the dataset level, and you don't set a table expiration when the table is created, the table never expires and you must delete the table manually. When a table expires, it's deleted along with all of the data it contains.
When you update a dataset's default table expiration setting:
- If you change the value from Never to a defined expiration time, any tables that already exist in the dataset won't expire unless the expiration time was set on the table when it was created.
- If you are changing the value for the default table expiration, any tables that already exist expire according to the original table expiration setting. Any new tables created in the dataset have the new table expiration setting applied unless you specify a different table expiration on the table when it is created.
The value for default table expiration is expressed differently depending on where the value is set. Use the method that gives you the appropriate level of granularity:
- In the Google Cloud console, expiration is expressed in days.
- In the bq command-line tool, expiration is expressed in seconds.
- In the API, expiration is expressed in milliseconds.
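For example, a one-day default expiration is entered as 1 day in the console, 86400 seconds in the bq tool, and 86400000 milliseconds in the API. The following Python lines are purely illustrative of the conversion:
# One-day default table expiration at each level of granularity.
days = 1
seconds = days * 24 * 60 * 60  # 86400, the value passed to the bq tool
milliseconds = seconds * 1000  # 86400000, the value passed to the API
print(days, seconds, milliseconds)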
To update the default expiration time for a dataset:
Console
In the left pane, click Explorer.
In the Explorer pane, expand your project, click Datasets, and then click a dataset.
In the Details tab, click Edit details to edit the expiration time.
In the Edit detail dialog, in the Default table expiration section, select Enable table expiration and enter a value for Default maximum table age.
Click Save.
SQL
To update the default table expiration time, use the ALTER SCHEMA SET OPTIONS statement to set the default_table_expiration_days option.
The following example updates the default table expiration for a dataset named mydataset.
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER SCHEMA mydataset
SET OPTIONS (
    default_table_expiration_days = 3.75);
Click Run.
For more information about how to run queries, see Run an interactive query.
bq
To update the default expiration time for newly created tables in a dataset, enter the bq update command with the --default_table_expiration flag. If you are updating a dataset in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.
bq update \
--default_table_expiration integer \
project_id:dataset
Replace the following:
- integer: the default lifetime, in seconds, for newly created tables. The minimum value is 3600 seconds (one hour). The expiration time evaluates to the current UTC time plus the integer value. Specify 0 to remove the existing expiration time. Any table created in the dataset is deleted integer seconds after its creation time. This value is applied if you do not set a table expiration when the table is created.
- project_id: your project ID.
- dataset: the name of the dataset that you're updating.
Examples:
Enter the following command to set the default table expiration for new tables created in mydataset to two hours (7200 seconds) from the current time. The dataset is in your default project.
bq update --default_table_expiration 7200 mydataset
Enter the following command to set the default table expiration for new tables created in mydataset to two hours (7200 seconds) from the current time. The dataset is in myotherproject, not your default project.
bq update --default_table_expiration 7200 myotherproject:mydataset
API
Call datasets.patch and update the defaultTableExpirationMs property in the dataset resource.
The expiration is expressed in milliseconds in the API. Because the datasets.update method replaces the entire dataset resource, the datasets.patch method is preferred.
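As an illustrative sketch, the following curl call patches only the defaultTableExpirationMs property, setting a one-hour default (3600000 milliseconds); PROJECT_ID and DATASET_NAME are placeholders:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -X PATCH \
  https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/datasets/DATASET_NAME \
  -d '{"defaultTableExpirationMs": "3600000"}'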
Go
Before trying this sample, follow the Go setup instructions in the
BigQuery quickstart using
client libraries.
For more information, see the
BigQuery Go API
reference documentation.
To authenticate to BigQuery, set up Application Default Credentials.
For more information, see
Set up authentication for client libraries.
import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/bigquery"
)

// updateDatasetDefaultExpiration demonstrates setting the default expiration of a dataset
// to a specific retention period.
func updateDatasetDefaultExpiration(projectID, datasetID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	ds := client.Dataset(datasetID)
	meta, err := ds.Metadata(ctx)
	if err != nil {
		return err
	}
	update := bigquery.DatasetMetadataToUpdate{
		DefaultTableExpiration: 24 * time.Hour,
	}
	if _, err := ds.Update(ctx, update, meta.ETag); err != nil {
		return err
	}
	return nil
}
Java
Before trying this sample, follow the Java setup instructions in the
BigQuery quickstart using
client libraries.
For more information, see the
BigQuery Java API
reference documentation.
To authenticate to BigQuery, set up Application Default Credentials.
For more information, see
Set up authentication for client libraries.
Configure the default expiration time with the Dataset.Builder.setDefaultTableLifetime() method.
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;
import java.util.concurrent.TimeUnit;

public class UpdateDatasetExpiration {

  public static void runUpdateDatasetExpiration() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    updateDatasetExpiration(datasetName);
  }

  public static void updateDatasetExpiration(String datasetName) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      // Update dataset expiration to one day
      Long newExpiration = TimeUnit.MILLISECONDS.convert(1, TimeUnit.DAYS);

      Dataset dataset = bigquery.getDataset(datasetName);
      bigquery.update(dataset.toBuilder().setDefaultTableLifetime(newExpiration).build());
      System.out.println("Dataset default table lifetime updated successfully to " + newExpiration);
    } catch (BigQueryException e) {
      System.out.println("Dataset expiration was not updated \n" + e.toString());
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the
BigQuery quickstart using
client libraries.
For more information, see the
BigQuery Node.js API
reference documentation.
To authenticate to BigQuery, set up Application Default Credentials.
For more information, see
Set up authentication for client libraries.
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function updateDatasetExpiration() {
  // Updates the lifetime of all tables in the dataset, in milliseconds.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";

  // Retrieve current dataset metadata
  const dataset = bigquery.dataset(datasetId);
  const [metadata] = await dataset.getMetadata();

  // Set new dataset metadata
  const expirationTime = 24 * 60 * 60 * 1000;
  metadata.defaultTableExpirationMs = expirationTime.toString();
  const [apiResponse] = await dataset.setMetadata(metadata);
  const newExpirationTime = apiResponse.defaultTableExpirationMs;

  console.log(`${datasetId} expiration: ${newExpirationTime}`);
}
Python
Before trying this sample, follow the Python setup instructions in the
BigQuery quickstart using
client libraries.
For more information, see the
BigQuery Python API
reference documentation.
To authenticate to BigQuery, set up Application Default Credentials.
For more information, see
Set up authentication for client libraries.
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset to fetch.
# dataset_id = 'your-project.your_dataset'

dataset = client.get_dataset(dataset_id)  # Make an API request.
dataset.default_table_expiration_ms = 24 * 60 * 60 * 1000  # In milliseconds.

dataset = client.update_dataset(
    dataset, ["default_table_expiration_ms"]
)  # Make an API request.

full_dataset_id = "{}.{}".format(dataset.project, dataset.dataset_id)
print(
    "Updated dataset {} with new expiration {}".format(
        full_dataset_id, dataset.default_table_expiration_ms
    )
)
Update default partition expiration times
You can update a dataset's default partition expiration in the following ways:
- Using the bq command-line tool's bq update command.
- Calling the datasets.patch API method.
- Using the client libraries.
Setting or updating a dataset's default partition expiration isn't currently supported by the Google Cloud console.
You can set a default partition expiration time at the dataset level that affects all newly created partitioned tables, or you can set a partition expiration time for individual tables when the partitioned tables are created. If you set the default partition expiration at the dataset level, and you set the default table expiration at the dataset level, new partitioned tables will only have a partition expiration. If both options are set, the default partition expiration overrides the default table expiration.
If you set the partition expiration time when the partitioned table is created, that value overrides the dataset-level default partition expiration if it exists.
If you do not set a default partition expiration at the dataset level, and you do not set a partition expiration when the table is created, the partitions never expire and you must delete the partitions manually.
When you set a default partition expiration on a dataset, the expiration applies to all partitions in all partitioned tables created in the dataset. When you set the partition expiration on a table, the expiration applies to all partitions created in the specified table. Currently, you cannot apply different expiration times to different partitions in the same table.
When you update a dataset's default partition expiration setting:
- If you change the value from never to a defined expiration time, any partitions that already exist in partitioned tables in the dataset will not expire unless the partition expiration time was set on the table when it was created.
- If you are changing the value for the default partition expiration, any partitions in existing partitioned tables expire according to the original default partition expiration. Any new partitioned tables created in the dataset have the new default partition expiration setting applied unless you specify a different partition expiration on the table when it is created.
The value for default partition expiration is expressed differently depending on where the value is set. Use the method that gives you the appropriate level of granularity:
- In the bq command-line tool, expiration is expressed in seconds.
- In the API, expiration is expressed in milliseconds.
To update the default partition expiration time for a dataset:
Console
Updating a dataset's default partition expiration is not currently supported by the Google Cloud console.
SQL
To update the default partition expiration time, use the ALTER SCHEMA SET OPTIONS statement to set the default_partition_expiration_days option.
The following example updates the default partition expiration for a dataset named mydataset:
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER SCHEMA mydataset
SET OPTIONS (
    default_partition_expiration_days = 3.75);
Click Run.
For more information about how to run queries, see Run an interactive query.
bq
To update the default partition expiration time for a dataset, enter the bq update command with the --default_partition_expiration flag. If you are updating a dataset in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.
bq update \
--default_partition_expiration integer \
project_id:dataset
Replace the following:
- integer: the default lifetime, in seconds, for partitions in newly created partitioned tables. This flag has no minimum value. Specify 0 to remove the existing expiration time. Any partitions in newly created partitioned tables are deleted integer seconds after the partition's UTC date. This value is applied if you do not set a partition expiration on the table when it is created.
- project_id: your project ID.
- dataset: the name of the dataset that you're updating.
Examples:
Enter the following command to set the default partition expiration for new partitioned tables created in mydataset to 26 hours (93,600 seconds). The dataset is in your default project.
bq update --default_partition_expiration 93600 mydataset
Enter the following command to set the default partition expiration for new partitioned tables created in mydataset to 26 hours (93,600 seconds). The dataset is in myotherproject, not your default project.
bq update --default_partition_expiration 93600 myotherproject:mydataset
API
Call datasets.patch and update the defaultPartitionExpirationMs property in the dataset resource.
The expiration is expressed in milliseconds. Because the datasets.update method replaces the entire dataset resource, the datasets.patch method is preferred.
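The client libraries expose this property as well. As an illustrative sketch, recent versions of the Python client provide a default_partition_expiration_ms field that follows the same update pattern as the other Python samples on this page:
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset to update.
# dataset_id = 'your-project.your_dataset'

dataset = client.get_dataset(dataset_id)  # Make an API request.
dataset.default_partition_expiration_ms = 24 * 60 * 60 * 1000  # In milliseconds.
dataset = client.update_dataset(
    dataset, ["default_partition_expiration_ms"]
)  # Make an API request.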
Update rounding mode
You can update a dataset's default rounding mode by using the ALTER SCHEMA SET OPTIONS DDL statement.
The following example updates the default rounding mode for mydataset to ROUND_HALF_EVEN.
ALTER SCHEMA mydataset
SET OPTIONS (
    default_rounding_mode = "ROUND_HALF_EVEN");
This sets the default rounding mode for new tables created in the dataset. It has no impact on new columns added to existing tables. Setting the default rounding mode on a table in the dataset overrides this option.
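To apply this DDL programmatically, one option is to run the statement through a client library. A minimal sketch that reuses the Python client pattern from the samples above (the dataset name mydataset is illustrative):
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# Run the ALTER SCHEMA DDL statement as a query job and wait for it to finish.
sql = """
    ALTER SCHEMA mydataset
    SET OPTIONS (
        default_rounding_mode = "ROUND_HALF_EVEN");
"""
client.query(sql).result()  # Make an API request.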
Update time travel windows
You can update a dataset's time travel window in the following ways:
- Using the Google Cloud console.
- Using the ALTER SCHEMA SET OPTIONS statement.
- Using the bq command-line tool's bq update command.
- Calling the datasets.patch or datasets.update API method. The update method replaces the entire dataset resource, whereas the patch method only replaces fields that are provided in the submitted dataset resource.
For more information on the time travel window, see Configure the time travel window.
To update the time travel window for a dataset:
Console
In the left pane, click Explorer.
In the Explorer pane, expand your project, click Datasets, and then click a dataset.
In the Details tab, click Edit details.
Expand Advanced options, then select the Time travel window to use.
Click Save.
SQL
Use the ALTER SCHEMA SET OPTIONS statement with the max_time_travel_hours option to specify the time travel window when altering a dataset. The max_time_travel_hours value must be an integer expressed in multiples of 24 (48, 72, 96, 120, 144, 168) between 48 (2 days) and 168 (7 days).
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER SCHEMA DATASET_NAME
SET OPTIONS (
    max_time_travel_hours = HOURS);
Replace the following:
- DATASET_NAME: the name of the dataset that you're updating
- HOURS: the time travel window's duration in hours
Click Run.
For more information about how to run queries, see Run an interactive query.
bq
Use the bq update command with the --max_time_travel_hours flag to specify the time travel window when altering a dataset. The --max_time_travel_hours value must be an integer expressed in multiples of 24 (48, 72, 96, 120, 144, 168) between 48 (2 days) and 168 (7 days).
bq update \
--dataset=true --max_time_travel_hours=HOURS \
PROJECT_ID:DATASET_NAME
Replace the following:
- PROJECT_ID: your project ID
- DATASET_NAME: the name of the dataset that you're updating
- HOURS: the time travel window's duration in hours
API
Call the datasets.patch or datasets.update method with a defined dataset resource in which you have specified a value for the maxTimeTravelHours field. The maxTimeTravelHours value must be an integer expressed in multiples of 24 (48, 72, 96, 120, 144, 168) between 48 (2 days) and 168 (7 days).
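As an illustrative sketch, recent versions of the Python client expose this field as max_time_travel_hours, following the same update pattern as the other Python samples on this page:
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset to update.
# dataset_id = 'your-project.your_dataset'

dataset = client.get_dataset(dataset_id)  # Make an API request.
dataset.max_time_travel_hours = 48  # Must be a multiple of 24 between 48 and 168.
dataset = client.update_dataset(
    dataset, ["max_time_travel_hours"]
)  # Make an API request.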
Update storage billing models
You can alter the storage billing model for a dataset. Set the storage_billing_model value to PHYSICAL to use physical bytes when calculating storage charges, or to LOGICAL to use logical bytes. LOGICAL is the default.
When you change a dataset's billing model, it takes 24 hours for the change to take effect.
Once you change a dataset's storage billing model, you must wait 14 days before you can change the storage billing model again.
Console
In the left pane, click Explorer.
In the Explorer pane, expand your project, click Datasets, and then click a dataset.
In the Details tab, click Edit details.
Expand Advanced options.
In the Storage billing model menu, select Physical to use physical storage billing, or select Logical to use logical storage billing. You can also select Storage_billing_model_unspecified.
Click Save.
SQL
To update the billing model for a dataset, use the ALTER SCHEMA SET OPTIONS statement and set the storage_billing_model option:
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER SCHEMA DATASET_NAME
SET OPTIONS (
    storage_billing_model = 'BILLING_MODEL');
Replace the following:
- DATASET_NAME: the name of the dataset that you're changing
- BILLING_MODEL: the type of storage you want to use, either LOGICAL or PHYSICAL
Click Run.
For more information about how to run queries, see Run an interactive query.
To update the storage billing model for all datasets in a project, use the following SQL query for every region where the datasets are located:
FOR record IN
  (SELECT CONCAT(catalog_name, '.', schema_name) AS dataset_path
   FROM PROJECT_ID.`region-REGION`.INFORMATION_SCHEMA.SCHEMATA)
DO
  EXECUTE IMMEDIATE
    "ALTER SCHEMA `" || record.dataset_path || "` SET OPTIONS(storage_billing_model = 'BILLING_MODEL')";
END FOR;
Replace the following:
- PROJECT_ID: your project ID
- REGION: a region qualifier
- BILLING_MODEL: the type of storage you want to use, either LOGICAL or PHYSICAL
bq
To update the billing model for a dataset, use the bq update command and set the --storage_billing_model flag:
bq update -d --storage_billing_model=BILLING_MODEL PROJECT_ID:DATASET_NAME
Replace the following:
- PROJECT_ID: your project ID
- DATASET_NAME: the name of the dataset that you're updating
- BILLING_MODEL: the type of storage you want to use, either LOGICAL or PHYSICAL
API
Call the datasets.update method with a defined dataset resource where the storageBillingModel field is set.
The following example shows how to call datasets.update using curl:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -L -X PUT \
  https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/datasets/DATASET_NAME \
  -d '{"datasetReference": {"projectId": "PROJECT_ID", "datasetId": "DATASET_NAME"}, "storageBillingModel": "BILLING_MODEL"}'
Replace the following:
- PROJECT_ID: your project ID
- DATASET_NAME: the name of the dataset that you're updating
- BILLING_MODEL: the type of storage you want to use, either LOGICAL or PHYSICAL
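Recent versions of the Python client also expose this field as storage_billing_model; the following is an illustrative sketch in the style of the other Python samples on this page, not a definitive implementation:
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset to update.
# dataset_id = 'your-project.your_dataset'

dataset = client.get_dataset(dataset_id)  # Make an API request.
dataset.storage_billing_model = "PHYSICAL"  # Or "LOGICAL", the default.
dataset = client.update_dataset(
    dataset, ["storage_billing_model"]
)  # Make an API request.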
Update access controls
To control access to datasets in BigQuery, see Controlling access to datasets. For information about data encryption, see Encryption at rest.
What's next
- For more information about creating datasets, see Creating datasets.
- For more information about managing datasets, see Managing datasets.