Downloading, preprocessing, and uploading the COCO dataset

COCO is a large-scale object detection, segmentation, and captioning dataset. Machine learning models that use the COCO dataset include:

  • Mask R-CNN
  • RetinaNet
  • ShapeMask

Before you can train a model on a Cloud TPU, you must prepare the training data.

This document describes how to prepare the COCO dataset for models that run on Cloud TPU. The COCO dataset can only be prepared after you have created a Compute Engine VM. The script used to prepare the data, download_and_preprocess_coco.sh, is installed on the VM and must be run on the VM.

After preparing the data by running the download_and_preprocess_coco.sh script, you can bring up the Cloud TPU and run the training.

Downloading, preprocessing, and uploading the COCO dataset to a Cloud Storage bucket takes approximately 2 hours.

  1. In your Cloud Shell, configure gcloud with your project ID.

    export PROJECT_ID=project-id
    gcloud config set project ${PROJECT_ID}
  2. In your Cloud Shell, create a Cloud Storage bucket using the following command:

    gcloud storage buckets create gs://bucket-name --project=${PROJECT_ID} --location=us-central2
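
    Optionally, you can confirm that the bucket exists before moving on. This check is not part of the original procedure; it only describes the bucket you just created:

    gcloud storage buckets describe gs://bucket-name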
  3. Create a Compute Engine VM to download and preprocess the dataset. For more information, see Create and start a Compute Engine instance.

    $ gcloud compute instances create vm-name \
    --zone=us-central2-b \
    --image-family=ubuntu-2204-lts \
    --image-project=ubuntu-os-cloud \
    --machine-type=n1-standard-16 \
    --boot-disk-size=300GB \
    --scopes=https://www.googleapis.com/auth/cloud-platform
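
    Optionally, before connecting, you can verify that the VM started successfully. This check is not part of the original procedure; it should print RUNNING once the instance is up:

    $ gcloud compute instances describe vm-name \
    --zone=us-central2-b \
    --format="value(status)"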
  4. Connect to the Compute Engine VM using SSH:

    $ gcloud compute ssh vm-name --zone=us-central2-b

    When you connect to the VM, your shell prompt changes from username@projectname to username@vm-name.

  5. Set up two environment variables: one for the storage bucket you created earlier (STORAGE_BUCKET) and one for the directory in that bucket that holds the training data (DATA_DIR).

    (vm)$ export STORAGE_BUCKET=gs://bucket-name
    (vm)$ export DATA_DIR=${STORAGE_BUCKET}/coco
  6. Install the packages needed to preprocess the data.

    (vm)$ sudo apt-get update && \
    sudo apt-get install python3-pip && \
    sudo apt-get install -y python3-tk && \
    pip3 install --user Cython matplotlib opencv-python-headless pyyaml Pillow numpy absl-py tensorflow && \
    pip3 install --user "git+https://github.com/cocodataset/cocoapi#egg=pycocotools&subdirectory=PythonAPI" && \
    pip3 install protobuf==3.19.0 tensorflow==2.11.0 numpy==1.26.4
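
    Optionally, you can verify that the pinned packages import correctly before running the preprocessing script. This spot check is not part of the original procedure:

    (vm)$ python3 -c "import tensorflow as tf, pycocotools; print(tf.__version__)"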
  7. Run the download_and_preprocess_coco.sh script to convert the COCO dataset into a set of TFRecord files (*.tfrecord) that the training application expects.

    (vm)$ git clone https://github.com/tensorflow/tpu.git
    (vm)$ sudo -E bash tpu/tools/datasets/download_and_preprocess_coco.sh ./data/dir/coco

    This installs the required libraries and then runs the preprocessing script. It outputs *.tfrecord files in your local data directory. The COCO download and conversion script takes approximately one hour to complete.
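
    If you want to spot-check the conversion before uploading, you can count the generated shards and the records in one of them. This check is a rough sketch and is not part of the original procedure; shard filenames vary by script version, so the glob below simply picks the first match in the output directory:

    (vm)$ ls ./data/dir/coco/*.tfrecord | wc -l
    (vm)$ python3 -c "import glob, tensorflow as tf; shard = sorted(glob.glob('./data/dir/coco/*.tfrecord'))[0]; print(shard, sum(1 for _ in tf.data.TFRecordDataset(shard)))"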

  8. Copy the data to your Cloud Storage bucket.

    After you convert the data into the TFRecord format, copy the data from local storage to your Cloud Storage bucket using the gcloud CLI. You must also copy the annotation files. These files help validate the model's performance.

    (vm)$ gcloud storage cp ./data/dir/coco/*.tfrecord ${DATA_DIR}
    (vm)$ gcloud storage cp ./data/dir/coco/raw-data/annotations/*.json ${DATA_DIR}
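
    Optionally, list the bucket contents to confirm that the TFRecord and annotation files were uploaded. This check is not part of the original procedure:

    (vm)$ gcloud storage ls ${DATA_DIR}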

Clean up

Follow these steps to clean up your Compute Engine and Cloud Storage resources.

  1. Disconnect from the Compute Engine VM:

    (vm)$ exit
  2. Delete your Compute Engine VM:

    $ gcloud compute instances delete vm-name \
    --zone=us-central2-b
  3. Delete your Cloud Storage bucket and its contents:

    $ gcloud storage rm -r gs://bucket-name
    $ gcloud storage buckets delete gs://bucket-name
