Detect explicit content (SafeSearch)

SafeSearch detection identifies explicit content, such as adult or violent content, within an image. The feature uses five categories (adult, spoof, medical, violence, and racy) and returns the likelihood that each is present in a given image. See the SafeSearchAnnotation page for details on these fields.

SafeSearch detection requests

Set up your Google Cloud project and authentication

If you have not created a Google Cloud project, follow these steps to create one and set up authentication.

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get 300ドル in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.

  4. Enable the Vision API.

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

    Enable the API

  5. Install the Google Cloud CLI.

  6. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

  7. To initialize the gcloud CLI, run the following command:

    gcloud init

Explicit content detection on a local image

You can use the Vision API to perform feature detection on a local image file.

For REST requests, send the contents of the image file as a base64 encoded string in the body of your request.

For gcloud and client library requests, specify the path to a local image in your request.
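
For example, the following Python sketch (not part of the official samples) shows one way to produce the BASE64_ENCODED_IMAGE value from a local file; the path ./image.jpg is only an assumed placeholder:

# Minimal sketch for base64-encoding a local image for a REST request body.
# The file path "./image.jpg" is an illustrative assumption.
import base64

def encode_image(path: str) -> str:
    """Returns the base64-encoded contents of the image file as an ASCII string."""
    with open(path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

print(encode_image("./image.jpg"))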

REST

Before using any of the request data, make the following replacements:

  • BASE64_ENCODED_IMAGE: The base64 representation (ASCII string) of your binary image data. This string should look similar to the following string:
    • /9j/4QAYRXhpZgAA...9tAVx/zDQDlGxn//2Q==
    Visit the base64 encode topic for more information.
  • PROJECT_ID: Your Google Cloud project ID.

HTTP method and URL:

POST https://vision.googleapis.com/v1/images:annotate

Request JSON body:

{
  "requests": [
    {
      "image": {
        "content": "BASE64_ENCODED_IMAGE"
      },
      "features": [
        {
          "type": "SAFE_SEARCH_DETECTION"
        }
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://vision.googleapis.com/v1/images:annotate"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "responses": [
    {
      "safeSearchAnnotation": {
        "adult": "UNLIKELY",
        "spoof": "VERY_UNLIKELY",
        "medical": "VERY_UNLIKELY",
        "violence": "LIKELY",
        "racy": "POSSIBLE"
      }
    }
  ]
}
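
Each field in safeSearchAnnotation is a likelihood value rather than a boolean, so applications typically apply their own threshold when deciding whether to act on an image. The following Python sketch (an illustrative assumption, not part of the official samples) flags an image when the adult, violence, or racy category reaches a chosen likelihood:

# A minimal sketch, assuming the JSON response above has already been parsed
# into a dict named `response`. The chosen categories and threshold are
# illustrative assumptions, not API requirements.
LIKELIHOOD_ORDER = [
    "UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY",
]

def should_flag(safe_search: dict, threshold: str = "LIKELY") -> bool:
    """Returns True if adult, violence, or racy meets or exceeds the threshold."""
    limit = LIKELIHOOD_ORDER.index(threshold)
    return any(
        LIKELIHOOD_ORDER.index(safe_search.get(category, "UNKNOWN")) >= limit
        for category in ("adult", "violence", "racy")
    )

annotation = response["responses"][0]["safeSearchAnnotation"]
print(should_flag(annotation))  # True for the example response (violence is LIKELY)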

Go

Before trying this sample, follow the Go setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Go API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


// detectSafeSearch detects SafeSearch properties from the Vision API for an
// image at the given file path.
func detectSafeSearch(w io.Writer, file string) error {
    ctx := context.Background()

    client, err := vision.NewImageAnnotatorClient(ctx)
    if err != nil {
        return err
    }

    f, err := os.Open(file)
    if err != nil {
        return err
    }
    defer f.Close()

    image, err := vision.NewImageFromReader(f)
    if err != nil {
        return err
    }
    props, err := client.DetectSafeSearch(ctx, image, nil)
    if err != nil {
        return err
    }

    fmt.Fprintln(w, "Safe Search properties:")
    fmt.Fprintln(w, "Adult:", props.Adult)
    fmt.Fprintln(w, "Medical:", props.Medical)
    fmt.Fprintln(w, "Racy:", props.Racy)
    fmt.Fprintln(w, "Spoofed:", props.Spoof)
    fmt.Fprintln(w, "Violence:", props.Violence)

    return nil
}

Java

Before trying this sample, follow the Java setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Java API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.SafeSearchAnnotation;
import com.google.protobuf.ByteString;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DetectSafeSearch {

  public static void detectSafeSearch() throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String filePath = "path/to/your/image/file.jpg";
    detectSafeSearch(filePath);
  }

  // Detects whether the specified image has features you would want to moderate.
  public static void detectSafeSearch(String filePath) throws IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();

    ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));

    Image img = Image.newBuilder().setContent(imgBytes).build();
    Feature feat = Feature.newBuilder().setType(Feature.Type.SAFE_SEARCH_DETECTION).build();
    AnnotateImageRequest request =
        AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
      List<AnnotateImageResponse> responses = response.getResponsesList();

      for (AnnotateImageResponse res : responses) {
        if (res.hasError()) {
          System.out.format("Error: %s%n", res.getError().getMessage());
          return;
        }

        // For full list of available annotations, see http://g.co/cloud/vision/docs
        SafeSearchAnnotation annotation = res.getSafeSearchAnnotation();
        System.out.format(
            "adult: %s%nmedical: %s%nspoofed: %s%nviolence: %s%nracy: %s%n",
            annotation.getAdult(),
            annotation.getMedical(),
            annotation.getSpoof(),
            annotation.getViolence(),
            annotation.getRacy());
      }
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Node.js API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

/**
 * TODO(developer): Uncomment the following line before running the sample.
 */
// const fileName = 'Local image file, e.g. /path/to/image.png';

// Performs safe search detection on the local file
const [result] = await client.safeSearchDetection(fileName);
const detections = result.safeSearchAnnotation;
console.log('Safe search:');
console.log(`Adult: ${detections.adult}`);
console.log(`Medical: ${detections.medical}`);
console.log(`Spoof: ${detections.spoof}`);
console.log(`Violence: ${detections.violence}`);
console.log(`Racy: ${detections.racy}`);

Python

Before trying this sample, follow the Python setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Python API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

def detect_safe_search(path):
    """Detects unsafe features in the file."""
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open(path, "rb") as image_file:
        content = image_file.read()

    image = vision.Image(content=content)

    response = client.safe_search_detection(image=image)
    safe = response.safe_search_annotation

    # Names of likelihood from google.cloud.vision.enums
    likelihood_name = (
        "UNKNOWN",
        "VERY_UNLIKELY",
        "UNLIKELY",
        "POSSIBLE",
        "LIKELY",
        "VERY_LIKELY",
    )
    print("Safe search:")

    print(f"adult: {likelihood_name[safe.adult]}")
    print(f"medical: {likelihood_name[safe.medical]}")
    print(f"spoofed: {likelihood_name[safe.spoof]}")
    print(f"violence: {likelihood_name[safe.violence]}")
    print(f"racy: {likelihood_name[safe.racy]}")

    if response.error.message:
        raise Exception(
            "{}\nFor more info on error messages, check: "
            "https://cloud.google.com/apis/design/errors".format(response.error.message)
        )

Explicit content detection on a remote image

You can use the Vision API to perform feature detection on a remote image file that is located in Cloud Storage or on the Web. To send a remote file request, specify the file's Web URL or Cloud Storage URI in the request body.

REST

Before using any of the request data, make the following replacements:

  • CLOUD_STORAGE_IMAGE_URI: the path to a valid image file in a Cloud Storage bucket. You must have at least read access to the file. Example:
    • gs://my-storage-bucket/img/image1.png
  • PROJECT_ID: Your Google Cloud project ID.

HTTP method and URL:

POST https://vision.googleapis.com/v1/images:annotate

Request JSON body:

{
  "requests": [
    {
      "image": {
        "source": {
          "imageUri": "CLOUD_STORAGE_IMAGE_URI"
        }
      },
      "features": [
        {
          "type": "SAFE_SEARCH_DETECTION"
        }
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://vision.googleapis.com/v1/images:annotate"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "responses": [
    {
      "safeSearchAnnotation": {
        "adult": "UNLIKELY",
        "spoof": "VERY_UNLIKELY",
        "medical": "VERY_UNLIKELY",
        "violence": "LIKELY",
        "racy": "POSSIBLE"
      }
    }
  ]
}

Go

Before trying this sample, follow the Go setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Go API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


// detectSafeSearchURI detects SafeSearch properties from the Vision API for an
// image at the given file URI.
func detectSafeSearchURI(w io.Writer, file string) error {
    ctx := context.Background()

    client, err := vision.NewImageAnnotatorClient(ctx)
    if err != nil {
        return err
    }

    image := vision.NewImageFromURI(file)

    props, err := client.DetectSafeSearch(ctx, image, nil)
    if err != nil {
        return err
    }

    fmt.Fprintln(w, "Safe Search properties:")
    fmt.Fprintln(w, "Adult:", props.Adult)
    fmt.Fprintln(w, "Medical:", props.Medical)
    fmt.Fprintln(w, "Racy:", props.Racy)
    fmt.Fprintln(w, "Spoofed:", props.Spoof)
    fmt.Fprintln(w, "Violence:", props.Violence)

    return nil
}

Java

Before trying this sample, follow the Java setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Java API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Feature.Type;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.ImageSource;
import com.google.cloud.vision.v1.SafeSearchAnnotation;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DetectSafeSearchGcs {

  public static void detectSafeSearchGcs() throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String filePath = "gs://your-gcs-bucket/path/to/image/file.jpg";
    detectSafeSearchGcs(filePath);
  }

  // Detects whether the specified image on Google Cloud Storage has features you would want to
  // moderate.
  public static void detectSafeSearchGcs(String gcsPath) throws IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();

    ImageSource imgSource = ImageSource.newBuilder().setGcsImageUri(gcsPath).build();
    Image img = Image.newBuilder().setSource(imgSource).build();
    Feature feat = Feature.newBuilder().setType(Type.SAFE_SEARCH_DETECTION).build();
    AnnotateImageRequest request =
        AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
      List<AnnotateImageResponse> responses = response.getResponsesList();

      for (AnnotateImageResponse res : responses) {
        if (res.hasError()) {
          System.out.format("Error: %s%n", res.getError().getMessage());
          return;
        }

        // For full list of available annotations, see http://g.co/cloud/vision/docs
        SafeSearchAnnotation annotation = res.getSafeSearchAnnotation();
        System.out.format(
            "adult: %s%nmedical: %s%nspoofed: %s%nviolence: %s%nracy: %s%n",
            annotation.getAdult(),
            annotation.getMedical(),
            annotation.getSpoof(),
            annotation.getViolence(),
            annotation.getRacy());
      }
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Node.js API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

// Imports the Google Cloud client libraries
const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const bucketName = 'Bucket where the file resides, e.g. my-bucket';
// const fileName = 'Path to file within bucket, e.g. path/to/image.png';

// Performs safe search property detection on the remote file
const [result] = await client.safeSearchDetection(
  `gs://${bucketName}/${fileName}`
);
const detections = result.safeSearchAnnotation;
console.log(`Adult: ${detections.adult}`);
console.log(`Spoof: ${detections.spoof}`);
console.log(`Medical: ${detections.medical}`);
console.log(`Violence: ${detections.violence}`);

Python

Before trying this sample, follow the Python setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Python API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

def detect_safe_search_uri(uri):
    """Detects unsafe features in the file located in Google Cloud Storage or
    on the Web."""
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    image = vision.Image()
    image.source.image_uri = uri

    response = client.safe_search_detection(image=image)
    safe = response.safe_search_annotation

    # Names of likelihood from google.cloud.vision.enums
    likelihood_name = (
        "UNKNOWN",
        "VERY_UNLIKELY",
        "UNLIKELY",
        "POSSIBLE",
        "LIKELY",
        "VERY_LIKELY",
    )
    print("Safe search:")

    print(f"adult: {likelihood_name[safe.adult]}")
    print(f"medical: {likelihood_name[safe.medical]}")
    print(f"spoofed: {likelihood_name[safe.spoof]}")
    print(f"violence: {likelihood_name[safe.violence]}")
    print(f"racy: {likelihood_name[safe.racy]}")

    if response.error.message:
        raise Exception(
            "{}\nFor more info on error messages, check: "
            "https://cloud.google.com/apis/design/errors".format(response.error.message)
        )

gcloud

To perform SafeSearch detection, use the gcloud ml vision detect-safe-search command as shown in the following example:

gcloud ml vision detect-safe-search gs://my_bucket/input_file
