Gemini API in Vertex AI quickstart
This quickstart shows you how to install the Google Gen AI SDK for your language of choice and then make your first API request. The samples vary slightly depending on whether you authenticate to Vertex AI with an API key or with application default credentials (ADC); this page walks through the ADC flow.
Before you begin
If you haven't configured ADC yet, follow these instructions:
Configure your project
Select a project, enable billing, enable the Vertex AI API, and install the gcloud CLI:

1. Sign in to your Google Account. If you don't already have one, sign up for a new account.

2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

   Roles required to select or create a project:

   - Select a project: Selecting a project doesn't require a specific IAM role; you can select any project that you've been granted a role on.
   - Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.

3. Verify that billing is enabled for your Google Cloud project.

4. Enable the Vertex AI API.

   Roles required to enable APIs: To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

5. Install the Google Cloud CLI.

6. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

7. To initialize the gcloud CLI, run the following command:

   gcloud init
Create local authentication credentials
Create local authentication credentials for your user account:
gcloud auth application-default login
If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.
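To confirm that ADC is set up, you can print an access token; this is an optional sanity check rather than part of the official flow:

gcloud auth application-default print-access-token

If a token is printed, your local credentials are in place.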
Required roles
To get the permissions that you need to use the Gemini API in Vertex AI, ask your administrator to grant you the Vertex AI User (roles/aiplatform.user) IAM role on your project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
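For example, an administrator could grant the role with the gcloud CLI. This is a sketch, where GOOGLE_CLOUD_PROJECT_ID and USER_EMAIL are placeholders for your own values:

gcloud projects add-iam-policy-binding GOOGLE_CLOUD_PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/aiplatform.user"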
Install the SDK and set up your environment
On your local machine, install the SDK for your programming language by following the instructions in the corresponding section below.
Python Gen AI SDK
Install and update the Gen AI SDK for Python by running this command.
pip install --upgrade google-genai
Set environment variables:
# Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
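If you prefer not to rely on environment variables, the Python client accepts the same settings as constructor arguments. A minimal sketch, equivalent to the exports above:

from google import genai

# Equivalent to setting GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_CLOUD_PROJECT,
# and GOOGLE_CLOUD_LOCATION in the environment.
client = genai.Client(
    vertexai=True,
    project="GOOGLE_CLOUD_PROJECT_ID",  # replace with your project ID
    location="global",
)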
Go Gen AI SDK
Install and update the Gen AI SDK for Go by running this command.
go get google.golang.org/genai
Set environment variables:
# Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Node.js Gen AI SDK
Install and update the Gen AI SDK for Node.js by running this command.
npm install @google/genai
Set environment variables:
# Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
Java Gen AI SDK
Install and update the Gen AI SDK for Java by adding it as a dependency to your build.
Maven
Add the following to your pom.xml:
<dependencies>
<dependency>
<groupId>com.google.genai</groupId>
<artifactId>google-genai</artifactId>
<version>0.7.0</version>
</dependency>
</dependencies>
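If your build uses Gradle instead of Maven, the same coordinates should translate directly. A sketch, assuming the artifact is resolved from Maven Central:

dependencies {
    implementation("com.google.genai:google-genai:0.7.0")
}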
Set environment variables:
# Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
REST
Set environment variables:
GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
GOOGLE_CLOUD_LOCATION=global
API_ENDPOINT=YOUR_API_ENDPOINT
MODEL_ID="gemini-2.5-flash"
GENERATE_CONTENT_API="generateContent"
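With GOOGLE_CLOUD_LOCATION=global, YOUR_API_ENDPOINT is typically the global Vertex AI endpoint, while regional locations use a region-prefixed host; verify the endpoint for your location before relying on it:

# Global endpoint (a regional example would be us-central1-aiplatform.googleapis.com)
API_ENDPOINT=aiplatform.googleapis.com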
Make your first request
Use the generateContent method to send a request to the Gemini API in Vertex AI:
Python
from google import genai
from google.genai.types import HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="How does AI work?",
)
print(response.text)
# Example response:
# Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
#
# Here's a simplified overview:
# ...

Go
import (
    "context"
    "fmt"
    "io"

    "google.golang.org/genai"
)

// generateWithText shows how to generate text using a text prompt.
func generateWithText(w io.Writer) error {
    ctx := context.Background()

    client, err := genai.NewClient(ctx, &genai.ClientConfig{
        HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    })
    if err != nil {
        return fmt.Errorf("failed to create genai client: %w", err)
    }

    resp, err := client.Models.GenerateContent(ctx,
        "gemini-2.5-flash",
        genai.Text("How does AI work?"),
        nil,
    )
    if err != nil {
        return fmt.Errorf("failed to generate content: %w", err)
    }

    respText := resp.Text()

    fmt.Fprintln(w, respText)
    // Example response:
    // That's a great question! Understanding how AI works can feel like ...
    // ...
    // **1. The Foundation: Data and Algorithms**
    // ...
    return nil
}
Node.js
const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const client = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const response = await client.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: 'How does AI work?',
  });

  console.log(response.text);

  return response.text;
}

Java
import com.google.genai.Client;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;

public class TextGenerationWithText {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash";
    generateContent(modelId);
  }

  // Generates text with text input
  public static String generateContent(String modelId) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {
      GenerateContentResponse response =
          client.models.generateContent(modelId, "How does AI work?", null);
      System.out.print(response.text());
      // Example response:
      // Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
      //
      // Here's a simplified overview:
      // ...
      return response.text();
    }
  }
}

REST
To send this prompt request, run the curl command from the command line or include the REST call in your application.
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${API_ENDPOINT}/v1/projects/${GOOGLE_CLOUD_PROJECT}/locations/${GOOGLE_CLOUD_LOCATION}/publishers/google/models/${MODEL_ID}:${GENERATE_CONTENT_API}" \
  -d $'{
    "contents": {
      "role": "user",
      "parts": {
        "text": "Explain how AI works in a few words"
      }
    }
  }'
The model returns a response. Note that the response is generated in sections, with each section evaluated separately for safety.
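A successful call returns JSON along these lines. This is a trimmed, illustrative sketch of the generateContent response shape; the generated text, token counts, and any safety ratings will differ per request:

{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [{"text": "AI learns patterns from data to make predictions."}]
      },
      "finishReason": "STOP"
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 8,
    "candidatesTokenCount": 10,
    "totalTokenCount": 18
  }
}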
Generate images
Gemini can generate and process images conversationally. You can prompt Gemini with text, images, or a combination of both to achieve various image-related tasks, such as image generation and editing. The following code demonstrates how to generate an image based on a descriptive prompt:
You must include responseModalities: ["TEXT", "IMAGE"] in your
configuration. Image-only output is not supported with these models.
Python
from google import genai
from google.genai.types import GenerateContentConfig, Modality
from PIL import Image
from io import BytesIO

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=("Generate an image of the Eiffel tower with fireworks in the background."),
    config=GenerateContentConfig(
        response_modalities=[Modality.TEXT, Modality.IMAGE],
        candidate_count=1,
        safety_settings=[
            {
                "method": "PROBABILITY",
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "threshold": "BLOCK_MEDIUM_AND_ABOVE",
            },
        ],
    ),
)
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
    elif part.inline_data:
        image = Image.open(BytesIO(part.inline_data.data))
        image.save("output_folder/example-image-eiffel-tower.png")
# Example response:
# I will generate an image of the Eiffel Tower at night, with a vibrant display of
# colorful fireworks exploding in the dark sky behind it. The tower will be
# illuminated, standing tall as the focal point of the scene, with the bursts of
# light from the fireworks creating a festive atmosphere.

Node.js
const fs = require('fs');
const {GoogleGenAI, Modality} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION =
  process.env.GOOGLE_CLOUD_LOCATION || 'us-central1';

async function generateImage(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const client = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const response = await client.models.generateContentStream({
    model: 'gemini-2.5-flash-image',
    contents:
      'Generate an image of the Eiffel tower with fireworks in the background.',
    config: {
      responseModalities: [Modality.TEXT, Modality.IMAGE],
    },
  });

  const generatedFileNames = [];
  let imageIndex = 0;
  for await (const chunk of response) {
    const text = chunk.text;
    const data = chunk.data;
    if (text) {
      console.debug(text);
    } else if (data) {
      const outputDir = 'output-folder';
      if (!fs.existsSync(outputDir)) {
        fs.mkdirSync(outputDir, {recursive: true});
      }
      const fileName = `${outputDir}/generate_content_streaming_image_${imageIndex++}.png`;
      console.debug(`Writing response image to file: ${fileName}.`);
      try {
        fs.writeFileSync(fileName, data);
        generatedFileNames.push(fileName);
      } catch (error) {
        console.error(`Failed to write image file ${fileName}:`, error);
      }
    }
  }

  // Example response:
  // I will generate an image of the Eiffel Tower at night, with a vibrant display of
  // colorful fireworks exploding in the dark sky behind it. The tower will be
  // illuminated, standing tall as the focal point of the scene, with the bursts of
  // light from the fireworks creating a festive atmosphere.
  return generatedFileNames;
}

Java
import com.google.genai.Client;
import com.google.genai.types.Blob;
import com.google.genai.types.Candidate;
import com.google.genai.types.Content;
import com.google.genai.types.GenerateContentConfig;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.Part;
import com.google.genai.types.SafetySetting;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.imageio.ImageIO;

public class ImageGenMmFlashWithText {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash-image";
    String outputFile = "resources/output/example-image-eiffel-tower.png";
    generateContent(modelId, outputFile);
  }

  // Generates an image with text input
  public static void generateContent(String modelId, String outputFile) throws IOException {
    // Client Initialization. Once created, it can be reused for multiple requests.
    try (Client client = Client.builder().location("global").vertexAI(true).build()) {

      GenerateContentConfig contentConfig =
          GenerateContentConfig.builder()
              .responseModalities("TEXT", "IMAGE")
              .candidateCount(1)
              .safetySettings(
                  SafetySetting.builder()
                      .method("PROBABILITY")
                      .category("HARM_CATEGORY_DANGEROUS_CONTENT")
                      .threshold("BLOCK_MEDIUM_AND_ABOVE")
                      .build())
              .build();

      GenerateContentResponse response =
          client.models.generateContent(
              modelId,
              "Generate an image of the Eiffel tower with fireworks in the background.",
              contentConfig);

      // Get parts of the response
      List<Part> parts =
          response
              .candidates()
              .flatMap(candidates -> candidates.stream().findFirst())
              .flatMap(Candidate::content)
              .flatMap(Content::parts)
              .orElse(new ArrayList<>());

      // For each part print text if present, otherwise read image data if present and
      // write it to the output file
      for (Part part : parts) {
        if (part.text().isPresent()) {
          System.out.println(part.text().get());
        } else if (part.inlineData().flatMap(Blob::data).isPresent()) {
          BufferedImage image =
              ImageIO.read(new ByteArrayInputStream(part.inlineData().flatMap(Blob::data).get()));
          ImageIO.write(image, "png", new File(outputFile));
        }
      }
      System.out.println("Content written to: " + outputFile);
      // Example response:
      // Here is the Eiffel Tower with fireworks in the background...
      //
      // Content written to: resources/output/example-image-eiffel-tower.png
    }
  }
}

Image understanding
Gemini can understand images as well. The following code passes a sample image stored in a public Cloud Storage bucket to the model and asks it to describe what the image shows:
Python
from google import genai
from google.genai.types import HttpOptions, Part

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        "What is shown in this image?",
        Part.from_uri(
            file_uri="gs://cloud-samples-data/generative-ai/image/scones.jpg",
            mime_type="image/jpeg",
        ),
    ],
)
print(response.text)
# Example response:
# The image shows a flat lay of blueberry scones arranged on parchment paper. There are ...

Go
import (
    "context"
    "fmt"
    "io"

    genai "google.golang.org/genai"
)

// generateWithTextImage shows how to generate text using both text and image input
func generateWithTextImage(w io.Writer) error {
    ctx := context.Background()

    client, err := genai.NewClient(ctx, &genai.ClientConfig{
        HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    })
    if err != nil {
        return fmt.Errorf("failed to create genai client: %w", err)
    }

    modelName := "gemini-2.5-flash"
    contents := []*genai.Content{
        {Parts: []*genai.Part{
            {Text: "What is shown in this image?"},
            {FileData: &genai.FileData{
                // Image source: https://storage.googleapis.com/cloud-samples-data/generative-ai/image/scones.jpg
                FileURI:  "gs://cloud-samples-data/generative-ai/image/scones.jpg",
                MIMEType: "image/jpeg",
            }},
        },
            Role: "user"},
    }

    resp, err := client.Models.GenerateContent(ctx, modelName, contents, nil)
    if err != nil {
        return fmt.Errorf("failed to generate content: %w", err)
    }

    respText := resp.Text()

    fmt.Fprintln(w, respText)
    // Example response:
    // The image shows an overhead shot of a rustic, artistic arrangement on a surface that ...
    return nil
}
Node.js
const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const client = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const image = {
    fileData: {
      fileUri: 'gs://cloud-samples-data/generative-ai/image/scones.jpg',
      mimeType: 'image/jpeg',
    },
  };

  const response = await client.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: [image, 'What is shown in this image?'],
  });

  console.log(response.text);

  return response.text;
}

Java
import com.google.genai.Client;
import com.google.genai.types.Content;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;
import com.google.genai.types.Part;

public class TextGenerationWithTextAndImage {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash";
    generateContent(modelId);
  }

  // Generates text with text and image input
  public static String generateContent(String modelId) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {
      GenerateContentResponse response =
          client.models.generateContent(
              modelId,
              Content.fromParts(
                  Part.fromText("What is shown in this image?"),
                  Part.fromUri(
                      "gs://cloud-samples-data/generative-ai/image/scones.jpg", "image/jpeg")),
              null);
      System.out.print(response.text());
      // Example response:
      // The image shows a flat lay of blueberry scones arranged on parchment paper. There are ...
      return response.text();
    }
  }
}

Code execution
The Gemini API in Vertex AI code execution feature enables the model to generate and run Python code and learn iteratively from the results until it arrives at a final output. Vertex AI provides code execution as a tool, similar to function calling. You can use this code execution capability to build applications that benefit from code-based reasoning and that produce text output. For example:
Python
from google import genai
from google.genai.types import (
    HttpOptions,
    Tool,
    ToolCodeExecution,
    GenerateContentConfig,
)

client = genai.Client(http_options=HttpOptions(api_version="v1"))
model_id = "gemini-2.5-flash"

code_execution_tool = Tool(code_execution=ToolCodeExecution())
response = client.models.generate_content(
    model=model_id,
    contents="Calculate 20th fibonacci number. Then find the nearest palindrome to it.",
    config=GenerateContentConfig(
        tools=[code_execution_tool],
        temperature=0,
    ),
)
print("# Code:")
print(response.executable_code)
print("# Outcome:")
print(response.code_execution_result)
# Example response:
# # Code:
# def fibonacci(n):
#     if n <= 0:
#         return 0
#     elif n == 1:
#         return 1
#     else:
#         a, b = 0, 1
#         for _ in range(2, n + 1):
#             a, b = b, a + b
#         return b
#
# fib_20 = fibonacci(20)
# print(f'{fib_20=}')
#
# # Outcome:
# fib_20=6765

Go
import (
    "context"
    "fmt"
    "io"

    genai "google.golang.org/genai"
)

// generateWithCodeExec shows how to generate text using the code execution tool.
func generateWithCodeExec(w io.Writer) error {
    ctx := context.Background()

    client, err := genai.NewClient(ctx, &genai.ClientConfig{
        HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    })
    if err != nil {
        return fmt.Errorf("failed to create genai client: %w", err)
    }

    prompt := "Calculate 20th fibonacci number. Then find the nearest palindrome to it."
    contents := []*genai.Content{
        {Parts: []*genai.Part{
            {Text: prompt},
        },
            Role: "user"},
    }
    config := &genai.GenerateContentConfig{
        Tools: []*genai.Tool{
            {CodeExecution: &genai.ToolCodeExecution{}},
        },
        Temperature: genai.Ptr(float32(0.0)),
    }
    modelName := "gemini-2.5-flash"

    resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
    if err != nil {
        return fmt.Errorf("failed to generate content: %w", err)
    }

    for _, p := range resp.Candidates[0].Content.Parts {
        if p.Text != "" {
            fmt.Fprintf(w, "Gemini: %s", p.Text)
        }
        if p.ExecutableCode != nil {
            fmt.Fprintf(w, "Language: %s\n%s\n", p.ExecutableCode.Language, p.ExecutableCode.Code)
        }
        if p.CodeExecutionResult != nil {
            fmt.Fprintf(w, "Outcome: %s\n%s\n", p.CodeExecutionResult.Outcome, p.CodeExecutionResult.Output)
        }
    }

    // Example response:
    // Gemini: Okay, I can do that. First, I'll calculate the 20th Fibonacci number. Then, I need ...
    //
    // Language: PYTHON
    //
    // def fibonacci(n):
    //     ...
    //
    // fib_20 = fibonacci(20)
    // print(f'{fib_20=}')
    //
    // Outcome: OUTCOME_OK
    // fib_20=6765
    //
    // Now that I have the 20th Fibonacci number (6765), I need to find the nearest palindrome. ...
    // ...
    return nil
}
Node.js
const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateAndExecuteCode(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const client = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const response = await client.models.generateContent({
    model: 'gemini-2.5-flash',
    contents:
      'Calculate 20th fibonacci number. Then find the nearest palindrome to it.',
    config: {
      tools: [{codeExecution: {}}],
      temperature: 0,
    },
  });

  console.debug(response.executableCode);
  // Example response:
  // Code:
  // function fibonacci(n) {
  //   if (n <= 0) {
  //     return 0;
  //   } else if (n === 1) {
  //     return 1;
  //   } else {
  //     let a = 0, b = 1;
  //     for (let i = 2; i <= n; i++) {
  //       [a, b] = [b, a + b];
  //     }
  //     return b;
  //   }
  // }
  //
  // const fib20 = fibonacci(20);
  // console.log(`fib20=${fib20}`);

  console.debug(response.codeExecutionResult);
  // Outcome:
  // fib20=6765

  return response.codeExecutionResult;
}
Java
import com.google.genai.Client;
import com.google.genai.types.GenerateContentConfig;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;
import com.google.genai.types.Tool;
import com.google.genai.types.ToolCodeExecution;

public class ToolsCodeExecWithText {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash";
    generateContent(modelId);
  }

  // Generates text using the Code Execution tool
  public static String generateContent(String modelId) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {

      // Create a GenerateContentConfig and set codeExecution tool
      GenerateContentConfig contentConfig =
          GenerateContentConfig.builder()
              .tools(Tool.builder().codeExecution(ToolCodeExecution.builder().build()).build())
              .temperature(0.0F)
              .build();

      GenerateContentResponse response =
          client.models.generateContent(
              modelId,
              "Calculate 20th fibonacci number. Then find the nearest palindrome to it.",
              contentConfig);

      System.out.println("Code: \n" + response.executableCode());
      System.out.println("Outcome: \n" + response.codeExecutionResult());
      // Example response
      // Code:
      // def fibonacci(n):
      //     if n <= 0:
      //         return 0
      //     elif n == 1:
      //         return 1
      //     else:
      //         a, b = 1, 1
      //         for _ in range(2, n):
      //             a, b = b, a + b
      //         return b
      //
      // fib_20 = fibonacci(20)
      // print(f'{fib_20=}')
      //
      // Outcome:
      // fib_20=6765
      return response.executableCode();
    }
  }
}

For more examples of code execution, check out the code execution documentation.
What's next
Now that you've made your first API request, explore the following guides, which show how to set up more advanced Vertex AI features for production code: