Text generation

This page shows you how to send chat prompts to a Gemini model by using the Google Cloud console, REST API, and supported SDKs.

To learn how to add images and other media to your request, see Image understanding.

For a list of languages supported by Gemini, see Language support.


To explore the generative AI models and APIs that are available on Vertex AI, go to Model Garden in the Google Cloud console.



If you're looking for a way to use Gemini directly from your mobile and web apps, see the Firebase AI Logic client SDKs for Swift, Android, Web, Flutter, and Unity apps.

Generate text

For testing and iterating on chat prompts, we recommend using the Google Cloud console. To send prompts programmatically to the model, you can use the REST API, Google Gen AI SDK, Vertex AI SDK for Python, or one of the other supported libraries and SDKs.

You can use system instructions to steer the behavior of the model based on a specific need or use case. For example, you can define a persona or role for a chatbot that responds to customer service requests. For more information, see the system instructions code samples.
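As a minimal sketch of this (assuming the Google Gen AI SDK for Python; the prompt and persona text are illustrative, not from the official samples), a system instruction is passed through the request config:

from google import genai
from google.genai.types import GenerateContentConfig, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    # Hypothetical customer service prompt, for illustration only.
    contents="My package never arrived. What can I do?",
    config=GenerateContentConfig(
        # The system instruction defines the persona and constraints the
        # model applies to every response in this request.
        system_instruction=(
            "You are a polite customer service agent for an online retailer. "
            "Keep answers short and always offer a next step."
        ),
    ),
)
print(response.text)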

You can use the Google Gen AI SDK to send requests if you're using Gemini 2.0 Flash or later models.

Here is a simple text generation example.

Python

Install

pip install --upgrade google-genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

from google import genai
from google.genai.types import HttpOptions
client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
 model="gemini-2.5-flash",
 contents="How does AI work?",
)
print(response.text)
# Example response:
# Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
#
# Here's a simplified overview:
# ...

Go

Learn how to install or update the Go SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

import (
	"context"
	"fmt"
	"io"
	"google.golang.org/genai"
)
// generateWithText shows how to generate text using a text prompt.
func generateWithText(w io.Writer) error {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}
	resp, err := client.Models.GenerateContent(ctx,
		"gemini-2.5-flash",
		genai.Text("How does AI work?"),
		nil,
	)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}
	respText := resp.Text()
	fmt.Fprintln(w, respText)
	// Example response:
	// That's a great question! Understanding how AI works can feel like ...
	// ...
	// **1. The Foundation: Data and Algorithms**
	// ...
	return nil
}

Node.js

Install

npm install @google/genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const client = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const response = await client.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: 'How does AI work?',
  });

  console.log(response.text);

  return response.text;
}

Java

Learn how to install or update the Java SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True


import com.google.genai.Client;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;

public class TextGenerationWithText {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash";
    generateContent(modelId);
  }

  // Generates text with text input
  public static String generateContent(String modelId) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {
      GenerateContentResponse response =
          client.models.generateContent(modelId, "How does AI work?", null);

      System.out.print(response.text());
      // Example response:
      // Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
      //
      // Here's a simplified overview:
      // ...
      return response.text();
    }
  }
}

Streaming and non-streaming responses

You can choose whether the model generates streaming or non-streaming responses. With streaming, you receive each part of the response as soon as its output tokens are generated. With a non-streaming response, you receive the complete response only after all of the output tokens are generated.

Here is a streaming text generation example.

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
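For local development, one common way to set up Application Default Credentials is with the gcloud CLI (this assumes you have the Google Cloud CLI installed; your environment may use a different credential mechanism):

gcloud auth application-default login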

from google import genai
from google.genai.types import HttpOptions
client = genai.Client(http_options=HttpOptions(api_version="v1"))
chat_session = client.chats.create(model="gemini-2.5-flash")
for chunk in chat_session.send_message_stream("Why is the sky blue?"):
 print(chunk.text, end="")
# Example response:
# The
# sky appears blue due to a phenomenon called **Rayleigh scattering**. Here's
# a breakdown of why:
# ...
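The example above streams within a chat session. For a single-turn request, the same pattern works directly on the model. Here is a minimal sketch using the Gen AI SDK's generate_content_stream method:

from google import genai
from google.genai.types import HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
# Each chunk arrives as soon as its output tokens are generated.
for chunk in client.models.generate_content_stream(
    model="gemini-2.5-flash",
    contents="Why is the sky blue?",
):
    print(chunk.text, end="")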

Gemini multiturn chat behavior

When you use multiturn chat, Vertex AI locally stores the initial content and prompts that you sent to the model and sends all of this data with each subsequent request. Consequently, the input cost for each message that you send is a running total of all the data that was already sent to the model. If your initial content is sufficiently large, consider using context caching when you create the initial model object to better control input costs.
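As a sketch of this behavior (using the Gen AI SDK's chat interface; the prompts are illustrative), each send_message call implicitly resends the accumulated history, so later turns carry more input tokens:

from google import genai
from google.genai.types import HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
chat = client.chats.create(model="gemini-2.5-flash")

# Turn 1: only this prompt counts toward input tokens.
first = chat.send_message("Name three planets in our solar system.")
print(first.text)

# Turn 2: the first prompt and its response are resent along with this
# message, so the input token count (and cost) grows with each turn.
second = chat.send_message("Which of those is closest to the sun?")
print(second.text)
print(second.usage_metadata.prompt_token_count)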
