Gemini Developer API

The Gemini Developer API gives you access to Google's Gemini models, letting you build cutting-edge generative AI features into your Android apps—including conversational chat, image generation (with Nano Banana), and text generation based on text, image, audio, and video input.

To access the Gemini Pro and Flash models, you can use the Gemini Developer API with Firebase AI Logic. It lets you get started without requiring a credit card, and provides a generous free tier. Once you validate your integration with a small user base, you can scale by switching to the paid tier.

Illustration of an Android app that contains the Firebase Android SDK. An arrow points from the SDK to Firebase within a cloud environment; from Firebase, another arrow points to the Gemini Developer API, which is connected to Gemini Pro and Flash, also within the cloud.
Figure 1. Firebase AI Logic integration architecture to access the Gemini Developer API.

Getting started

Before you interact with the Gemini API directly from your app, you need to do a few things first: get familiar with prompting, and set up Firebase and your app to use the SDK.

Experiment with prompts

Experimenting with prompts can help you find the best phrasing, content, and format for your Android app. Google AI Studio is an Integrated Development Environment (IDE) that you can use to prototype and design prompts for your app's use cases.

Creating effective prompts for your use case takes extensive experimentation; iteration is a critical part of the process. You can learn more about prompting in the Firebase documentation.

Once you're happy with your prompt, click the <> button to get code snippets that you can add to your app.

Set up a Firebase project and connect your app to Firebase

Once you're ready to call the API from your app, follow the instructions in "Step 1" of the Firebase AI Logic getting started guide to set up Firebase and the SDK in your app.

Add the Gradle dependency

Add the following Gradle dependency to your app module:

Kotlin

dependencies {
    // ... other androidx dependencies

    // Import the BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:34.7.0"))

    // Add the dependency for the Firebase AI Logic library. When using the BoM,
    // you don't specify versions in Firebase library dependencies.
    implementation("com.google.firebase:firebase-ai")
}

Java

dependencies {
    // Import the BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:34.7.0"))

    // Add the dependency for the Firebase AI Logic library. When using the BoM,
    // you don't specify versions in Firebase library dependencies.
    implementation("com.google.firebase:firebase-ai")

    // Required for one-shot operations (to use `ListenableFuture` from Guava Android)
    implementation("com.google.guava:guava:31.0.1-android")

    // Required for streaming operations (to use `Publisher` from Reactive Streams)
    implementation("org.reactivestreams:reactive-streams:1.0.4")
}

Initialize the generative model

Start by instantiating a GenerativeModel and specifying the model name:

Kotlin

// Initialize the Gemini Developer API backend and create a GenerativeModel instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash")

Java

GenerativeModel firebaseAI = FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .generativeModel("gemini-2.5-flash");

GenerativeModelFutures model = GenerativeModelFutures.from(firebaseAI);

Learn more about the available models for use with the Gemini Developer API. You can also learn more about configuring model parameters.
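
For example, here's a minimal Kotlin sketch of passing a generationConfig when you instantiate the model; the parameter values are illustrative only, not recommendations:

// Illustrative configuration values; tune them for your own use case
val configuredModel = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash",
    generationConfig = generationConfig {
        temperature = 0.7f       // Higher values produce more varied output
        topK = 40                // Sample from the 40 most likely tokens at each step
        maxOutputTokens = 1024   // Cap the length of the response
    }
)

Lower temperature values make responses more deterministic, while higher values make them more varied.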

Interact with the Gemini Developer API from your app

Now that you've set up Firebase and your app to use the SDK, you're ready to interact with the Gemini Developer API from your app.

Generate text

To generate a text response, call generateContent() with your prompt.

Kotlin

scope.launch {
    val response = model.generateContent("Write a story about a magic backpack.")
}

Java

Content prompt = new Content.Builder()
        .addText("Write a story about a magic backpack.")
        .build();

ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Generate text from images and other media

You can also generate text from a prompt that includes text plus images or other media. When you call generateContent(), you can pass the media as inline data.

For example, to pass a bitmap, use the image content type:

Kotlin

scope.launch {
    val response = model.generateContent(
        content {
            image(bitmap)
            text("what is the object in the picture?")
        }
    )
}

Java

Content content = new Content.Builder()
        .addImage(bitmap)
        .addText("what is the object in the picture?")
        .build();

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

To pass an audio file, use the inlineData content type:

Kotlin

scope.launch {
    val contentResolver = applicationContext.contentResolver
    contentResolver.openInputStream(audioUri).use { stream ->
        stream?.let {
            val bytes = it.readBytes()
            val prompt = content {
                inlineData(bytes, "audio/mpeg")  // Specify the appropriate audio MIME type
                text("Transcribe this audio recording.")
            }
            val response = model.generateContent(prompt)
        }
    }
}

Java

ContentResolver resolver = applicationContext.getContentResolver();

try (InputStream stream = resolver.openInputStream(audioUri)) {
    File audioFile = new File(new URI(audioUri.toString()));
    int audioSize = (int) audioFile.length();
    byte[] audioBytes = new byte[audioSize];
    if (stream != null) {
        stream.read(audioBytes, 0, audioBytes.length);
        stream.close();

        // Provide a prompt that includes the audio specified earlier and text
        Content prompt = new Content.Builder()
                .addInlineData(audioBytes, "audio/mpeg")  // Specify the appropriate audio MIME type
                .addText("Transcribe what's said in this audio recording.")
                .build();

        // To generate text output, call `generateContent` with the prompt
        ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
        Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
            @Override
            public void onSuccess(GenerateContentResponse result) {
                String text = result.getText();
                Log.d(TAG, (text == null) ? "" : text);
            }

            @Override
            public void onFailure(Throwable t) {
                Log.e(TAG, "Failed to generate a response", t);
            }
        }, executor);
    } else {
        Log.e(TAG, "Error getting input stream for file.");
        // Handle the error appropriately
    }
} catch (IOException e) {
    Log.e(TAG, "Failed to read the audio file", e);
} catch (URISyntaxException e) {
    Log.e(TAG, "Invalid audio file", e);
}

And to provide a video file, continue using the inlineData content type:

Kotlin

scope.launch {
    val contentResolver = applicationContext.contentResolver
    contentResolver.openInputStream(videoUri).use { stream ->
        stream?.let {
            val bytes = it.readBytes()
            val prompt = content {
                inlineData(bytes, "video/mp4")  // Specify the appropriate video MIME type
                text("Describe the content of this video")
            }
            val response = model.generateContent(prompt)
        }
    }
}

Java

ContentResolver resolver = applicationContext.getContentResolver();

try (InputStream stream = resolver.openInputStream(videoUri)) {
    File videoFile = new File(new URI(videoUri.toString()));
    int videoSize = (int) videoFile.length();
    byte[] videoBytes = new byte[videoSize];
    if (stream != null) {
        stream.read(videoBytes, 0, videoBytes.length);
        stream.close();

        // Provide a prompt that includes the video specified earlier and text
        Content prompt = new Content.Builder()
                .addInlineData(videoBytes, "video/mp4")  // Specify the appropriate video MIME type
                .addText("Describe the content of this video")
                .build();

        // To generate text output, call `generateContent` with the prompt
        ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
        Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
            @Override
            public void onSuccess(GenerateContentResponse result) {
                String resultText = result.getText();
                System.out.println(resultText);
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        }, executor);
    }
} catch (IOException e) {
    e.printStackTrace();
} catch (URISyntaxException e) {
    e.printStackTrace();
}

Similarly, you can pass PDF (application/pdf) and plain text (text/plain) documents by providing the corresponding MIME type as a parameter, as shown below.
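
For example, here's a minimal Kotlin sketch of passing a PDF document, following the same pattern as the audio and video examples above; pdfUri is a hypothetical content Uri pointing at the document:

scope.launch {
    val contentResolver = applicationContext.contentResolver
    contentResolver.openInputStream(pdfUri).use { stream ->  // pdfUri is a hypothetical document Uri
        stream?.let {
            val bytes = it.readBytes()
            val prompt = content {
                inlineData(bytes, "application/pdf")  // Use "text/plain" for plain text documents
                text("Summarize this document.")
            }
            val response = model.generateContent(prompt)
        }
    }
}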

Multi-turn chat

You can also support multi-turn conversations. Initialize a chat with the startChat() function. You can optionally provide the model with a message history. Then call the sendMessage() function to send chat messages.

Kotlin

val chat = model.startChat(
    history = listOf(
        content(role = "user") { text("Hello, I have 2 dogs in my house.") },
        content(role = "model") { text("Great to meet you. What would you like to know?") }
    )
)

scope.launch {
    val response = chat.sendMessage("How many paws are in my house?")
}

Java

Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder messageBuilder = new Content.Builder();
messageBuilder.setRole("user");
messageBuilder.addText("How many paws are in my house?");
Content message = messageBuilder.build();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(message);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Generate images on Android with Nano Banana

The Gemini 2.5 Flash Image model (also known as Nano Banana) can generate and edit images, leveraging world knowledge and reasoning. It generates contextually relevant images and can seamlessly blend or interleave text and image outputs. It can also generate accurate visuals from prompts with long text sequences, and it supports conversational image editing while maintaining context.

As an alternative to Gemini, you can use Imagen models, especially for high-quality image generation that requires photorealism, artistic detail, or specific styles. However, for the majority of client-side use cases for Android apps, Gemini will be more than sufficient.

This guide describes how to use the Gemini 2.5 Flash Image model (Nano Banana) using the Firebase AI Logic SDK for Android. For more details on generating images with Gemini, see the Generate images with Gemini on Firebase documentation. If you're interested in using Imagen models, check out the documentation.

Google AI Studio interface showing a text input field with the prompt 'A hyper realistic picture of a t-rex with a blue bag pack roaming a pre-historic forest.' and a generated image of a t-rex in a forest with a blue backpack.
Figure 2. Use Google AI Studio to refine your Nano Banana image generation prompts for Android.

Initialize the generative model

Instantiate a GenerativeModel and specify the model name gemini-2.5-flash-image-preview. Make sure you configure responseModalities to include both TEXT and IMAGE.

Kotlin

val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image-preview",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(
            ResponseModality.TEXT,
            ResponseModality.IMAGE
        )
    }
)

Java

GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
        "gemini-2.5-flash-image-preview",
        // Configure the model to respond with text and images (required)
        new GenerationConfig.Builder()
                .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
                .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

Generate images (text-only input)

You can instruct a Gemini model to generate images by providing a text-only prompt:

Kotlin

scope.launch {
    // Provide a text prompt instructing the model to generate an image
    val prompt =
        "A hyper realistic picture of a t-rex with a blue bag pack roaming a pre-historic forest."

    // To generate image output, call `generateContent` with the text input
    val generatedImageAsBitmap: Bitmap? = model.generateContent(prompt)
        .candidates.first().content.parts.filterIsInstance<ImagePart>()
        .firstOrNull()?.image
}

Java

// Provide a text prompt instructing the model to generate an image
Content prompt = new Content.Builder()
        .addText("Generate an image of the Eiffel Tower with fireworks in the background.")
        .build();

// To generate an image, call `generateContent` with the text input
ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        // Iterate over all the parts in the first candidate in the result object
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                // The returned image as a bitmap
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Edit images (text and image input)

You can ask a Gemini model to edit existing images by providing both text and one or more images in your prompt:

Kotlin

scope.launch {
    // Provide a text prompt instructing the model to edit the image
    val prompt = content {
        image(bitmap)
        text("Edit this image to make it look like a cartoon")
    }

    // To edit the image, call `generateContent` with the prompt (image and text input)
    val generatedImageAsBitmap: Bitmap? = model.generateContent(prompt)
        .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

    // Handle the generated text and image
}

Java

// Provide an image for the model to edit
Bitmap bitmap = BitmapFactory.decodeResource(resources, R.drawable.scones);

// Provide a text prompt instructing the model to edit the image
Content promptContent = new Content.Builder()
        .addImage(bitmap)
        .addText("Edit this image to make it look like a cartoon")
        .build();

// To edit the image, call `generateContent` with the prompt (image and text input)
ListenableFuture<GenerateContentResponse> response = model.generateContent(promptContent);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        // Iterate over all the parts in the first candidate in the result object
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Iterate and edit images through multi-turn chat

For a conversational approach to image editing, you can use multi-turn chat. This allows for follow-up requests to refine edits without needing to re-send the original image.

First, initialize a chat with startChat(), optionally providing a message history. Then, use sendMessage() for subsequent messages:

Kotlin

scope.launch {
    // Create the initial prompt instructing the model to edit the image
    val prompt = content {
        image(bitmap)
        text("Edit this image to make it look like a cartoon")
    }

    // Initialize the chat
    val chat = model.startChat()

    // To generate an initial response, send a user message with the image and text prompt
    var response = chat.sendMessage(prompt)
    // Inspect the returned image
    var generatedImageAsBitmap: Bitmap? = response
        .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

    // Follow-up requests don't need to specify the image again
    response = chat.sendMessage("But make it old-school line drawing style")
    generatedImageAsBitmap = response
        .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image
}

Java

// Provide an image for the model to edit
Bitmap bitmap = BitmapFactory.decodeResource(resources, R.drawable.scones);

// Initialize the chat
ChatFutures chat = model.startChat();

// Create the initial prompt instructing the model to edit the image
Content prompt = new Content.Builder()
        .setRole("user")
        .addImage(bitmap)
        .addText("Edit this image to make it look like a cartoon")
        .build();

// To generate an initial response, send a user message with the image and text prompt
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(prompt);

// Extract the image from the initial response
ListenableFuture<Bitmap> initialRequest = Futures.transform(response,
        result -> {
            for (Part part : result.getCandidates().get(0).getContent().getParts()) {
                if (part instanceof ImagePart) {
                    ImagePart imagePart = (ImagePart) part;
                    return imagePart.getImage();
                }
            }
            return null;
        }, executor);

// Follow-up requests don't need to specify the image again
ListenableFuture<GenerateContentResponse> modelResponseFuture = Futures.transformAsync(
        initialRequest,
        generatedImage -> {
            Content followUpPrompt = new Content.Builder()
                    .addText("But make it old-school line drawing style")
                    .build();
            return chat.sendMessage(followUpPrompt);
        }, executor);

// Add a final callback to check the reworked image
Futures.addCallback(modelResponseFuture, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Considerations and limitations

Note the following considerations and limitations:

  • Output Format: Images are generated as PNGs with a maximum dimension of 1024 px (for saving a returned bitmap as a PNG file, see the sketch after this list).
  • Input Types: The model doesn't support audio or video inputs for image generation.
  • Language Support: For best performance, use the following languages: English (en), Mexican Spanish (es-mx), Japanese (ja-jp), Simplified Chinese (zh-cn), and Hindi (hi-in).
  • Generation Issues:
    • Image generation may not always trigger, sometimes resulting in text-only output. Try asking for image outputs explicitly (for example, "generate an image", "provide images as you go along", "update the image").
    • The model may stop generating partway through. Try again or try a different prompt.
    • The model may generate text as an image. Try asking for text outputs explicitly (for example, "generate narrative text along with illustrations").
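
Because the SDK returns generated images as Bitmap objects, here is a minimal Kotlin sketch of saving one as a PNG file using the standard Bitmap.compress() API; the file name and location are illustrative only:

// Minimal sketch: persist a generated bitmap as a PNG in app-private storage.
// The file name and location are illustrative only.
fun saveGeneratedImage(context: Context, bitmap: Bitmap): File {
    val file = File(context.filesDir, "generated_image.png")
    FileOutputStream(file).use { out ->
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out)
    }
    return file
}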

See the Firebase documentation for more details.

Next steps

After setting up your app, consider the following next steps:
