Build multi-turn conversations (chat) using the Gemini API
Using the Gemini API, you can build freeform conversations across
multiple turns. The Firebase AI Logic SDK simplifies the process by managing
the state of the conversation, so unlike with generateContent()
(or generateContentStream()), you don't have to store the conversation history
yourself.
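To make that concrete, here is a hypothetical, SDK-free sketch in plain JavaScript (the `Chat` class and `fakeModel` below are illustrations, not the Firebase AI Logic implementation) of the bookkeeping a chat object handles for you: it owns the history and appends each user message and model reply, which is exactly what you would otherwise maintain by hand around generateContent():

```javascript
// Hypothetical sketch of chat-state management (not the SDK's actual code).
class Chat {
  constructor(generate, history = []) {
    this.generate = generate; // stand-in for a call to the model
    this.history = history;   // [{ role, text }, ...]
  }
  sendMessage(text) {
    // Append the user turn, ask the "model", then append its reply.
    this.history.push({ role: "user", text });
    const reply = this.generate(this.history);
    this.history.push({ role: "model", text: reply });
    return reply;
  }
}

// Stand-in "model" that just numbers its replies by user turns seen.
const fakeModel = (history) =>
  `reply #${history.filter((m) => m.role === "user").length}`;

const chat = new Chat(fakeModel);
chat.sendMessage("Hello, I have 2 dogs in my house.");
chat.sendMessage("How many paws are in my house?");
console.log(chat.history.length); // 4: two user turns, two model turns
```

The real SDK does this (and more) internally, which is why the per-platform examples on this page never touch the history again after calling startChat().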
Before you begin
Click your Gemini API provider to view provider-specific content and code on this page.
If you haven't already, complete the
getting started guide, which describes how to
set up your Firebase project, connect your app to Firebase, add the SDK,
initialize the backend service for your chosen Gemini API provider, and
create a GenerativeModel instance.
For testing and iterating on your prompts, we recommend using Google AI Studio.
Build a text-only chat experience
To build a multi-turn conversation (like chat), start by initializing the
chat with startChat(). Then use
sendMessage() to send a new user message; this call
also appends the message and the response to the chat history.
There are two possible options for the role associated with the content in a
conversation:
- user: the role that provides the prompts. This value is the default for calls to sendMessage(), and the function throws an exception if a different role is passed.
- model: the role that provides the responses. This role can be used when calling startChat() with existing history.
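As a rough illustration of those role rules (a hypothetical check, not SDK behavior; the exact constraints the backend enforces may differ), a well-formed history alternates user and model content, starting with user:

```javascript
// Hypothetical validity check for a chat history (illustration only).
function isValidHistory(history) {
  // Even indexes should be "user" turns, odd indexes "model" turns.
  return history.every(
    (msg, i) => msg.role === (i % 2 === 0 ? "user" : "model")
  );
}

const history = [
  { role: "user", text: "Hello, I have 2 dogs in my house." },
  { role: "model", text: "Great to meet you. What would you like to know?" },
];
console.log(isValidHistory(history)); // true
console.log(isValidHistory([{ role: "model", text: "Hi" }])); // false
```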
Swift
You can call
startChat()
and
sendMessage()
to send a new user message:
import FirebaseAILogic

// Initialize the Gemini Developer API backend service
let ai = FirebaseAI.firebaseAI(backend: .googleAI())

// Create a `GenerativeModel` instance with a model that supports your use case
let model = ai.generativeModel(modelName: "gemini-2.5-flash")

// Optionally specify existing chat history
let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat with optional chat history
let chat = model.startChat(history: history)

// To generate text output, call sendMessage and pass in the message
let response = try await chat.sendMessage("How many paws are in my house?")
print(response.text ?? "No text in response.")
Kotlin
You can call startChat()
and
sendMessage()
to send a new user message:
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash")

// Initialize the chat
val chat = model.startChat(
  history = listOf(
    content(role = "user") { text("Hello, I have 2 dogs in my house.") },
    content(role = "model") { text("Great to meet you. What would you like to know?") }
  )
)

val response = chat.sendMessage("How many paws are in my house?")
print(response.text)
Java
You can call startChat() and sendMessage() to send a new user message. For Java, these methods return a ListenableFuture.
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash");

// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder messageBuilder = new Content.Builder();
messageBuilder.setRole("user");
messageBuilder.addText("How many paws are in my house?");
Content message = messageBuilder.build();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(message);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
  @Override
  public void onSuccess(GenerateContentResponse result) {
    String resultText = result.getText();
    System.out.println(resultText);
  }

  @Override
  public void onFailure(Throwable t) {
    t.printStackTrace();
  }
}, executor);
Web
You can call
startChat()
and
sendMessage()
to send a new user message:
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, { model: "gemini-2.5-flash" });

async function run() {
  const chat = model.startChat({
    history: [
      {
        role: "user",
        parts: [{ text: "Hello, I have 2 dogs in my house." }],
      },
      {
        role: "model",
        parts: [{ text: "Great to meet you. What would you like to know?" }],
      },
    ],
    generationConfig: {
      maxOutputTokens: 100,
    },
  });

  const msg = "How many paws are in my house?";
  const result = await chat.sendMessage(msg);

  const text = result.response.text();
  console.log(text);
}

run();
Dart
You can call
startChat()
and
sendMessage()
to send a new user message:
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
final model =
    FirebaseAI.googleAI().generativeModel(model: 'gemini-2.5-flash');

final chat = model.startChat();
// Provide a prompt that contains text
final prompt = [Content.text('Write a story about a magic backpack.')];
final response = await chat.sendMessage(prompt);
print(response.text);
Unity
You can call
StartChat()
and
SendMessageAsync()
to send a new user message:
using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());

// Create a `GenerativeModel` instance with a model that supports your use case
var model = ai.GetGenerativeModel(modelName: "gemini-2.5-flash");

// Optionally specify existing chat history
var history = new[] {
  ModelContent.Text("Hello, I have 2 dogs in my house."),
  new ModelContent("model", new ModelContent.TextPart("Great to meet you. What would you like to know?")),
};

// Initialize the chat with optional chat history
var chat = model.StartChat(history);

// To generate text output, call SendMessageAsync and pass in the message
var response = await chat.SendMessageAsync("How many paws are in my house?");
UnityEngine.Debug.Log(response.Text ?? "No text in response.");
Learn how to choose a model appropriate for your use case and app.
Iterate and edit images using multi-turn chat
Using multi-turn chat, you can iterate with a Gemini model on the images that it generates or that you supply.
Make sure to create a GenerativeModel instance, include
responseModalities: ["TEXT", "IMAGE"] in your model
configuration, and call startChat() and sendMessage() to send new user
messages.
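The reason follow-up edit requests can omit the image is that the chat history already carries it; every prior turn is replayed as context. Here is a hypothetical, SDK-free sketch of that idea (the `ImageChat` class and its stubbed reply are inventions for illustration, not Firebase APIs):

```javascript
// Hypothetical sketch: history is why follow-ups can omit the image.
class ImageChat {
  constructor() { this.history = []; }
  send(parts) {
    this.history.push({ role: "user", parts });
    // A real backend would return an edited image; we record a stub.
    const reply = { role: "model", parts: [{ inlineData: "<edited image>" }] };
    this.history.push(reply);
    return reply;
  }
  // Everything the model sees on the next turn, including the first image.
  contextParts() { return this.history.flatMap((m) => m.parts); }
}

const chat = new ImageChat();
chat.send([{ inlineData: "<scones.jpg bytes>" },
           { text: "Edit this image to make it look like a cartoon" }]);
chat.send([{ text: "But make it old-school line drawing style" }]);

// The follow-up turn still carries the original image via the history:
console.log(chat.contextParts().some((p) => p.inlineData === "<scones.jpg bytes>")); // true
```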
Swift
import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let generativeModel = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Initialize the chat
let chat = generativeModel.startChat()

guard let image = UIImage(named: "scones") else { fatalError("Image file not found.") }

// Provide an initial text prompt instructing the model to edit the image
let prompt = "Edit this image to make it look like a cartoon"

// To generate an initial response, send a user message with the image and text prompt
let response = try await chat.sendMessage(image, prompt)

// Inspect the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

// Follow-up requests do not need to specify the image again
let followUpResponse = try await chat.sendMessage("But make it old-school line drawing style")

// Inspect the edited image after the follow-up request
guard let followUpInlineDataPart = followUpResponse.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let followUpUIImage = UIImage(data: followUpInlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}
Kotlin
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
  modelName = "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig = generationConfig {
    responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
  }
)

// Provide an image for the model to edit
val bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.scones)

// Create the initial prompt instructing the model to edit the image
val prompt = content {
  image(bitmap)
  text("Edit this image to make it look like a cartoon")
}

// Initialize the chat
val chat = model.startChat()

// To generate an initial response, send a user message with the image and text prompt
var response = chat.sendMessage(prompt)

// Inspect the returned image
var generatedImageAsBitmap = response
  .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

// Follow-up requests do not need to specify the image again
response = chat.sendMessage("But make it old-school line drawing style")
generatedImageAsBitmap = response
  .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image
Java
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);
GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide an image for the model to edit
Bitmap bitmap = BitmapFactory.decodeResource(resources, R.drawable.scones);

// Initialize the chat
ChatFutures chat = model.startChat();

// Create the initial prompt instructing the model to edit the image
Content prompt = new Content.Builder()
    .setRole("user")
    .addImage(bitmap)
    .addText("Edit this image to make it look like a cartoon")
    .build();

// To generate an initial response, send a user message with the image and text prompt
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(prompt);

// Extract the image from the initial response
ListenableFuture<@Nullable Bitmap> initialRequest = Futures.transform(response, result -> {
  for (Part part : result.getCandidates().get(0).getContent().getParts()) {
    if (part instanceof ImagePart) {
      ImagePart imagePart = (ImagePart) part;
      return imagePart.getImage();
    }
  }
  return null;
}, executor);

// Follow-up requests do not need to specify the image again
ListenableFuture<GenerateContentResponse> modelResponseFuture = Futures.transformAsync(
    initialRequest,
    generatedImage -> {
      Content followUpPrompt = new Content.Builder()
          .addText("But make it old-school line drawing style")
          .build();
      return chat.sendMessage(followUpPrompt);
    },
    executor);

// Add a final callback to check the reworked image
Futures.addCallback(modelResponseFuture, new FutureCallback<GenerateContentResponse>() {
  @Override
  public void onSuccess(GenerateContentResponse result) {
    for (Part part : result.getCandidates().get(0).getContent().getParts()) {
      if (part instanceof ImagePart) {
        ImagePart imagePart = (ImagePart) part;
        Bitmap generatedImageAsBitmap = imagePart.getImage();
        break;
      }
    }
  }

  @Override
  public void onFailure(Throwable t) {
    t.printStackTrace();
  }
}, executor);
Web
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Prepare an image for the model to edit
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

const fileInputEl = document.querySelector("input[type=file]");
const imagePart = await fileToGenerativePart(fileInputEl.files[0]);

// Provide an initial text prompt instructing the model to edit the image
const prompt = "Edit this image to make it look like a cartoon";

// Initialize the chat
const chat = model.startChat();

// To generate an initial response, send a user message with the image and text prompt
const result = await chat.sendMessage([prompt, imagePart]);

// Request and inspect the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    // Inspect the generated image
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

// Follow-up requests do not need to specify the image again
const followUpResult = await chat.sendMessage("But make it old-school line drawing style");

// Request and inspect the returned image
try {
  const followUpInlineDataParts = followUpResult.response.inlineDataParts();
  if (followUpInlineDataParts?.[0]) {
    // Inspect the generated image
    const followUpImage = followUpInlineDataParts[0].inlineData;
    console.log(followUpImage.mimeType, followUpImage.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}
Dart
import 'dart:io';

import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.5-flash-image',
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(
      responseModalities: [ResponseModalities.text, ResponseModalities.image]),
);

// Prepare an image for the model to edit
final image = await File('scones.jpg').readAsBytes();
final imagePart = InlineDataPart('image/jpeg', image);

// Provide an initial text prompt instructing the model to edit the image
final prompt = TextPart("Edit this image to make it look like a cartoon");

// Initialize the chat
final chat = model.startChat();

// To generate an initial response, send a user message with the image and text prompt
final response = await chat.sendMessage([
  Content.multi([prompt, imagePart])
]);

// Inspect the returned image
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

// Follow-up requests do not need to specify the image again
final followUpResponse = await chat.sendMessage([
  Content.text("But make it old-school line drawing style")
]);

// Inspect the returned image
if (followUpResponse.inlineDataParts.isNotEmpty) {
  final followUpImageBytes = followUpResponse.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}
Unity
using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Prepare an image for the model to edit
var imageFile = System.IO.File.ReadAllBytes(System.IO.Path.Combine(
  UnityEngine.Application.streamingAssetsPath, "scones.jpg"));
var image = ModelContent.InlineData("image/jpeg", imageFile);

// Provide an initial text prompt instructing the model to edit the image
var prompt = ModelContent.Text("Edit this image to make it look like a cartoon.");

// Initialize the chat
var chat = model.StartChat();

// To generate an initial response, send a user message with the image and text prompt
var response = await chat.SendMessageAsync(new[] { prompt, image });

// Inspect the returned image
var imageParts = response.Candidates.First().Content.Parts
  .OfType<ModelContent.InlineDataPart>()
  .Where(part => part.MimeType == "image/png");

// Load the image into a Unity Texture2D object
UnityEngine.Texture2D texture2D = new(2, 2);
if (texture2D.LoadImage(imageParts.First().Data.ToArray())) {
  // Do something with the image
}

// Follow-up requests do not need to specify the image again
var followUpResponse = await chat.SendMessageAsync("But make it old-school line drawing style");

// Inspect the returned image
var followUpImageParts = followUpResponse.Candidates.First().Content.Parts
  .OfType<ModelContent.InlineDataPart>()
  .Where(part => part.MimeType == "image/png");

// Load the image into a Unity Texture2D object
UnityEngine.Texture2D followUpTexture2D = new(2, 2);
if (followUpTexture2D.LoadImage(followUpImageParts.First().Data.ToArray())) {
  // Do something with the image
}
Stream the response
You can achieve faster interactions by handling partial results as they arrive
instead of waiting for the complete output from the model.
To stream the response, call sendMessageStream().
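The per-platform loops below all follow the same pattern: iterate an async stream, render each partial chunk, and accumulate the full reply. Here is a stand-in sketch in plain JavaScript (fakeStream() is an invented async generator, not an SDK API):

```javascript
// Hypothetical stand-in for a streamed model reply (not an SDK API).
async function* fakeStream() {
  for (const piece of ["There ", "are ", "8 ", "paws."]) {
    yield { text: piece };
  }
}

// Accumulate chunks while handling each partial result as it arrives.
async function collect(stream) {
  let fullText = "";
  for await (const chunk of stream) {
    // Render chunk.text incrementally here, then keep the running total.
    fullText += chunk.text;
  }
  return fullText;
}

collect(fakeStream()).then((fullText) => console.log(fullText)); // "There are 8 paws."
```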
Swift
You can call
startChat()
and
sendMessageStream()
to stream responses from the model:
import FirebaseAILogic

// Initialize the Gemini Developer API backend service
let ai = FirebaseAI.firebaseAI(backend: .googleAI())

// Create a `GenerativeModel` instance with a model that supports your use case
let model = ai.generativeModel(modelName: "gemini-2.5-flash")

// Optionally specify existing chat history
let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat with optional chat history
let chat = model.startChat(history: history)

// To stream generated text output, call sendMessageStream and pass in the message
let contentStream = try chat.sendMessageStream("How many paws are in my house?")
for try await chunk in contentStream {
  if let text = chunk.text {
    print(text)
  }
}
Kotlin
You can call
startChat()
and
sendMessageStream()
to stream responses from the model:
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash")

// Initialize the chat
val chat = model.startChat(
  history = listOf(
    content(role = "user") { text("Hello, I have 2 dogs in my house.") },
    content(role = "model") { text("Great to meet you. What would you like to know?") }
  )
)

chat.sendMessageStream("How many paws are in my house?").collect { chunk ->
  print(chunk.text)
}
Java
You can call startChat() and sendMessageStream() to stream responses from the model. For Java, sendMessageStream() returns a Publisher type from the Reactive Streams library.
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash");

// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder messageBuilder = new Content.Builder();
messageBuilder.setRole("user");
messageBuilder.addText("How many paws are in my house?");
Content message = messageBuilder.build();

// Send the message
Publisher<GenerateContentResponse> streamingResponse =
    chat.sendMessageStream(message);

final String[] fullResponse = {""};

streamingResponse.subscribe(new Subscriber<GenerateContentResponse>() {
  @Override
  public void onNext(GenerateContentResponse generateContentResponse) {
    String chunk = generateContentResponse.getText();
    fullResponse[0] += chunk;
  }

  @Override
  public void onComplete() {
    System.out.println(fullResponse[0]);
  }

  // ... other methods omitted for brevity
});
Web
You can call
startChat()
and
sendMessageStream()
to stream responses from the model:
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, { model: "gemini-2.5-flash" });

async function run() {
  const chat = model.startChat({
    history: [
      {
        role: "user",
        parts: [{ text: "Hello, I have 2 dogs in my house." }],
      },
      {
        role: "model",
        parts: [{ text: "Great to meet you. What would you like to know?" }],
      },
    ],
    generationConfig: {
      maxOutputTokens: 100,
    },
  });

  const msg = "How many paws are in my house?";
  const result = await chat.sendMessageStream(msg);

  let text = '';
  for await (const chunk of result.stream) {
    const chunkText = chunk.text();
    console.log(chunkText);
    text += chunkText;
  }
}

run();
Dart
You can call
startChat()
and
sendMessageStream()
to stream responses from the model:
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
final model =
    FirebaseAI.googleAI().generativeModel(model: 'gemini-2.5-flash');

final chat = model.startChat();
// Provide a prompt that contains text
final prompt = [Content.text('Write a story about a magic backpack.')];
final response = await chat.sendMessageStream(prompt);
await for (final chunk in response) {
  print(chunk.text);
}
Unity
You can call
StartChat()
and
SendMessageStreamAsync()
to stream responses from the model:
using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());

// Create a `GenerativeModel` instance with a model that supports your use case
var model = ai.GetGenerativeModel(modelName: "gemini-2.5-flash");

// Optionally specify existing chat history
var history = new[] {
  ModelContent.Text("Hello, I have 2 dogs in my house."),
  new ModelContent("model", new ModelContent.TextPart("Great to meet you. What would you like to know?")),
};

// Initialize the chat with optional chat history
var chat = model.StartChat(history);

// To stream generated text output, call SendMessageStreamAsync and pass in the message
var responseStream = chat.SendMessageStreamAsync("How many paws are in my house?");
await foreach (var response in responseStream) {
  if (!string.IsNullOrWhiteSpace(response.Text)) {
    UnityEngine.Debug.Log(response.Text);
  }
}
What else can you do?
- Learn how to count tokens before sending long prompts to the model.
- Set up Cloud Storage for Firebase so that you can include large files in your multimodal requests and have a more managed solution for providing files in prompts. Files can include images, PDFs, video, and audio.
- Start thinking about preparing for production (see the production checklist), including:
- Setting up Firebase App Check to protect the Gemini API from abuse by unauthorized clients.
- Integrating Firebase Remote Config to update values in your app (like model name) without releasing a new app version.
Try out other capabilities
- Generate text from text-only prompts.
- Generate text by prompting with various file types, like images, PDFs, video, and audio.
- Generate structured output (like JSON) from both text and multimodal prompts.
- Generate images from text prompts (Gemini or Imagen).
- Use tools (like function calling and grounding with Google Search) to connect a Gemini model to other parts of your app and external systems and information.
Learn how to control content generation
- Understand prompt design, including best practices, strategies, and example prompts.
- Configure model parameters like temperature and maximum output tokens (for Gemini) or aspect ratio and person generation (for Imagen).
- Use safety settings to adjust the likelihood of getting responses that may be considered harmful.
Learn more about the supported models
Learn about the models available for various use cases and their quotas and pricing.