Insert objects into images using Imagen
This page describes how to use inpainting with Imagen to insert objects into an image using the Firebase AI Logic SDKs.
Inpainting is a type of mask-based editing. A mask is a digital overlay defining the specific area you want to edit.
How it works: You provide an original image and a corresponding masked image — either auto-generated or provided by you — that defines a mask over an area where you want to add new content. You also provide a text prompt describing what you want to add. The model then generates and adds new content within the masked area.
For example, you can mask a table and prompt the model to add a vase of flowers.
Before you begin
If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the backend service for your chosen API provider, and create an ImagenModel instance.
Models that support this capability
Imagen offers image editing through its capability model: imagen-3.0-capability-001
Note that for Imagen models, the global location is not supported.
Insert objects using an auto-generated mask
The following sample shows how to use inpainting with an automatically generated mask to insert content into an image. You provide the original image and a text prompt, and Imagen automatically detects and masks the area of the original image to modify.
Swift
Image editing with Imagen models isn't supported for Swift. Check back later this year!
Kotlin
To insert objects with an auto-generated mask, specify ImagenBackgroundMask. Use editImage() and set the editing config to use ImagenEditMode.INPAINT_INSERTION.
// Using this SDK to access Imagen models is a Preview release and requires opt-in
@OptIn(PublicPreviewAPI::class)
suspend fun customizeImage() {
  // Initialize the Vertex AI Gemini API backend service
  // Optionally specify the location to access the model (for example, `us-central1`)
  val ai = Firebase.ai(backend = GenerativeBackend.vertexAI(location = "us-central1"))

  // Create an `ImagenModel` instance with an Imagen "capability" model
  val model = ai.imagenModel("imagen-3.0-capability-001")

  // This example assumes 'originalImage' is a pre-loaded Bitmap.
  // In a real app, this might come from the user's device or a URL.
  val originalImage: Bitmap = TODO("Load your original image Bitmap here")

  // Provide the prompt describing the content to be inserted.
  val prompt = "a vase of flowers on the table"

  // Use the editImage API to insert the new content.
  // Pass the original image, the prompt, and an editing configuration.
  val editedImage = model.editImage(
    sources = listOf(
      ImagenRawImage(originalImage),
      ImagenBackgroundMask(), // Use ImagenBackgroundMask() to auto-generate the mask.
    ),
    prompt = prompt,
    // Define the editing configuration for inpainting and insertion.
    config = ImagenEditingConfig(ImagenEditMode.INPAINT_INSERTION)
  )

  // Process the resulting 'editedImage' Bitmap, for example, by displaying it in an ImageView.
}
Java
To insert objects with an auto-generated mask, specify ImagenBackgroundMask. Use editImage() and set the editing config to use ImagenEditMode.INPAINT_INSERTION.
// Initialize the Vertex AI Gemini API backend service
// Optionally specify the location to access the model (for example, `us-central1`)
// Create an `ImagenModel` instance with an Imagen "capability" model
ImagenModel imagenModel = FirebaseAI.getInstance(GenerativeBackend.vertexAI("us-central1"))
        .imagenModel(
                /* modelName */ "imagen-3.0-capability-001");

ImagenModelFutures model = ImagenModelFutures.from(imagenModel);

// This example assumes 'originalImage' is a pre-loaded Bitmap.
// In a real app, this might come from the user's device or a URL.
Bitmap originalImage = null; // TODO("Load your original image Bitmap here");

// Provide the prompt describing the content to be inserted.
String prompt = "a vase of flowers on the table";

// Define the list of sources for the editImage call.
// This includes the original image and the auto-generated mask.
ImagenRawImage rawOriginalImage = new ImagenRawImage(originalImage);
ImagenBackgroundMask rawMaskedImage = new ImagenBackgroundMask(); // Use ImagenBackgroundMask() to auto-generate the mask.

// Define the editing configuration for inpainting and insertion.
ImagenEditingConfig config = new ImagenEditingConfig.Builder()
        .setEditMode(ImagenEditMode.INPAINT_INSERTION)
        .build();

// Use the editImage API to insert the new content.
// Pass the original image, the auto-generated masked image, the prompt, and an editing configuration.
Futures.addCallback(model.editImage(Arrays.asList(rawOriginalImage, rawMaskedImage), prompt, config), new FutureCallback<ImagenGenerationResponse>() {
    @Override
    public void onSuccess(ImagenGenerationResponse result) {
        if (result.getImages().isEmpty()) {
            Log.d("ImageEditor", "No images generated");
            return;
        }
        Bitmap editedImage = result.getImages().get(0).asBitmap();
        // Process and use the bitmap to display the image in your UI
    }

    @Override
    public void onFailure(Throwable t) {
        // ...
    }
}, Executors.newSingleThreadExecutor());
Web
Image editing with Imagen models isn't supported for Web apps. Check back later this year!
Dart
To insert objects with an auto-generated mask, specify ImagenBackgroundMask. Use editImage() and set the editing config to use ImagenEditMode.inpaintInsertion.
import 'dart:typed_data';
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Vertex AI Gemini API backend service
// Optionally specify a location to access the model (for example, `us-central1`)
final ai = FirebaseAI.vertexAI(location: 'us-central1');

// Create an `ImagenModel` instance with an Imagen "capability" model
final model = ai.imagenModel(model: 'imagen-3.0-capability-001');

// This example assumes 'originalImage' is a pre-loaded Uint8List.
// In a real app, this might come from the user's device or a URL.
final Uint8List originalImage = Uint8List(0); // TODO: Load your original image data here.

// Provide the prompt describing the content to be inserted.
final prompt = 'a vase of flowers on the table';

try {
  // Use the editImage API to insert the new content.
  // Pass the original image, the prompt, and an editing configuration.
  final response = await model.editImage(
    sources: [
      ImagenRawImage(originalImage),
      ImagenBackgroundMask(), // Use ImagenBackgroundMask() to auto-generate the mask.
    ],
    prompt: prompt,
    // Define the editing configuration for inpainting and insertion.
    config: const ImagenEditingConfig(
      editMode: ImagenEditMode.inpaintInsertion,
    ),
  );

  // Process the result.
  if (response.images.isNotEmpty) {
    final editedImage = response.images.first.bytes;
    // Use the editedImage (a Uint8List) to display the image, save it, etc.
    print('Image successfully generated!');
  } else {
    // Handle the case where no images were generated.
    print('Error: No images were generated.');
  }
} catch (e) {
  // Handle any potential errors during the API call.
  print('An error occurred: $e');
}
Unity
Image editing with Imagen models isn't supported for Unity. Check back later this year!
Insert objects using a provided mask
The following sample shows how to use inpainting with a mask that you provide to insert content into an image. You provide the original image, a text prompt, and the masked image.
Swift
Image editing with Imagen models isn't supported for Swift. Check back later this year!
Kotlin
To insert objects and provide your own masked image, specify ImagenRawMask with the masked image. Use editImage() and set the editing config to use ImagenEditMode.INPAINT_INSERTION.
// Using this SDK to access Imagen models is a Preview release and requires opt-in
@OptIn(PublicPreviewAPI::class)
suspend fun customizeImage() {
  // Initialize the Vertex AI Gemini API backend service
  // Optionally specify the location to access the model (for example, `us-central1`)
  val ai = Firebase.ai(backend = GenerativeBackend.vertexAI(location = "us-central1"))

  // Create an `ImagenModel` instance with an Imagen "capability" model
  val model = ai.imagenModel("imagen-3.0-capability-001")

  // This example assumes 'originalImage' is a pre-loaded Bitmap.
  // In a real app, this might come from the user's device or a URL.
  val originalImage: Bitmap = TODO("Load your original image Bitmap here")

  // This example assumes 'maskImage' is a pre-loaded Bitmap that contains the masked area.
  // In a real app, this might come from the user's device or a URL.
  val maskImage: Bitmap = TODO("Load your masked image Bitmap here")

  // Provide the prompt describing the content to be inserted.
  val prompt = "a vase of flowers on the table"

  // Use the editImage API to insert the new content.
  // Pass the original image, the masked image, the prompt, and an editing configuration.
  val editedImage = model.editImage(
    referenceImages = listOf(
      ImagenRawImage(originalImage.toImagenInlineImage()),
      ImagenRawMask(maskImage.toImagenInlineImage()), // Use ImagenRawMask() to provide your own masked image.
    ),
    prompt = prompt,
    // Define the editing configuration for inpainting and insertion.
    config = ImagenEditingConfig(ImagenEditMode.INPAINT_INSERTION)
  )

  // Process the resulting 'editedImage' Bitmap, for example, by displaying it in an ImageView.
}
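If your app constructs the masked image itself rather than loading one, you can draw it onto a blank Bitmap. The helper below is a minimal Android sketch, not part of the Firebase AI Logic SDK: the function name is hypothetical, and it assumes the common convention that white pixels mark the area to edit and black pixels mark the area to preserve (confirm the exact mask requirements in the Imagen documentation).

import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.Rect

// Hypothetical helper: builds a mask Bitmap with the same dimensions as the original image.
// Assumption: white marks the area to edit, black marks the area to preserve.
fun buildRectangularMask(original: Bitmap, areaToEdit: Rect): Bitmap {
  val mask = Bitmap.createBitmap(original.width, original.height, Bitmap.Config.ARGB_8888)
  val canvas = Canvas(mask)
  canvas.drawColor(Color.BLACK) // preserve everything by default
  canvas.drawRect(areaToEdit, Paint().apply { color = Color.WHITE }) // region to inpaint
  return mask
}

For example, buildRectangularMask(originalImage, Rect(200, 600, 900, 1000)) could produce a maskImage that marks a rectangle over the table for the vase-of-flowers prompt above.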
Java
To insert objects and provide your own masked image, specify ImagenRawMask with the masked image. Use editImage() and set the editing config to use ImagenEditMode.INPAINT_INSERTION.
// Initialize the Vertex AI Gemini API backend service
// Optionally specify the location to access the model (for example, `us-central1`)
// Create an `ImagenModel` instance with an Imagen "capability" model
ImagenModel imagenModel = FirebaseAI.getInstance(GenerativeBackend.vertexAI("us-central1"))
        .imagenModel(
                /* modelName */ "imagen-3.0-capability-001");

ImagenModelFutures model = ImagenModelFutures.from(imagenModel);

// This example assumes 'originalImage' is a pre-loaded Bitmap.
// In a real app, this might come from the user's device or a URL.
Bitmap originalImage = null; // TODO("Load your original image Bitmap here");

// This example assumes 'maskImage' is a pre-loaded Bitmap that contains the masked area.
// In a real app, this might come from the user's device or a URL.
Bitmap maskImage = null; // TODO("Load your masked image Bitmap here");

// Provide the prompt describing the content to be inserted.
String prompt = "a vase of flowers on the table";

// Define the list of source images for the editImage call.
ImagenRawImage rawOriginalImage = new ImagenRawImage(originalImage);
ImagenRawMask rawMaskedImage = new ImagenRawMask(maskImage); // Use ImagenRawMask() to provide your own masked image.

// Define the editing configuration for inpainting and insertion.
ImagenEditingConfig config = new ImagenEditingConfig.Builder()
        .setEditMode(ImagenEditMode.INPAINT_INSERTION)
        .build();

// Use the editImage API to insert the new content.
// Pass the original image, the masked image, the prompt, and an editing configuration.
Futures.addCallback(model.editImage(Arrays.asList(rawOriginalImage, rawMaskedImage), prompt, config), new FutureCallback<ImagenGenerationResponse>() {
    @Override
    public void onSuccess(ImagenGenerationResponse result) {
        if (result.getImages().isEmpty()) {
            Log.d("ImageEditor", "No images generated");
            return;
        }
        Bitmap editedImage = result.getImages().get(0).asBitmap();
        // Process and use the bitmap to display the image in your UI
    }

    @Override
    public void onFailure(Throwable t) {
        // ...
    }
}, Executors.newSingleThreadExecutor());
Web
Image editing with Imagen models isn't supported for Web apps. Check back later this year!
Dart
To insert objects and provide your own masked image, specify ImagenRawMask with the masked image. Use editImage() and set the editing config to use ImagenEditMode.inpaintInsertion.
import 'dart:typed_data';
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Vertex AI Gemini API backend service
// Optionally specify a location to access the model (for example, `us-central1`)
final ai = FirebaseAI.vertexAI(location: 'us-central1');

// Create an `ImagenModel` instance with an Imagen "capability" model
final model = ai.imagenModel(model: 'imagen-3.0-capability-001');

// This example assumes 'originalImage' is a pre-loaded Uint8List.
// In a real app, this might come from the user's device or a URL.
final Uint8List originalImage = Uint8List(0); // TODO: Load your original image data here.

// This example assumes 'maskImage' is a pre-loaded Uint8List that contains the masked area.
// In a real app, this might come from the user's device or a URL.
final Uint8List maskImage = Uint8List(0); // TODO: Load your masked image data here.

// Provide the prompt describing the content to be inserted.
final prompt = 'a vase of flowers on the table';

try {
  // Use the editImage API to insert the new content.
  // Pass the original image, the masked image, the prompt, and an editing configuration.
  final response = await model.editImage(
    sources: [
      ImagenRawImage(originalImage),
      ImagenRawMask(maskImage), // Use ImagenRawMask() to provide your own masked image.
    ],
    prompt: prompt,
    // Define the editing configuration for inpainting and insertion.
    config: const ImagenEditingConfig(
      editMode: ImagenEditMode.inpaintInsertion,
    ),
  );

  // Process the result.
  if (response.images.isNotEmpty) {
    final editedImage = response.images.first.bytes;
    // Use the editedImage (a Uint8List) to display the image, save it, etc.
    print('Image successfully generated!');
  } else {
    // Handle the case where no images were generated.
    print('Error: No images were generated.');
  }
} catch (e) {
  // Handle any potential errors during the API call.
  print('An error occurred: $e');
}
Unity
Image editing with Imagen models isn't supported for Unity. Check back later this year!
Best practices and limitations
We recommend dilating the mask when editing an image. This can help smooth the borders of an edit and make it seem more convincing. Generally, a dilation value of 1% or 2% (0.01 or 0.02) is recommended.
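The samples on this page don't set a dilation value. The Kotlin sketch below is a rough illustration only: the dilation parameter shown on ImagenRawMask is an assumption about where the SDK exposes this setting, so check the Firebase AI Logic SDK reference for the exact parameter name and placement on your platform.

// Hedged sketch: request roughly 1% mask dilation when providing your own mask.
// The `dilation` parameter is an assumption; verify it against the SDK reference.
val editedImage = model.editImage(
  referenceImages = listOf(
    ImagenRawImage(originalImage.toImagenInlineImage()),
    ImagenRawMask(maskImage.toImagenInlineImage(), dilation = 0.01), // assumed parameter
  ),
  prompt = prompt,
  config = ImagenEditingConfig(ImagenEditMode.INPAINT_INSERTION)
)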