Migrate from the legacy custom model API
Version 22.0.2 of the firebase-ml-model-interpreter library introduces a new getLatestModelFile() method, which gets the location on the device of custom models. You can use this method to directly instantiate a TensorFlow Lite Interpreter object, which you can use instead of the FirebaseModelInterpreter wrapper.
Going forward, this is the preferred approach. Because the TensorFlow Lite interpreter version is no longer coupled with the Firebase library version, you have more flexibility to upgrade to new versions of TensorFlow Lite when you want, or to more easily use custom TensorFlow Lite builds.
This page shows how you can migrate from using FirebaseModelInterpreter to the TensorFlow Lite Interpreter.
1. Update project dependencies
Update your project's dependencies to include version 22.0.2 of the firebase-ml-model-interpreter library (or newer) and the tensorflow-lite library:
Before
implementation("com.google.firebase:firebase-ml-model-interpreter:22.0.1")
After
implementation("com.google.firebase:firebase-ml-model-interpreter:22.0.2")
implementation("org.tensorflow:tensorflow-lite:2.0.0")
2. Create a TensorFlow Lite interpreter instead of a FirebaseModelInterpreter
Instead of creating a FirebaseModelInterpreter, get the model's location on device with getLatestModelFile() and use it to create a TensorFlow Lite Interpreter.
Before
Kotlin
val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
val options = FirebaseModelInterpreterOptions.Builder(remoteModel).build()
val interpreter = FirebaseModelInterpreter.getInstance(options)
Java
FirebaseCustomRemoteModel remoteModel =
        new FirebaseCustomRemoteModel.Builder("your_model").build();
FirebaseModelInterpreterOptions options =
        new FirebaseModelInterpreterOptions.Builder(remoteModel).build();
FirebaseModelInterpreter interpreter = FirebaseModelInterpreter.getInstance(options);
After
Kotlin
val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)
    .addOnCompleteListener { task ->
        val modelFile = task.getResult()
        if (modelFile != null) {
            // Instantiate an org.tensorflow.lite.Interpreter object.
            interpreter = Interpreter(modelFile)
        }
    }
Java
FirebaseCustomRemoteModel remoteModel =
        new FirebaseCustomRemoteModel.Builder("your_model").build();
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)
    .addOnCompleteListener(new OnCompleteListener<File>() {
        @Override
        public void onComplete(@NonNull Task<File> task) {
            File modelFile = task.getResult();
            if (modelFile != null) {
                // Instantiate an org.tensorflow.lite.Interpreter object.
                Interpreter interpreter = new Interpreter(modelFile);
            }
        }
    });
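Note that getLatestModelFile() can only return a file for a model that has already been downloaded to the device; otherwise the task's result can be null, which is why the examples above check for it. If you don't download the model elsewhere in your app, trigger the download first with FirebaseModelManager.download(). The following Kotlin sketch shows one way to do this; the Wi-Fi-only condition is just an example, not a requirement.
Kotlin
// Sketch: make sure the model is downloaded before requesting its local file.
// The requireWifi() condition is only an example.
val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()
FirebaseModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener {
        // The model is now on the device, so getLatestModelFile() can return it.
    }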
3. Update input and output preparation code
With FirebaseModelInterpreter, you specify the model's input and output shapes by passing a FirebaseModelInputOutputOptions object to the interpreter when you run it.
For the TensorFlow Lite interpreter, you instead allocate ByteBuffer objects with the right size for your model's input and output.
For example, if your model has an input shape of [1 224 224 3] float values and an output shape of [1 1000] float values, make these changes:
Before
Kotlin
val inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 224, 224, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 1000))
    .build()

val input = ByteBuffer.allocateDirect(224 * 224 * 3 * 4).order(ByteOrder.nativeOrder())
// Then populate with input data.

val inputs = FirebaseModelInputs.Builder()
    .add(input)
    .build()

interpreter.run(inputs, inputOutputOptions)
    .addOnSuccessListener { outputs ->
        // ...
    }
    .addOnFailureListener {
        // Task failed with an exception.
        // ...
    }
Java
FirebaseModelInputOutputOptions inputOutputOptions =
        new FirebaseModelInputOutputOptions.Builder()
            .setInputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 224, 224, 3})
            .setOutputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 1000})
            .build();

float[][][][] input = new float[1][224][224][3];
// Then populate with input data.

FirebaseModelInputs inputs = new FirebaseModelInputs.Builder()
    .add(input)
    .build();

interpreter.run(inputs, inputOutputOptions)
    .addOnSuccessListener(
        new OnSuccessListener<FirebaseModelOutputs>() {
            @Override
            public void onSuccess(FirebaseModelOutputs result) {
                // ...
            }
        })
    .addOnFailureListener(
        new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception.
                // ...
            }
        });
After
Kotlin
val inBufferSize = 1 * 224 * 224 * 3 * java.lang.Float.SIZE / java.lang.Byte.SIZE
val inputBuffer = ByteBuffer.allocateDirect(inBufferSize).order(ByteOrder.nativeOrder())
// Then populate with input data.

val outBufferSize = 1 * 1000 * java.lang.Float.SIZE / java.lang.Byte.SIZE
val outputBuffer = ByteBuffer.allocateDirect(outBufferSize).order(ByteOrder.nativeOrder())

interpreter.run(inputBuffer, outputBuffer)
Java
int inBufferSize = 1 * 224 * 224 * 3 * java.lang.Float.SIZE / java.lang.Byte.SIZE;
ByteBuffer inputBuffer =
        ByteBuffer.allocateDirect(inBufferSize).order(ByteOrder.nativeOrder());
// Then populate with input data.

int outBufferSize = 1 * 1000 * java.lang.Float.SIZE / java.lang.Byte.SIZE;
ByteBuffer outputBuffer =
        ByteBuffer.allocateDirect(outBufferSize).order(ByteOrder.nativeOrder());

interpreter.run(inputBuffer, outputBuffer);
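How you populate inputBuffer depends on your model's expected input. For an image model with a [1 224 224 3] float input, you might convert a Bitmap to normalized float values. The helper below is only an illustrative sketch, not part of the Firebase or TensorFlow Lite API: it assumes a 224x224 ARGB_8888 Bitmap and a model that expects RGB values scaled to [0, 1].
Kotlin
import android.graphics.Bitmap
import java.nio.ByteBuffer

// Illustrative helper (hypothetical, not a library API): fills a direct
// ByteBuffer from a 224x224 Bitmap, assuming the model expects float RGB
// values normalized to [0, 1].
fun fillInputBuffer(bitmap: Bitmap, inputBuffer: ByteBuffer) {
    inputBuffer.rewind()
    val pixels = IntArray(224 * 224)
    bitmap.getPixels(pixels, 0, 224, 0, 0, 224, 224)
    for (pixel in pixels) {
        inputBuffer.putFloat(((pixel shr 16) and 0xFF) / 255.0f) // red
        inputBuffer.putFloat(((pixel shr 8) and 0xFF) / 255.0f)  // green
        inputBuffer.putFloat((pixel and 0xFF) / 255.0f)          // blue
    }
}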
4. Update output handling code
Finally, instead of getting the model's output with the FirebaseModelOutputs object's getOutput() method, convert the ByteBuffer output to whatever structure is convenient for your use case.
For example, if you're doing classification, you might make changes like the following:
Before
Kotlin
val output = result.getOutput<Array<FloatArray>>(0)
val probabilities = output[0]
try {
    val reader = BufferedReader(InputStreamReader(assets.open("custom_labels.txt")))
    for (probability in probabilities) {
        val label: String = reader.readLine()
        println("$label: $probability")
    }
} catch (e: IOException) {
    // File not found?
}
Java
float[][] output = result.getOutput(0);
float[] probabilities = output[0];
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(getAssets().open("custom_labels.txt")));
    for (float probability : probabilities) {
        String label = reader.readLine();
        Log.i(TAG, String.format("%s: %1.4f", label, probability));
    }
} catch (IOException e) {
    // File not found?
}
After
Kotlin
modelOutput.rewind()
val probabilities = modelOutput.asFloatBuffer()
try {
    val reader = BufferedReader(
            InputStreamReader(assets.open("custom_labels.txt")))
    for (i in 0 until probabilities.capacity()) {
        val label: String = reader.readLine()
        val probability = probabilities.get(i)
        println("$label: $probability")
    }
} catch (e: IOException) {
    // File not found?
}
Java
modelOutput.rewind();
FloatBuffer probabilities = modelOutput.asFloatBuffer();
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(getAssets().open("custom_labels.txt")));
    for (int i = 0; i < probabilities.capacity(); i++) {
        String label = reader.readLine();
        float probability = probabilities.get(i);
        Log.i(TAG, String.format("%s: %1.4f", label, probability));
    }
} catch (IOException e) {
    // File not found?
}
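If you only need the most likely label rather than printing every entry, you can copy the buffer into an array and take the index of the maximum value. A minimal Kotlin sketch, assuming the same modelOutput buffer and custom_labels.txt asset used above:
Kotlin
// Sketch: pick the highest-probability class from the output buffer.
modelOutput.rewind()
val floatBuffer = modelOutput.asFloatBuffer()
val probabilities = FloatArray(floatBuffer.capacity())
floatBuffer.get(probabilities)
val labels = assets.open("custom_labels.txt").bufferedReader().readLines()
val bestIndex = probabilities.indices.maxByOrNull { probabilities[it] } ?: 0
println("Top result: ${labels[bestIndex]} (${probabilities[bestIndex]})")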