Migrate from the legacy custom model API

Version 0.20.0 of the Firebase/MLModelInterpreter library introduces a new getLatestModelFilePath() method, which returns the location on the device of the latest downloaded custom model. You can use this path to instantiate a TensorFlow Lite Interpreter object directly, in place of Firebase's ModelInterpreter wrapper.

Going forward, this is the preferred approach. Because the TensorFlow Lite interpreter version is no longer coupled with the Firebase library version, you have more flexibility to upgrade to new versions of TensorFlow Lite when you want, or more easily use custom TensorFlow Lite builds.

This page shows how you can migrate from using ModelInterpreter to the TensorFlow Lite Interpreter.

1. Update project dependencies

Update your project's Podfile to include version 0.20.0 of the Firebase/MLModelInterpreter library (or newer) and the TensorFlow Lite library:

Before

Swift

pod 'Firebase/MLModelInterpreter', '0.19.0'

Objective-C

pod 'Firebase/MLModelInterpreter', '0.19.0'

After

Swift

pod 'Firebase/MLModelInterpreter', '~> 0.20.0'
pod 'TensorFlowLiteSwift'

Objective-C

pod 'Firebase/MLModelInterpreter', '~> 0.20.0'
pod 'TensorFlowLiteObjC'
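
After you update your Podfile, run pod install and build using the generated .xcworkspace file so the new pods are picked up.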

2. Create a TensorFlow Lite interpreter instead of a Firebase ModelInterpreter

Instead of creating a Firebase ModelInterpreter, get the model's location on device with getLatestModelFilePath() and use it to create a TensorFlow Lite Interpreter.

Before

Swift

let remoteModel = CustomRemoteModel(
    name: "your_remote_model"  // The name you assigned in the Firebase console.
)
interpreter = ModelInterpreter.modelInterpreter(remoteModel: remoteModel)

Objective-C

// Initialize using the name you assigned in the Firebase console.
FIRCustomRemoteModel *remoteModel =
    [[FIRCustomRemoteModel alloc] initWithName:@"your_remote_model"];
interpreter = [FIRModelInterpreter modelInterpreterForRemoteModel:remoteModel];

After

Swift

let remoteModel = CustomRemoteModel(
    name: "your_remote_model"  // The name you assigned in the Firebase console.
)
ModelManager.modelManager().getLatestModelFilePath(remoteModel) { (remoteModelPath, error) in
    guard error == nil, let remoteModelPath = remoteModelPath else { return }
    do {
        interpreter = try Interpreter(modelPath: remoteModelPath)
    } catch {
        // Handle the initialization error.
    }
}

Objective-C

FIRCustomRemoteModel *remoteModel =
    [[FIRCustomRemoteModel alloc] initWithName:@"your_remote_model"];
[[FIRModelManager modelManager] getLatestModelFilePath:remoteModel
                                            completion:^(NSString * _Nullable filePath,
                                                         NSError * _Nullable error) {
    if (error != nil || filePath == nil) { return; }
    NSError *tfError = nil;
    interpreter = [[TFLInterpreter alloc] initWithModelPath:filePath error:&tfError];
}];
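
Note that getLatestModelFilePath() can only return a path for a model that has already been downloaded to the device; if no model has been downloaded yet, the completion is expected to report an error rather than a path. If you don't trigger the download elsewhere, you can do so with the same library's model manager. The following is a minimal sketch; the download conditions shown are illustrative and should be adjusted to your app's needs:

Swift

let remoteModel = CustomRemoteModel(name: "your_remote_model")
let conditions = ModelDownloadConditions(
    allowsCellularAccess: true,
    allowsBackgroundDownloading: true
)
// Starts (or resumes) the download and returns a Progress object you
// can observe if you want to surface download state in your UI.
let downloadProgress = ModelManager.modelManager().download(
    remoteModel,
    conditions: conditions
)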

3. Update input and output preparation code

With ModelInterpreter, you specify the model's input and output shapes by passing a ModelInputOutputOptions object to the interpreter when you run it.

For the TensorFlow Lite interpreter, you instead call allocateTensors() to allocate space for the model's input and output tensors, then copy your input data to the input tensor.

For example, if your model has an input shape of [1, 224, 224, 3] float values and an output shape of [1, 1000] float values, make these changes:

Before

Swift

let ioOptions = ModelInputOutputOptions()
do {
    try ioOptions.setInputFormat(
        index: 0,
        type: .float32,
        dimensions: [1, 224, 224, 3]
    )
    try ioOptions.setOutputFormat(
        index: 0,
        type: .float32,
        dimensions: [1, 1000]
    )
} catch let error as NSError {
    print("Failed to set input or output format with error: \(error.localizedDescription)")
}

let inputs = ModelInputs()
do {
    let inputData = Data()
    // Then populate with input data.
    try inputs.addInput(inputData)
} catch let error {
    print("Failed to add input: \(error)")
}

interpreter.run(inputs: inputs, options: ioOptions) { outputs, error in
    guard error == nil, let outputs = outputs else { return }
    // Process outputs
    // ...
}

Objective-C

FIRModelInputOutputOptions *ioOptions = [[FIRModelInputOutputOptions alloc] init];
NSError *error;
[ioOptions setInputFormatForIndex:0
                             type:FIRModelElementTypeFloat32
                       dimensions:@[@1, @224, @224, @3]
                            error:&error];
if (error != nil) { return; }
[ioOptions setOutputFormatForIndex:0
                              type:FIRModelElementTypeFloat32
                        dimensions:@[@1, @1000]
                             error:&error];
if (error != nil) { return; }

FIRModelInputs *inputs = [[FIRModelInputs alloc] init];
NSMutableData *inputData = [[NSMutableData alloc] initWithCapacity:0];
// Then populate with input data.
[inputs addInput:inputData error:&error];
if (error != nil) { return; }

[interpreter runWithInputs:inputs
                   options:ioOptions
                completion:^(FIRModelOutputs * _Nullable outputs,
                             NSError * _Nullable error) {
    if (error != nil || outputs == nil) {
        return;
    }
    // Process outputs
    // ...
}];

After

Swift

do {
    try interpreter.allocateTensors()

    let inputData = Data()
    // Then populate with input data.
    try interpreter.copy(inputData, toInputAt: 0)

    try interpreter.invoke()
} catch let err {
    print(err.localizedDescription)
}

Objective-C

NSError *error = nil;

[interpreter allocateTensorsWithError:&error];
if (error != nil) { return; }

TFLTensor *input = [interpreter inputTensorAtIndex:0 error:&error];
if (error != nil) { return; }

NSMutableData *inputData = [[NSMutableData alloc] initWithCapacity:0];
// Then populate with input data.
[input copyData:inputData error:&error];
if (error != nil) { return; }

[interpreter invokeWithError:&error];
if (error != nil) { return; }
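
Both samples above leave the input data empty ("Then populate with input data"). As one illustration of that step, a [1, 224, 224, 3] Float32 input can be packed from a flat array of pixel values. The zero-filled pixelValues array here is a hypothetical placeholder for your preprocessed image data:

Swift

// Hypothetical placeholder: 1 x 224 x 224 x 3 normalized RGB values.
let pixelValues = [Float32](repeating: 0, count: 1 * 224 * 224 * 3)
// Pack the Float32 array into the raw byte buffer the interpreter expects.
let inputData = pixelValues.withUnsafeBufferPointer { Data(buffer: $0) }
try interpreter.copy(inputData, toInputAt: 0)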

4. Update output handling code

Finally, instead of getting the model's output with the ModelOutputs object's output() method, get the output tensor from the interpreter and convert its data to whatever structure is convenient for your use case.

For example, if you're doing classification, you might make changes like the following:

Before

Swift

let output = try? outputs.output(index: 0) as? [[NSNumber]]
let probabilities = output?[0]

guard let labelPath = Bundle.main.path(
    forResource: "custom_labels",
    ofType: "txt"
) else { return }
let fileContents = try? String(contentsOfFile: labelPath)
guard let labels = fileContents?.components(separatedBy: "\n") else { return }

for i in 0 ..< labels.count {
    if let probability = probabilities?[i] {
        print("\(labels[i]): \(probability)")
    }
}

Objective-C

// Get the first and only output of an inference with a batch size of 1.
NSError *error;
NSArray *probabilities = [outputs outputAtIndex:0 error:&error][0];
if (error != nil) { return; }

NSString *labelPath = [NSBundle.mainBundle pathForResource:@"custom_labels"
                                                    ofType:@"txt"];
NSString *fileContents = [NSString stringWithContentsOfFile:labelPath
                                                   encoding:NSUTF8StringEncoding
                                                      error:&error];
if (error != nil || fileContents == nil) { return; }
NSArray<NSString *> *labels = [fileContents componentsSeparatedByString:@"\n"];

for (int i = 0; i < labels.count; i++) {
    NSString *label = labels[i];
    NSNumber *probability = probabilities[i];
    NSLog(@"%@: %f", label, probability.floatValue);
}

After

Swift

do {
    // After calling interpreter.invoke():
    let output = try interpreter.output(at: 0)
    let probabilities =
        UnsafeMutableBufferPointer<Float32>.allocate(capacity: 1000)
    // Deallocate the buffer when this scope exits to avoid leaking it.
    defer { probabilities.deallocate() }
    output.data.copyBytes(to: probabilities)

    guard let labelPath = Bundle.main.path(
        forResource: "custom_labels",
        ofType: "txt"
    ) else { return }
    let fileContents = try? String(contentsOfFile: labelPath)
    guard let labels = fileContents?.components(separatedBy: "\n") else { return }

    for i in labels.indices {
        print("\(labels[i]): \(probabilities[i])")
    }
} catch let err {
    print(err.localizedDescription)
}

Objective-C

NSError *error = nil;

TFLTensor *output = [interpreter outputTensorAtIndex:0 error:&error];
if (error != nil) { return; }
NSData *outputData = [output dataWithError:&error];
if (error != nil) { return; }
Float32 probabilities[outputData.length / 4];
[outputData getBytes:&probabilities length:outputData.length];

NSString *labelPath = [NSBundle.mainBundle pathForResource:@"custom_labels"
                                                    ofType:@"txt"];
NSString *fileContents = [NSString stringWithContentsOfFile:labelPath
                                                   encoding:NSUTF8StringEncoding
                                                      error:&error];
if (error != nil || fileContents == nil) { return; }
NSArray<NSString *> *labels = [fileContents componentsSeparatedByString:@"\n"];

for (int i = 0; i < labels.count; i++) {
    NSLog(@"%@: %f", labels[i], probabilities[i]);
}
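
If you'd rather avoid the manually allocated buffer in the Swift sample above, here is a minimal alternative sketch, assuming the same output tensor as above, that copies the output bytes into a managed Swift array:

Swift

// Alternative sketch: bind the raw output bytes to Float32 and copy
// them into a managed array, so no manual deallocation is needed.
let probabilities: [Float32] = output.data.withUnsafeBytes {
    Array($0.bindMemory(to: Float32.self))
}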
