Commit dc11af5

added Tensorflow Lite conversion steps
1 parent a818a58 commit dc11af5

File tree

1 file changed: +97 -1 lines changed

README.md

Lines changed: 97 additions & 1 deletion
@@ -301,12 +301,108 @@ python export_inference_graph.py --input_type image_tensor --pipeline_config_pat
XXXX represents the highest number.

### 8. Exporting Tensorflow Lite model

If you want to run the model on an edge device like a Raspberry Pi, or if you want to run it on a smartphone, it's a good idea to convert your model to the Tensorflow Lite format. This can be done with the ```export_tflite_ssd_graph.py``` file.
```bash
mkdir inference_graph

python export_tflite_ssd_graph.py --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory inference_graph --add_postprocessing_op=true
```
After executing the command, there should be two new files in the inference_graph folder: a tflite_graph.pb and a tflite_graph.pbtxt file.

Now you have a graph architecture and network operations that are compatible with Tensorflow Lite. To finish the conversion, you now need to convert the actual model.
### 9. Using TOCO to Create Optimized TensorFlow Lite Model

To convert the frozen graph to Tensorflow Lite, we need to run it through the Tensorflow Lite Optimizing Converter (TOCO). TOCO converts the model into an optimized FlatBuffer format that runs efficiently on Tensorflow Lite.

For this to work, you need to have Tensorflow built from source. This is a tedious task which I won't cover in this tutorial, but you can follow the [official installation guide](https://www.tensorflow.org/install/source_windows). I'd recommend creating an [Anaconda Environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) specifically for this purpose.

After building Tensorflow from source, you're ready to start with the conversion.
#### 9.1 Create Tensorflow Lite model

To create an optimized Tensorflow Lite model, we need to run TOCO. TOCO is located in the tensorflow/lite directory, which you should have after building Tensorflow from source.

If you want to convert a quantized model, you can run the following command (note that $OUTPUT_DIR must contain the tflite_graph.pb file from the previous step, so either copy it there or point OUTPUT_DIR at your inference_graph folder):
```bash
export OUTPUT_DIR=/tmp/tflite
bazel run --config=opt tensorflow/lite/toco:toco -- \
--input_file=$OUTPUT_DIR/tflite_graph.pb \
--output_file=$OUTPUT_DIR/detect.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=QUANTIZED_UINT8 \
--mean_values=128 \
--std_values=128 \
--change_concat_input_ranges=false \
--allow_custom_ops
```
If you are using a floating point model like a Faster R-CNN, you'll need to change the command a bit:
```bash
export OUTPUT_DIR=/tmp/tflite
bazel run --config=opt tensorflow/lite/toco:toco -- \
--input_file=$OUTPUT_DIR/tflite_graph.pb \
--output_file=$OUTPUT_DIR/detect.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=FLOAT \
--allow_custom_ops
```
If you are working on Windows, you might need to remove the single quotes (') if the command doesn't work. For more information on how to use TOCO, check out [the official instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md).
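If you'd rather avoid the bazel build, the same float conversion can also be done with the Python ```tf.lite.TFLiteConverter``` API that ships with Tensorflow 1.x. This is a minimal sketch, not the exact command from above; the file paths are assumptions based on the export step in section 8.

```python
# Minimal sketch: convert tflite_graph.pb to detect.tflite with the
# Python TFLiteConverter (Tensorflow 1.x) instead of the bazel TOCO binary.
# Paths are assumptions based on the export step above.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="inference_graph/tflite_graph.pb",
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=[
        "TFLite_Detection_PostProcess",
        "TFLite_Detection_PostProcess:1",
        "TFLite_Detection_PostProcess:2",
        "TFLite_Detection_PostProcess:3",
    ],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]},
)
# TFLite_Detection_PostProcess is a custom op, so custom ops must be allowed.
converter.allow_custom_ops = True

tflite_model = converter.convert()
with open("inference_graph/detect.tflite", "wb") as f:
    f.write(tflite_model)
```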
#### 9.2 Create new labelmap for Tensorflow Lite

Next you need to create a label map for Tensorflow Lite, since it doesn't have the same format as a classical Tensorflow labelmap.

Tensorflow labelmap:
```bash
item {
    name: "a"
    id: 1
    display_name: "a"
}
item {
    name: "b"
    id: 2
    display_name: "b"
}
item {
    name: "c"
    id: 3
    display_name: "c"
}
```
The Tensorflow Lite labelmap format only has the display_names (if there is no display_name the name is used):

```bash
a
b
c
```
So basically, the only thing you need to do is create a new labelmap file and copy the display_names (or names) from the other labelmap file into it, as sketched below.
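As a quick sketch of that copy step, the snippet below extracts the display_names (falling back to name) from a labelmap in the format shown above. The file names ```training/labelmap.pbtxt``` and ```labelmap.txt``` are assumptions; adjust them to your project.

```python
# Hypothetical helper: turn a Tensorflow labelmap (.pbtxt) into a
# Tensorflow Lite labels file with one display_name per line.
# Assumes the simple flat item { ... } format shown above.
import re

def convert_labelmap(pbtxt_path, txt_path):
    with open(pbtxt_path) as f:
        content = f.read()
    labels = []
    for item in re.findall(r"item\s*{(.*?)}", content, re.DOTALL):
        # Prefer display_name; fall back to name when it is missing.
        match = (re.search(r'display_name:\s*"([^"]*)"', item)
                 or re.search(r'\bname:\s*"([^"]*)"', item))
        if match:
            labels.append(match.group(1))
    with open(txt_path, "w") as f:
        f.write("\n".join(labels))

convert_labelmap("training/labelmap.pbtxt", "labelmap.txt")
```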
### 10. Using the model for inference

After training the model it can be used in many ways. For examples on how to use the model, check out my other repositories; a minimal Tensorflow Lite inference sketch follows the list below.

* [Inference with Tensorflow 1.x](https://github.com/TannerGilbert/Tutorials/tree/master/Tensorflow%20Object%20Detection)
* [Tensorflow-Object-Detection-with-Tensorflow-2.0](https://github.com/TannerGilbert/Tensorflow-Object-Detection-with-Tensorflow-2.0)
* [Run TFLite model with EdgeTPU](https://github.com/TannerGilbert/Google-Coral-Edge-TPU/blob/master/tflite_object_detection.py)
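As a starting point, here is a minimal sketch of loading the converted ```detect.tflite``` with the Tensorflow Lite Interpreter. The model path and the 300x300 input size are assumptions carried over from the conversion commands above, and the dummy input stands in for a real preprocessed image.

```python
# Minimal sketch: run detect.tflite with the Tensorflow Lite Interpreter.
# Path and input size are assumptions from the conversion steps above.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="inference_graph/detect.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input; replace with a real 300x300 RGB image,
# normalized for a float model (uint8 for a quantized one).
image = np.zeros((1, 300, 300, 3), dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()

# The four postprocess outputs: boxes, classes, scores, number of detections.
boxes = interpreter.get_tensor(output_details[0]["index"])
classes = interpreter.get_tensor(output_details[1]["index"])
scores = interpreter.get_tensor(output_details[2]["index"])
num_detections = interpreter.get_tensor(output_details[3]["index"])
print(num_detections, scores)
```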
## Appendix