Commit f24d87d: fixed errors in readme
1 parent 7ead20c

README.md (47 additions, 30 deletions)
@@ -1,18 +1,16 @@

# Tensorflow Lite Object Detection with the Tensorflow Object Detection API

[![TensorFlow 1.15](https://img.shields.io/badge/TensorFlow-1.15-FF6F00?logo=tensorflow)](https://github.com/tensorflow/tensorflow/releases/tag/v1.15.0)

![Object Detection Example](doc/object_detection_with_edgetpu.png)

# 1. Train an object detection model using the Tensorflow OD API

The first step in creating an object detector that works with Tensorflow Lite is to train an object detector. For a complete step-by-step guide on how to train your own custom object detector, check out [my Github repository](https://github.com/TannerGilbert/Tensorflow-Object-Detection-API-Train-Model/tree/tf1).

# 2. Convert the model to Tensorflow Lite

After you have a Tensorflow OD model, you can start converting it to Tensorflow Lite.

This is a three-step process:
1. Export frozen inference graph for TFLite
@@ -23,13 +21,34 @@ This is a three-step process:

After training the model you need to export it so that the graph architecture and network operations are compatible with Tensorflow Lite. This can be done with the ```export_tflite_ssd_graph.py``` file.

To make these commands easier to run, let's set up some environment variables:

```bash
export CONFIG_FILE=PATH_TO_BE_CONFIGURED/pipeline.config
export CHECKPOINT_PATH=PATH_TO_BE_CONFIGURED/model.ckpt-XXXX
export OUTPUT_DIR=/tmp/tflite
```

On Windows, use ```set``` instead of ```export```:

```bash
set CONFIG_FILE=PATH_TO_BE_CONFIGURED/pipeline.config
set CHECKPOINT_PATH=PATH_TO_BE_CONFIGURED/model.ckpt-XXXX
set OUTPUT_DIR=C:<path>/tflite
```

XXXX represents the number of the highest checkpoint saved during training.
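Because the checkpoint number changes with every training run, you can let a small helper pick the highest one for you. This is a hedged sketch, not part of the original guide; it assumes the Tensorflow Object Detection API's default ```model.ckpt-NNNN``` naming inside your training directory:

```python
import os
import re


def latest_checkpoint_prefix(training_dir):
    """Return the model.ckpt-NNNN prefix with the highest step number.

    Each checkpoint is stored as several files (e.g. model.ckpt-1234.index),
    so we parse the step number out of each filename and keep the maximum.
    Returns None if no checkpoint files are found.
    """
    pattern = re.compile(r"^(model\.ckpt-(\d+))\.")
    best_step, best_prefix = -1, None
    for name in os.listdir(training_dir):
        match = pattern.match(name)
        if match and int(match.group(2)) > best_step:
            best_step = int(match.group(2))
            best_prefix = os.path.join(training_dir, match.group(1))
    return best_prefix
```

The returned prefix is exactly what ```CHECKPOINT_PATH``` expects, so you could assign it to the variable instead of filling in XXXX by hand.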

```bash
python export_tflite_ssd_graph.py \
    --pipeline_config_path=$CONFIG_FILE \
    --trained_checkpoint_prefix=$CHECKPOINT_PATH \
    --output_directory=$OUTPUT_DIR \
    --add_postprocessing_op=true
```

In the ```OUTPUT_DIR``` you should now see two files: tflite_graph.pb and tflite_graph.pbtxt.

## 2.2 Build Tensorflow from source

Now, you need to convert the actual model into an optimized FlatBuffer format that runs efficiently on Tensorflow Lite. This can be done with the Tensorflow Lite Optimizing Converter (TOCO).
@@ -53,35 +72,33 @@ To create a optimized Tensorflow Lite model we need to run TOCO. TOCO is locate

If you want to convert a quantized model, you can run the following command:

```bash
bazel run --config=opt tensorflow/lite/toco:toco -- \
    --input_file=$OUTPUT_DIR/tflite_graph.pb \
    --output_file=$OUTPUT_DIR/detect.tflite \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 \
    --std_values=128 \
    --change_concat_input_ranges=false \
    --allow_custom_ops
```
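The ```--mean_values``` and ```--std_values``` flags tell TOCO how the quantized uint8 input relates to the real-valued input the model was trained on: real_value = (quantized_value - mean) / std. With both set to 128, the 0-255 pixel range maps to roughly [-1, 1]. A small illustration of this mapping (not part of the original guide):

```python
def dequantize(quantized_value, mean=128.0, std=128.0):
    """Map a uint8 input value to the real-valued range the model expects.

    Mirrors TOCO's --mean_values/--std_values convention:
    real = (quantized - mean) / std.
    """
    return (quantized_value - mean) / std


# With mean=std=128: pixel 0 maps to -1.0, pixel 128 to 0.0,
# and pixel 255 to 127/128 = 0.9921875.
```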

If you are using a floating point model, you need to change the command slightly:

```bash
bazel run --config=opt tensorflow/lite/toco:toco -- \
    --input_file=$OUTPUT_DIR/tflite_graph.pb \
    --output_file=$OUTPUT_DIR/detect.tflite \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --inference_type=FLOAT \
    --allow_custom_ops
```

If everything ran successfully, you should now see a third file in the /tmp/tflite directory called detect.tflite.

### 2.3.2 Create new labelmap for Tensorflow Lite
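The body of this section is not shown in the hunk, but as a hedged illustration: Tensorflow Lite demo apps typically expect a plain-text labelmap with one class name per line, rather than the protobuf-style ```label_map.pbtxt``` used during training. A minimal converter (the exact format your inference code expects may differ) could pull the ```name``` fields out of the pbtxt text:

```python
import re


def pbtxt_to_labels(pbtxt_text):
    """Extract item names from a label_map.pbtxt string, in order of appearance.

    Training labelmaps look like:   item { id: 1 name: 'cat' }
    TFLite demo apps usually want:  one bare class name per line.
    The \\b anchor keeps 'display_name' fields from matching.
    """
    return re.findall(r"\bname:\s*['\"]([^'\"]+)['\"]", pbtxt_text)
```

Writing ```"\n".join(pbtxt_to_labels(text))``` to a labelmap.txt file then gives the flat list most TFLite object detection examples load.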
