# 1. Train an object detection model using the Tensorflow OD API
The first step in creating an object detector that works with Tensorflow Lite is to train an object detector. For a complete step-by-step guide on how to train your own custom object detector, check out [my Github repository](https://github.com/TannerGilbert/Tensorflow-Object-Detection-API-Train-Model/tree/tf1).
# 2. Convert the model to Tensorflow Lite
After you have a trained Tensorflow Object Detection model, you can start converting it to Tensorflow Lite.
This is a three-step process:
1. Export frozen inference graph for TFLite
2. Build Tensorflow from source
3. Use TOCO to create an optimized Tensorflow Lite model

## 2.1 Export frozen inference graph for TFLite
After training the model, you need to export it so that the graph architecture and network operations are compatible with Tensorflow Lite. This can be done with the ```export_tflite_ssd_graph.py``` script.
To make these commands easier to run, let's set up some environment variables (shown here with the Windows ```set``` syntax; on Linux/macOS use ```export``` instead):

```bash
set CONFIG_FILE=PATH_TO_BE_CONFIGURED/pipeline.config
set CHECKPOINT_PATH=PATH_TO_BE_CONFIGURED/model.ckpt-XXXX
set OUTPUT_DIR=C:<path>/tflite
```
XXXX represents the number of the highest (most recent) checkpoint.
```bash
python export_tflite_ssd_graph.py \
    --pipeline_config_path=$CONFIG_FILE \
    --trained_checkpoint_prefix=$CHECKPOINT_PATH \
    --output_directory=$OUTPUT_DIR \
    --add_postprocessing_op=true
```
In the ```OUTPUT_DIR``` you should now see two files: ```tflite_graph.pb``` and ```tflite_graph.pbtxt```.
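
If you want to sanity-check the export, you can parse the frozen graph in Python. This is a minimal sketch, assuming a TF 1.x-compatible installation and that it is run from inside ```OUTPUT_DIR```:

```python
import tensorflow as tf

# Parse the exported frozen graph to confirm it is a valid GraphDef.
graph_def = tf.compat.v1.GraphDef()
with open('tflite_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

print(len(graph_def.node), 'nodes in tflite_graph.pb')
```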
## 2.2 Build Tensorflow from source
Now, you need to convert the actual model into an optimized FlatBuffer format that runs efficiently on Tensorflow Lite. This can be done with the Tensorflow Lite Optimizing Converter (TOCO).
To create an optimized Tensorflow Lite model, we need to run TOCO. TOCO is located inside the Tensorflow repository that you just built from source.
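
If building Tensorflow from source is a problem, TF 1.x also exposes essentially the same conversion through the Python ```tf.compat.v1.lite.TFLiteConverter``` API. The following is only a sketch of that alternative, not the route this guide takes; the array names and input shape assume the SSD graph exported with ```export_tflite_ssd_graph.py``` above:

```python
import tensorflow as tf

# Alternative to the bazel/TOCO run below: convert the frozen TFLite graph in Python.
# Array names and input shape assume an SSD export and may differ for your model.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=[
        'TFLite_Detection_PostProcess',
        'TFLite_Detection_PostProcess:1',
        'TFLite_Detection_PostProcess:2',
        'TFLite_Detection_PostProcess:3',
    ],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]},
)
converter.allow_custom_ops = True  # the detection postprocessing op is a custom op
with open('detect.tflite', 'wb') as f:
    f.write(converter.convert())
```

For a quantized model you would additionally set ```converter.inference_type``` and ```converter.quantized_input_stats```.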
If you want to convert a quantized model, you can run the following command:

```bash
export OUTPUT_DIR=/tmp/tflite
bazel run --config=opt tensorflow/lite/toco:toco -- \
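    --input_file=$OUTPUT_DIR/tflite_graph.pb \
    --output_file=$OUTPUT_DIR/detect.tflite \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 \
    --std_values=128 \
    --change_concat_input_ranges=false \
    --allow_custom_ops
```

The flags above are a sketch taken from the official instructions linked below for an SSD model exported with ```export_tflite_ssd_graph.py```; the input shape and array names may need adjusting for your model.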
If you are working on Windows, you might need to remove the ' characters if the command doesn't work. For more information on how to use TOCO, check out [the official instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md).
If things ran successfully, you should now see a third file in the ```/tmp/tflite``` directory called ```detect.tflite```.
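
To quickly verify the converted model, you can load it with the TFLite interpreter in Python. This is a minimal sketch, assuming the quantized SSD model produced above (uint8 input of shape 1x300x300x3):

```python
import numpy as np
import tensorflow as tf

# Load the converted FlatBuffer and run a single dummy inference.
interpreter = tf.lite.Interpreter(model_path='/tmp/tflite/detect.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed an all-zero image with the model's expected shape and dtype.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

# The postprocessing op outputs detection boxes, classes, scores and count.
for detail in output_details:
    print(detail['name'], interpreter.get_tensor(detail['index']).shape)
```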