# Tensorflow_API-Custom_object_detection
A sample project to detect a custom object using the Tensorflow object detection API
## Folder Structure
- Tensorflow_API-Custom_object_detection
	- pre_trained_models
		- *downloaded files for the chosen pre-trained model will come here*
	- dataset
		- Annotations
			- *Annotations for your training images will come here*
		- JPEGImages
		- testImages
		- label.pbtxt
		- train.record
	- IG
		- *inference graph of the trained model will be saved here*
	- CP
		- *checkpoints of the trained model will be saved here*
	- eval.ipynb
	- train.ipynb
	- *config file for the chosen model*
## Steps
#### Create folders
Create the folders following the structure given above (you can use a different name for any of the folders if you want).
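The folder layout above can be created in one go. Below is a minimal sketch using only the Python standard library; the `BASE` name is an assumption, and the folder names simply mirror the structure described in this README:

```python
import os

# Hypothetical project root; change this to wherever you keep the project.
BASE = "Tensorflow_API-Custom_object_detection"

# Folders taken from the structure described above.
folders = [
    "pre_trained_models",
    "dataset/Annotations",
    "dataset/JPEGImages",
    "dataset/testImages",
    "IG",
    "CP",
]

for folder in folders:
    # exist_ok lets the script be re-run safely without errors.
    os.makedirs(os.path.join(BASE, folder), exist_ok=True)
```

Running it once sets up every directory the later steps expect.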
#### Prepare train and test images
This repository contains train and test images for detection of the "UE Roll" blue Bluetooth speaker, but I highly recommend that you create your own dataset. Pick an object you want to detect and take some pictures of it with varying backgrounds, angles, and distances. Some of the sample images used in this project are given below:

Once you have captured the images, transfer them to your PC and resize them to a smaller size (the given images have a size of 605 x 454) so that your training will go smoothly without running out of memory. Now rename the images (for better referencing later) and divide them into two chunks: one chunk for training (80%) and the other for testing (20%). Finally, move the training images into the *JPEGImages* folder and the testing images into the *testImages* folder.
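The 80/20 split described above can be scripted instead of done by hand. A sketch using only the standard library; the file names are stand-ins for your own captured images, and the seed is only there to make the shuffle reproducible:

```python
import random

# Stand-in file names; in practice, list the files you actually captured.
images = [f"img_{i:03d}.jpg" for i in range(100)]

random.seed(42)          # reproducible shuffle
random.shuffle(images)

split = int(len(images) * 0.8)   # 80% for training
train_images = images[:split]    # move these into JPEGImages/
test_images = images[split:]     # move these into testImages/

print(len(train_images), len(test_images))  # 80 20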
#### Label the data
Now it's time to label your training data. We will be doing it using the [labelImg library](https://pypi.org/project/labelImg/). To download this library along with its dependencies, go to [THIS LINK](https://github.com/tzutalin/labelImg).

Once you have the labelImg library downloaded on your PC, run labelImg.py. Select the *JPEGImages* directory by clicking on *Open Dir* and change the save directory to *Annotations* by clicking on *Change Save Dir*. Now all you need to do is draw rectangles around the object you are planning to detect. Click on *Create RectBox* and you will get a cursor to label the objects. After drawing a rectangle around an object, give a name for the label and save it, so that the annotation gets saved as an .xml file in the *Annotations* folder.
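labelImg saves its annotations in the Pascal VOC XML format. As a sketch of what one such file contains and how to read a box back with the standard library — the file name, label, and coordinates below are made-up illustrations, not values from this repository:

```python
import xml.etree.ElementTree as ET

# A made-up annotation in the Pascal VOC style that labelImg produces.
annotation = """
<annotation>
    <filename>img_001.jpg</filename>
    <object>
        <name>ue_roll</name>
        <bndbox>
            <xmin>120</xmin><ymin>80</ymin>
            <xmax>340</xmax><ymax>260</ymax>
        </bndbox>
    </object>
</annotation>
"""

root = ET.fromstring(annotation)
for obj in root.iter("object"):
    label = obj.find("name").text            # the name you typed in labelImg
    box = obj.find("bndbox")
    xmin = int(box.find("xmin").text)        # pixel coordinates of the box
    print(label, xmin)  # ue_roll 120
```

The `<name>` element is exactly the label you type in labelImg, which is why it must later match your label map.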
Once you have cloned this repository, change your present working directory to models/research/ and add it to your Python path. If you want to add it permanently, you will have to make the change in your .bashrc file, or you can add it temporarily for the current session using the following command:
```
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
```
You also need to run the following command in order to get rid of the *string_int_label_map_pb2* issue (more details [HERE](https://github.com/tensorflow/models/issues/1595)).

Now your environment is all set to use the Tensorflow object detection API.
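If you prefer to set the path from inside Python rather than the shell, a sketch of the equivalent of the `export` command above — the `models/research` location is an assumption, so point it at your own checkout:

```python
import os
import sys

# Hypothetical location of your tensorflow/models checkout.
research = os.path.join("models", "research")

# Equivalent of: export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
for path in (research, os.path.join(research, "slim")):
    if path not in sys.path:
        sys.path.append(path)

print(research in sys.path)  # True
```

Note that, unlike the `.bashrc` change, this only lasts for the current Python process.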
#### Convert the data to Tensorflow record format

In order to use the Tensorflow API, you need to feed data in the Tensorflow record format. Thankfully, Tensorflow provides a Python script to convert a Pascal VOC format dataset to the Tensorflow record format. The path to the script is given below:
Now you have two options: either follow the Pascal VOC dataset format or modify the Tensorflow script as per your need. I modified the script and placed it in this repository inside the folder named *extra*. All you need to do is take this script and replace the original script with it. If you do so, you don't need to follow any specific format.

After all this, one last thing is still remaining before we get our Tensorflow record file. You need to create a file for the label map, in this repo it is *label.pbtxt*, with a dictionary of the label and the id of the objects. Check the *label.pbtxt* given in this repository to understand the format; it's pretty simple (Note: the name of the label should be the same as what you gave while labeling the object using labelImg). Now it's time to create the record file. From models/research as the present working directory, run the following command to create the Tensorflow record:
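For reference, a label map is just a plain-text list of `item` entries mapping each label name to an integer id starting at 1. A minimal single-class sketch, written out from Python — the label name `ue_roll` is an assumption based on the speaker this README detects, so use whatever names you gave in labelImg:

```python
# A minimal, single-class label map in the label.pbtxt format.
# The label name 'ue_roll' is an assumption; match your labelImg labels.
label_map = """item {
  id: 1
  name: 'ue_roll'
}
"""

with open("label.pbtxt", "w") as f:
    f.write(label_map)
```

For more classes, add further `item { ... }` blocks with consecutive ids (2, 3, ...).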
Now that we have the data in the right format to feed, we can go ahead with training our model. The first thing you need to do is select the pre-trained model you would like to use. You can check and download a pre-trained model from the [Tensorflow detection model zoo Github page](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md). Once downloaded, extract all the files to the folder you created for saving the pre-trained model files. Next, you need to copy *models/research/sample/configs/<your_model_name.config>* and paste it in your project repo. You need to configure 5 paths in this file. Just open the file, search for PATH_TO_BE_CONFIGURED, and replace it with the required path. I used a pre-trained Faster RCNN trained on the COCO dataset, and I have added the modified config file (with PATH_TO_BE_CONFIGURED as a comment above the lines that have been modified) for it in this repo. You can also play with the other hyperparameters if you want. Now you are all set to train your model; just run the following command with models/research as the present working directory:
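Filling in the PATH_TO_BE_CONFIGURED placeholders can also be scripted. A sketch under the assumption that the config file uses that literal placeholder string; the two config lines and the replacement paths here are illustrative stand-ins, not the full five-path config:

```python
# Illustrative fragment of a config with unfilled placeholders.
config_text = """fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED"
label_map_path: "PATH_TO_BE_CONFIGURED"
"""

# Stand-in paths; point these at your own files, in the order the
# placeholders appear in the config.
paths = ["pre_trained_models/model.ckpt", "dataset/label.pbtxt"]

for path in paths:
    # Replace one placeholder at a time, first occurrence first.
    config_text = config_text.replace("PATH_TO_BE_CONFIGURED", path, 1)

print(config_text)
```

Replacing one occurrence per path keeps the order of the placeholders aligned with the order of your replacement list.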
Let it train until the loss goes below 0.1, or even lower. Once you see that the loss is as low as you want, give a keyboard interrupt. Checkpoints will be saved in the CP folder. Now it's time to generate the inference graph from the saved checkpoints:
```
python object_detection/export_inference_graph.py --input_type=image_tensor --pipeline_config_path=<path_to_config_file> --trained_checkpoint_prefix=<path to saved checkpoint> --output_directory=<path_to_the_folder_for_saving_inference_graph>
```
**Bonus: If you want to train your model using Google Colab, check out the *train.ipynb* file.**
#### Test the trained model
Finally, it's time to check the result of all the hard work you did. All you need to do is copy model/research/object_detection/object_detection_tutorial.ipynb and modify it to work with your inference graph. A modified file is already given as eval.ipynb in this repo; you just need to change the path, the number of classes, and the number of images you have given as test images. Below is the result of the model trained for detecting the "UE Roll" blue Bluetooth speaker.