|
283 | 283 | "cell_type": "markdown", |
284 | 284 | "metadata": {}, |
285 | 285 | "source": [ |
286 | | - "Initialize Data Loaders" |
| 286 | + "### Data Loaders\n", |
| 287 | + "\n", |
| 288 | + "One of our main extensions to vanilla YOLOv3 is the custom data loader we implemented:\n", |
| 289 | + "\n", |
| 290 | + "Each set of training images from a specific sensor/lens/perspective combination is uniformly rescaled so that its landmark size distribution matches that of the camera system on the vehicle. Each training image is then padded if too small, or split into multiple images if too large.\n", |
| 291 | + "\n", |
| 292 | + "<p align=\"center\">\n", |
| 293 | + "<img src=\"https://user-images.githubusercontent.com/22118253/69765465-09e90000-1142-11ea-96b7-370868a0033b.png\" width=\"600\">\n", |
| 294 | + "</p>" |
287 | 295 | ] |
288 | 296 | }, |
289 | 297 | { |
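The rescale/pad/split pipeline described in the added cell can be sketched roughly as follows. This is a minimal illustration, not the repository's actual loader: the function name, the 416-pixel tile size, the target landmark size, and the nearest-neighbour resize are all assumptions made for the example.

```python
import numpy as np

def rescale_pad_or_split(img, landmark_px, target_landmark_px=32, tile=416):
    """Hypothetical sketch: rescale so landmarks match a target pixel size,
    then pad small images or split large ones into tile-sized crops."""
    scale = target_landmark_px / landmark_px
    h, w = img.shape[:2]
    new_h = max(1, int(round(h * scale)))
    new_w = max(1, int(round(w * scale)))
    # Nearest-neighbour resize via index mapping (stand-in for a real resizer).
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    resized = img[rows][:, cols]
    if new_h <= tile and new_w <= tile:
        # Small image: pad up to the network input size.
        out = np.zeros((tile, tile) + img.shape[2:], dtype=img.dtype)
        out[:new_h, :new_w] = resized
        return [out]
    # Large image: split into tile-sized crops, padding the ragged edges.
    tiles = []
    for y in range(0, new_h, tile):
        for x in range(0, new_w, tile):
            crop = np.zeros((tile, tile) + img.shape[2:], dtype=img.dtype)
            part = resized[y:y + tile, x:x + tile]
            crop[:part.shape[0], :part.shape[1]] = part
            tiles.append(crop)
    return tiles
```

With these assumed parameters, an image whose landmarks are already at the target size and that fits inside one tile comes back as a single zero-padded array, while an oversized image is returned as a list of fixed-size crops.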
|
427 | 435 | " break" |
428 | 436 | ] |
429 | 437 | }, |
| 438 | + { |
| 439 | + "cell_type": "markdown", |
| 440 | + "metadata": {}, |
| 441 | + "source": [ |
| 442 | + "Our full dataset accuracy metrics for detecting traffic cones on the racing track:\n", |
| 443 | + "\n", |
| 444 | + "| mAP | Recall | Precision |\n", |
| 445 | + "|----|----|----|\n", |
| 446 | + "| 89.35% | 92.77% | 86.94% |" |
| 447 | + ] |
| 448 | + }, |
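For readers unfamiliar with the table's columns, precision and recall follow the standard detection definitions over true positives, false positives, and false negatives. The sketch below is purely illustrative; the counts in the usage line are made up and are not the evaluation numbers behind the table.

```python
def precision_recall(tp, fp, fn):
    """Standard definitions: precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Made-up counts for illustration only:
p, r = precision_recall(tp=90, fp=10, fn=10)
```

mAP is computed differently (mean area under the per-class precision-recall curve across confidence thresholds) and is not reducible to a single TP/FP/FN triple.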
430 | 449 | { |
431 | 450 | "cell_type": "markdown", |
432 | 451 | "metadata": {}, |
|