|
|
|
This directory contains PyTorch YOLOv3 software developed by Ultralytics LLC, freely available for redistribution under the GPL-3.0 license. For more information please visit https://www.ultralytics.com.
The https://github.com/ultralytics/yolov3 repo contains inference and training code for YOLOv3 in PyTorch. The code works on Linux, macOS and Windows. Training is done on the COCO dataset by default: https://cocodataset.org/#home. Credit to Joseph Redmon for YOLO: https://pjreddie.com/darknet/yolo/.
Python 3.7 or later with the following packages, installed via `pip3 install -U -r requirements.txt`:

- numpy
- torch >= 1.1.0
- opencv-python
- tqdm
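After installing, a quick environment check (an illustrative snippet, not part of the repo) confirms the key packages import and reports CUDA availability:

```python
import cv2
import numpy as np
import torch

# Verify versions and GPU visibility before training or inference
print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('opencv', cv2.__version__, '| numpy', np.__version__)
```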
Our Jupyter notebook provides quick training, inference and testing examples.
**Start Training:** Run `python3 train.py` to begin training after downloading COCO data with `data/get_coco_dataset.sh`. Each epoch trains on 117,263 images from the COCO train and validation sets, and tests on 5,000 images from the COCO validation set.

**Resume Training:** Run `python3 train.py --resume` to resume training from `weights/last.pt`.

**Plot Training:** Run `from utils import utils; utils.plot_results()` to plot training results from `coco_16img.data` and `coco_64img.data`, two example datasets available in the `data/` folder that train and test on the first 16 and 64 images of the COCO2014-trainval dataset.
`datasets.py` applies random OpenCV-powered (https://opencv.org/) augmentation to the input images in accordance with the following specifications. Augmentation is applied only during training, not during inference. Bounding boxes are automatically tracked and updated with the images. 416 x 416 examples pictured below.
| Augmentation | Description |
|---|---|
| Translation | +/- 10% (vertical and horizontal) |
| Rotation | +/- 5 degrees |
| Shear | +/- 2 degrees (vertical and horizontal) |
| Scale | +/- 10% |
| Reflection | 50% probability (horizontal-only) |
| HSV Saturation | +/- 50% |
| HSV Intensity | +/- 50% |
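For illustration, a minimal sketch of how the HSV and Translation rows above can be implemented with OpenCV. This is a sketch, not the repo's `datasets.py`: the function names, gain defaults, and gray border fill are assumptions, and boxes are assumed to be a NumPy array in pixel xyxy format.

```python
import cv2
import numpy as np

def augment_hsv(img, s_gain=0.5, v_gain=0.5):
    """Scale saturation and intensity of a BGR image by random factors of +/- 50%."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] *= 1 + np.random.uniform(-s_gain, s_gain)  # HSV Saturation +/- 50%
    hsv[..., 2] *= 1 + np.random.uniform(-v_gain, v_gain)  # HSV Intensity +/- 50%
    return cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)

def translate(img, boxes, fraction=0.1):
    """Shift an image +/- 10% vertically and horizontally; move xyxy boxes with it."""
    h, w = img.shape[:2]
    tx = round(np.random.uniform(-fraction, fraction) * w)
    ty = round(np.random.uniform(-fraction, fraction) * h)
    M = np.float32([[1, 0, tx], [0, 1, ty]])  # 2x3 affine translation matrix
    img = cv2.warpAffine(img, M, dsize=(w, h), borderValue=(128, 128, 128))
    boxes = boxes + np.array([tx, ty, tx, ty])  # clipping to image bounds omitted
    return img, boxes
```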
GCP Deep Learning VM: https://cloud.google.com/deep-learning-vm/

- Machine type: n1-standard-8 (8 vCPUs, 30 GB memory)
- CPU platform: Intel Skylake
- GPUs: K80 ($0.20/hr), T4 ($0.35/hr), V100 ($0.83/hr), with CUDA and NVIDIA Apex FP16/32 mixed precision
- Disk: 100 GB SSD
- Dataset: COCO train 2014 (117,263 images)
- Model: yolov3-spp.cfg
| GPUs | batch_size | images/sec | epoch time | epoch cost |
|---|---|---|---|---|
| K80 | 64 (32x2) | 11 | 175 min | $0.58 |
| T4 | 64 (32x2) | 40 | 49 min | $0.29 |
| T4 x2 | 64 (64x1) | 61 | 32 min | $0.36 |
| V100 | 64 (32x2) | 115 | 17 min | $0.24 |
| V100 x2 | 64 (64x1) | 150 | 13 min | $0.36 |
| 2080Ti | 64 (32x2) | 81 | 24 min | - |
| 2080Ti x2 | 64 (64x1) | 140 | 14 min | - |
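The epoch cost column follows directly from the hourly GPU prices listed above: price per GPU × number of GPUs × epoch time in hours. A quick check of the table's numbers (illustrative only):

```python
# epoch cost = $/hr per GPU x number of GPUs x epoch time in hours
prices = {'K80': 0.20, 'T4': 0.35, 'V100': 0.83}  # hourly GCP prices from above

def epoch_cost(gpu, n_gpus, epoch_minutes):
    return prices[gpu] * n_gpus * epoch_minutes / 60

print(round(epoch_cost('K80', 1, 175), 2))   # 0.58
print(round(epoch_cost('T4', 1, 49), 2))     # 0.29
print(round(epoch_cost('V100', 1, 17), 2))   # 0.24
print(round(epoch_cost('V100', 2, 13), 2))   # 0.36
```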
`detect.py` runs inference on a variety of sources:

```bash
python3 detect.py --source ...
```

- Image: `--source file.jpg`
- Video: `--source file.mp4`
- Directory: `--source dir/`
- Webcam: `--source 0`
- RTSP stream: `--source rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa`
- HTTP stream: `--source http://wmccpinetop.axiscam.net/mjpg/video.mjpg`
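Before launching `detect.py`, a video or stream source can be sanity-checked directly with OpenCV (an illustrative snippet, not part of the repo):

```python
import cv2

source = 0  # webcam index; also accepts 'file.mp4' or an RTSP/HTTP URL
cap = cv2.VideoCapture(source)
ok, frame = cap.read()  # ok is False if the source cannot be read
print('readable:', ok, '| frame shape:', None if frame is None else frame.shape)
cap.release()
```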
To run a specific model:

**YOLOv3:** `python3 detect.py --cfg cfg/yolov3.cfg --weights yolov3.weights`

**YOLOv3-tiny:** `python3 detect.py --cfg cfg/yolov3-tiny.cfg --weights yolov3-tiny.weights`

**YOLOv3-SPP:** `python3 detect.py --cfg cfg/yolov3-spp.cfg --weights yolov3-spp.weights`

Download from: https://drive.google.com/open?id=1LezFG5g3BCW6iYaV89B2i64cqEUZD7e0
```bash
$ git clone https://github.com/ultralytics/yolov3 && cd yolov3

# convert darknet cfg/weights to pytorch model
$ python3 -c "from models import *; convert('cfg/yolov3-spp.cfg', 'weights/yolov3-spp.weights')"
Success: converted 'weights/yolov3-spp.weights' to 'converted.pt'

# convert cfg/pytorch model to darknet weights
$ python3 -c "from models import *; convert('cfg/yolov3-spp.cfg', 'weights/yolov3-spp.pt')"
Success: converted 'weights/yolov3-spp.pt' to 'converted.weights'
```

- `test.py --weights weights/yolov3.weights` tests official YOLOv3 weights.
- `test.py --weights weights/last.pt` tests the latest checkpoint.
- mAPs on COCO2014 are computed using pycocotools.
- mAP@0.5 is run at `--nms-thres 0.5`; mAP@0.5...0.95 is run at `--nms-thres 0.7`.
- YOLOv3-SPP ultralytics is `ultralytics68.pt` with `yolov3-spp.cfg`.
- Darknet results are published in https://arxiv.org/abs/1804.02767.
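These thresholds compare boxes by standard intersection-over-union (IoU). A minimal sketch, assuming [x1, y1, x2, y2] box format (not the repo's vectorized implementation):

```python
def box_iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = iw * ih
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter + 1e-16)  # epsilon avoids /0

print(box_iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ≈ 0.143
```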
| Model | Size | COCO mAP @0.5...0.95 | COCO mAP @0.5 |
|---|---|---|---|
| YOLOv3-tiny | 320 | 14.0 | 29.1 |
| YOLOv3 | 320 | 28.7 | 51.8 |
| YOLOv3-SPP | 320 | 30.5 | 52.3 |
| YOLOv3-SPP ultralytics | 320 | 35.4 | 54.3 |
| YOLOv3-tiny | 416 | 16.0 | 33.0 |
| YOLOv3 | 416 | 31.2 | 55.4 |
| YOLOv3-SPP | 416 | 33.9 | 56.9 |
| YOLOv3-SPP ultralytics | 416 | 39.0 | 59.2 |
| YOLOv3-tiny | 512 | 16.6 | 34.9 |
| YOLOv3 | 512 | 32.7 | 57.7 |
| YOLOv3-SPP | 512 | 35.6 | 59.5 |
| YOLOv3-SPP ultralytics | 512 | 40.3 | 60.6 |
| YOLOv3-tiny | 608 | 16.6 | 35.4 |
| YOLOv3 | 608 | 33.1 | 58.2 |
| YOLOv3-SPP | 608 | 37.0 | 60.7 |
| YOLOv3-SPP ultralytics | 608 | 40.9 | 60.9 |
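The AP/AR summary in the log below is produced by pycocotools' COCOeval. A minimal sketch of that evaluation step; the annotation path and the `results.json` filename are assumptions about what `--save-json` writes:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val2014.json')  # ground truth (assumed path)
coco_dt = coco_gt.loadRes('results.json')             # detections (assumed filename)
coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP/AR block shown in the log below
```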
```bash
$ python3 test.py --save-json --img-size 608 --nms-thres 0.7 --weights ultralytics68.pt

Namespace(batch_size=16, cfg='cfg/yolov3-spp.cfg', conf_thres=0.001, data='data/coco.data', device='1', img_size=608, iou_thres=0.5, nms_thres=0.7, save_json=True, weights='ultralytics68.pt')
Using CUDA device0 _CudaDeviceProperties(name='GeForce RTX 2080 Ti', total_memory=11019MB)

      Class    Images   Targets         P         R   mAP@0.5        F1: 100%|██████████| 313/313 [09:46<00:00,  1.09it/s]
        all     5e+03  3.58e+04    0.0481     0.829     0.589    0.0894

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.40882
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.60026
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.44551
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.24343
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.45024
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.51362
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.32644
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.53629
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.59343
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.42207
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.63985
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.70688
```

Issues should be raised directly in the repository. For additional questions or comments, please email Glenn Jocher at glenn.jocher@ultralytics.com or visit us at https://contact.ultralytics.com.






