Official YOLOv7

Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors



Performance

MS COCO

| Model | Test Size | AP^test | AP_50^test | AP_75^test | batch 1 fps | batch 32 average time |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: |
| YOLOv7 | 640 | 51.4% | 69.7% | 55.9% | 161 fps | 2.8 ms |
| YOLOv7-X | 640 | 53.1% | 71.2% | 57.8% | 114 fps | 4.3 ms |
| YOLOv7-W6 | 1280 | 54.9% | 72.6% | 60.1% | 84 fps | 7.6 ms |
| YOLOv7-E6 | 1280 | 56.0% | 73.5% | 61.2% | 56 fps | 12.3 ms |
| YOLOv7-D6 | 1280 | 56.6% | 74.0% | 61.8% | 44 fps | 15.0 ms |
| YOLOv7-E6E | 1280 | 56.8% | 74.4% | 62.1% | 36 fps | 18.7 ms |

Installation

Docker environment (recommended)

# create the docker container; you can increase the shared-memory size if you have more RAM
nvidia-docker run --name yolov7 -it -v your_coco_path/:/coco/ -v your_code_path/:/yolov7 --shm-size=64g nvcr.io/nvidia/pytorch:21.08-py3

# apt install required packages
apt update
apt install -y zip htop screen libgl1-mesa-glx

# pip install required packages
pip install seaborn thop

# go to the code folder
cd /yolov7
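
If you exit the container, you can re-attach later instead of recreating it; a minimal sketch using the --name given in the run command above:

# re-enter the previously created container
docker start -ai yolov7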

Testing

Pretrained weights: yolov7.pt, yolov7x.pt, yolov7-w6.pt, yolov7-e6.pt, yolov7-d6.pt, yolov7-e6e.pt

python test.py --data data/coco.yaml --img 640 --batch 32 --conf 0.001 --iou 0.65 --device 0 --weights yolov7.pt --name yolov7_640_val

You should get results like the following:

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.51206
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.69730
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.55521
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.35247
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.55937
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.66693
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.38453
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.63765
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.68772
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.53766
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.73549
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.83868

To measure accuracy, download the COCO annotations for pycocotools and place the file at ./coco/annotations/instances_val2017.json.
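
A minimal sketch of fetching them, assuming the standard COCO download server and the ./coco layout used above:

# download the official COCO 2017 annotations and extract only the val instances file
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip -j annotations_trainval2017.zip annotations/instances_val2017.json -d ./coco/annotations/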

Training

Data preparation

bash scripts/get_coco.sh
  • Download the MS COCO dataset images (train, val, test) and labels. If you have previously used a different version of YOLO, we strongly recommend deleting the train2017.cache and val2017.cache files and re-downloading the labels (see the sketch below).
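
A minimal sketch of clearing those stale caches, assuming the ./coco layout produced by get_coco.sh:

# remove any label caches left over from a previous YOLO version
find ./coco -name '*.cache' -delete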

Single GPU training

# train p5 models
python train.py --workers 8 --device 0 --batch-size 32 --data data/coco.yaml --img 640 640 --cfg cfg/training/yolov7.yaml --weights '' --name yolov7 --hyp data/hyp.scratch.p5.yaml

# train p6 models
python train_aux.py --workers 8 --device 0 --batch-size 16 --data data/coco.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6.yaml --weights '' --name yolov7-w6 --hyp data/hyp.scratch.p6.yaml
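
If a run is interrupted, train.py can pick up from the most recent checkpoint; a minimal sketch, assuming the default runs/train output layout:

# resume the most recently logged training run
python train.py --resume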

Multiple GPU training

# train p5 models
python -m torch.distributed.launch --nproc_per_node 4 --master_port 9527 train.py --workers 8 --device 0,1,2,3 --sync-bn --batch-size 128 --data data/coco.yaml --img 640 640 --cfg cfg/training/yolov7.yaml --weights '' --name yolov7 --hyp data/hyp.scratch.p5.yaml

# train p6 models
python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train_aux.py --workers 8 --device 0,1,2,3,4,5,6,7 --sync-bn --batch-size 128 --data data/coco.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6.yaml --weights '' --name yolov7-w6 --hyp data/hyp.scratch.p6.yaml

Transfer learning

Pretrained weights: yolov7_training.pt, yolov7x_training.pt, yolov7-w6_training.pt, yolov7-e6_training.pt, yolov7-d6_training.pt, yolov7-e6e_training.pt

Single GPU finetuning for custom dataset

# finetune p5 models
python train.py --workers 8 --device 0 --batch-size 32 --data data/custom.yaml --img 640 640 --cfg cfg/training/yolov7-custom.yaml --weights 'yolov7_training.pt' --name yolov7-custom --hyp data/hyp.scratch.custom.yaml

# finetune p6 models
python train_aux.py --workers 8 --device 0 --batch-size 16 --data data/custom.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6-custom.yaml --weights 'yolov7-w6_training.pt' --name yolov7-w6-custom --hyp data/hyp.scratch.custom.yaml
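
The finetuning commands above also assume a data/custom.yaml describing your dataset. Below is a minimal sketch; the paths and class names are hypothetical placeholders, so adjust nc and names to match your labels:

# write a hypothetical dataset config (train/val paths and classes are placeholders)
cat > data/custom.yaml <<'EOF'
train: ./custom/train/images
val: ./custom/valid/images
nc: 2
names: ['person', 'car']
EOF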

Re-parameterization

See reparameterization.ipynb.

Inference

On video:

python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source yourvideo.mp4

On image:

python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg
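
detect.py also accepts a directory of images or a webcam index as --source, so the same weights can run over a whole folder or a live stream:

# run detection over every image in a folder
python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/
# use webcam 0 as the live source
python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source 0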

Export

PyTorch to ONNX (inline, and inference with OpenVINO™ Torch-ORT)

In-line conversion to ONNX: https://github.com/pytorch/ort/blob/main/torch_ort_inference/docs/usage.md#additional-apis
Inference:

python detect_ort.py --weights /content/yolov7/runs/best.pt --conf 0.25 --img-size 640 --source UWH-6/test/images/DJI_0021_mp4-32_jpg.rf.0d9b746d8896d042b55a14c8303b4f36.jpg

PyTorch to CoreML (and inference on macOS/iOS)

PyTorch to ONNX with NMS (and inference)

python export.py --weights yolov7-tiny.pt --grid --end2end --simplify \
--topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640
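
Before deploying the exported graph, a quick structural check can catch export problems early; a minimal sketch, assuming the onnx Python package is installed:

# validate the exported model's graph and opset
python -c "import onnx; onnx.checker.check_model(onnx.load('yolov7-tiny.onnx')); print('ok')"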

PyTorch to TensorRT with NMS (and inference)

wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt
python export.py --weights ./yolov7-tiny.pt --grid --end2end --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
git clone https://github.com/Linaom1214/tensorrt-python.git
python ./tensorrt-python/export.py -o yolov7-tiny.onnx -e yolov7-tiny-nms.trt -p fp16

PyTorch to TensorRT (another way)


wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt
python export.py --weights yolov7-tiny.pt --grid --include-nms
git clone https://github.com/Linaom1214/tensorrt-python.git
python ./tensorrt-python/export.py -o yolov7-tiny.onnx -e yolov7-tiny-nms.trt -p fp16

# Or use trtexec to convert the ONNX model to a TensorRT engine
/usr/src/tensorrt/bin/trtexec --onnx=yolov7-tiny.onnx --saveEngine=yolov7-tiny-nms.trt --fp16
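
trtexec can also benchmark the serialized engine, which is a quick sanity check of fp16 throughput on the target GPU; this assumes TensorRT's default install path:

# time inference with the generated engine
/usr/src/tensorrt/bin/trtexec --loadEngine=yolov7-tiny-nms.trt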

Tested with: Python 3.7.13, PyTorch 1.12.0+cu113

Pose estimation

Code and weights: yolov7-w6-pose.pt

See keypoint.ipynb.

Instance segmentation

Code and weights: yolov7-mask.pt

See instance.ipynb.

Citation

@article{wang2022yolov7,
title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
journal={arXiv preprint arXiv:2207.02696},
year={2022}
}

Teaser

YOLOv7-semantic & YOLOv7-panoptic & YOLOv7-caption

