

YOLOv5 has been designed to be super easy to get started and simple to learn.

**Figure Notes**

- **COCO AP val** denotes the mAP@0.5:0.95 metric measured on the 5000-image COCO val2017 dataset over various inference sizes from 256 to 1536.
- **GPU Speed** measures average inference time per image on the COCO val2017 dataset using an AWS p3.2xlarge V100 instance at batch size 32.
- **EfficientDet** data from google/automl at batch size 8.
- **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`

**Table Notes**

- All checkpoints are trained to 300 epochs with default settings. Nano and Small models use `hyp.scratch-low.yaml` hyps, all others use `hyp.scratch-high.yaml`.
- **mAP val** values are for single-model single-scale on the COCO val2017 dataset. Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
- **Speed** averaged over COCO val images using an AWS p3.2xlarge instance. Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
- **TTA** (Test-Time Augmentation) includes reflection and scale augmentations. Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`

**Segmentation Checkpoints**

We trained YOLOv5 segmentation models on COCO for 300 epochs at image size 640 using A100 GPUs. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google Colab Pro notebooks for easy reproducibility.

- All checkpoints are trained to 300 epochs with the SGD optimizer (`lr0=0.01`, `weight_decay=5e-5`) at image size 640 and all default settings.
- **Accuracy** values are for single-model single-scale on the COCO dataset. Reproduce by `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt`
- **Speed** averaged over 100 inference images using a Colab Pro A100 High-RAM instance. Values indicate inference speed only (NMS adds about 1 ms per image). Reproduce by `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1`
- **Export** to ONNX at FP32 and TensorRT at FP16 done with `export.py`. Reproduce by `python export.py --weights yolov5s-seg.pt --include engine --device 0 --half`

The sketches below show how these reproduction commands can be driven from Python.
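As a scripted version of the figure-notes study command, the loop below shells out to `val.py` once per P6 checkpoint. A minimal sketch: it assumes execution from the root of a YOLOv5 clone with COCO downloaded and a CUDA GPU available.

```python
import subprocess

# One study run per P6 checkpoint, mirroring the --task study command above.
# Assumes: YOLOv5 repo root as working directory, COCO at the expected path.
for weights in ["yolov5n6.pt", "yolov5s6.pt", "yolov5m6.pt", "yolov5l6.pt", "yolov5x6.pt"]:
    subprocess.run(
        ["python", "val.py", "--task", "study", "--data", "coco.yaml",
         "--iou", "0.7", "--weights", weights],
        check=True,  # stop on the first failed run
    )
```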
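The detection validation command can also be called in-process. This is a sketch, not the documented API: it assumes `val.py`'s `run()` entry point keeps kwarg names that mirror its CLI flags (`conf_thres` for `--conf`, `iou_thres` for `--iou`) and that it is imported from the repo root.

```python
# Run from the YOLOv5 repository root so `val` and its helpers are importable.
import val

# Mirrors: python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65
val.run(
    data="coco.yaml",
    weights="yolov5s.pt",  # example checkpoint; any entry from the table works
    imgsz=640,
    conf_thres=0.001,  # assumed kwarg name for --conf
    iou_thres=0.65,    # assumed kwarg name for --iou
)
```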
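For a quick feel of what the TTA flag does at inference time, the PyTorch Hub API accepts an `augment` argument. A minimal sketch; the image URL is just a sample asset.

```python
import torch

# Load a pretrained detection model from PyTorch Hub (weights download on first use)
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# Test-Time Augmentation: reflection and scale augments, slower inference,
# corresponding to the --augment flag passed to val.py above
results = model("https://ultralytics.com/images/zidane.jpg", augment=True)
results.print()  # summary of detections
```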
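The export note covers two formats; the sketch below drives both from Python with plain subprocess calls. Assumptions: run from the repo root, the `onnx` package installed for the FP32 export, and a CUDA GPU plus TensorRT for the FP16 engine.

```python
import subprocess

# ONNX at FP32 (used for the CPU speed tests above); needs the onnx package.
subprocess.run(
    ["python", "export.py", "--weights", "yolov5s-seg.pt", "--include", "onnx"],
    check=True,
)

# TensorRT engine at FP16 (used for the GPU speed tests); needs CUDA + TensorRT.
subprocess.run(
    ["python", "export.py", "--weights", "yolov5s-seg.pt",
     "--include", "engine", "--device", "0", "--half"],
    check=True,
)
```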
