# Model selection is a deployment decision, not a benchmark decision

Selecting an object detection model for production requires evaluating performance under your deployment constraints — hardware, latency budget, classes of interest, expected input distribution — not selecting the model with the highest COCO mAP on the leaderboard. The benchmark score is a useful starting signal, not a decision criterion.

This article covers the practical selection process: the main architecture families, their real-world tradeoffs, how to evaluate mAP vs latency for your specific hardware, and the deployment considerations that often matter more than benchmark performance. For video anomaly detection applications that extend beyond standard detection, see production video anomaly detection with generative approaches.

## Object detection architecture families

We have found that three architecture families cover most production deployment scenarios:

- **Single-stage detectors** (YOLO family, FCOS, CenterNet): process the image in a single forward pass to produce detections. Fast inference; lower accuracy on small objects; well-optimised for deployment. YOLO variants (YOLOv8, YOLOv9, YOLO11) dominate production deployments that require real-time or near-real-time inference.
- **Two-stage detectors** (Faster R-CNN, Cascade R-CNN): first generate region proposals, then classify and refine each proposal. Higher accuracy, especially on small and occluded objects; slower inference (2–5× slower than single-stage at equivalent backbone size). Less common in edge deployments; used in cloud inference pipelines where throughput is more important than per-image latency.
- **Detection transformers** (DETR, RT-DETR, DINO): use transformer attention mechanisms for detection without NMS post-processing. DETR-family models initially had high latency; RT-DETR and DINO have closed the gap with YOLO for many scenarios. Strong performance on complex scenes with many interacting objects.

## Performance comparison

Benchmark performance on COCO (80-class detection):

| Model | COCO mAP50-95 | Latency (A100 GPU) | Params | Best For |
|---|---|---|---|---|
| YOLOv8n | 37.3 | ~1.5ms | 3.2M | Edge, real-time, resource-constrained |
| YOLOv8m | 50.2 | ~5.1ms | 25.9M | Balanced speed/accuracy |
| YOLOv8x | 53.9 | ~13.2ms | 68.2M | Accuracy priority, server inference |
| YOLOv9c | 53.0 | ~6.7ms | 25.3M | Efficient high-accuracy |
| RT-DETR-L | 53.0 | ~9.1ms | 32.9M | Transformer baseline, complex scenes |
| Faster R-CNN R-50 | 42.0 | ~40ms | 41.8M | Cloud, high accuracy, batch processing |
| DINO-4scale | 49.0 | ~50ms | 47.0M | High accuracy, non-real-time |

These are A100 GPU benchmarks with FP32 precision. On-device performance varies significantly by hardware — see the deployment section for embedded performance.

In our experience, YOLOv8m or YOLOv9c is the practical starting point for most production detection deployments: strong accuracy, a well-supported inference stack (TensorRT, ONNX, CoreML), active maintenance, and a large community for troubleshooting.

## mAP vs latency: making the tradeoff

COCO mAP is a useful comparison tool but has limitations as a production metric:

- COCO has 80 classes; your task may have 2–5 classes. Performance on your specific classes may not reflect the aggregate mAP ranking.
- COCO contains objects at various scales; if your task is single-scale (e.g., detecting vehicles on a highway), small-object performance differences don't matter.
- mAP at IoU threshold 0.5 (mAP50) reflects detection quality at a single overlap criterion; mAP50-95 averages across overlap thresholds. For coarse detection (presence/absence, rough location), mAP50 is sufficient; for precise segmentation or measurement, mAP50-95 matters.

The latency-accuracy tradeoff should be evaluated on your hardware with your classes, as in the sketch after this list:

1. Select 3–5 candidate models spanning a size/speed range
2. Fine-tune each on your training data (or evaluate zero-shot if applicable)
3. Measure inference latency on your target hardware at the batch size you will use in production
4. Measure accuracy on your held-out test set (not COCO) using the metrics that matter for your application (detection rate, false positive rate, localisation accuracy)
5. Select the model that meets your latency budget with the highest accuracy on your task
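A minimal sketch of step 3, assuming the `ultralytics` package (`pip install ultralytics`); the candidate weight files and the sample image are placeholders for your own models and deployment data. Warm-up iterations are discarded, and the median and p95 are reported because single-run timings on a busy device are noisy:

```python
import time

import numpy as np
from ultralytics import YOLO  # pip install ultralytics

CANDIDATES = ["yolov8n.pt", "yolov8m.pt", "yolov8x.pt"]  # span a size/speed range
IMAGE = "sample_frame.jpg"  # a representative image from your deployment distribution

for weights in CANDIDATES:
    model = YOLO(weights)
    # Warm-up: the first calls pay for model load, CUDA context creation,
    # and kernel autotuning, and would skew the measurement
    for _ in range(10):
        model.predict(IMAGE, imgsz=640, verbose=False)
    # Timed runs: report median and p95 rather than a single number,
    # since scheduler jitter makes individual timings noisy
    timings_ms = []
    for _ in range(100):
        start = time.perf_counter()
        model.predict(IMAGE, imgsz=640, verbose=False)
        timings_ms.append((time.perf_counter() - start) * 1000)
    print(f"{weights}: median {np.median(timings_ms):.1f} ms, "
          f"p95 {np.percentile(timings_ms, 95):.1f} ms")
```

Run this on the actual target device: the A100 numbers in the table above do not predict Jetson or CPU behaviour.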
## Edge vs cloud deployment considerations

The deployment context constrains the model selection as much as the accuracy requirements do.

### Edge deployment (embedded, NVIDIA Jetson, Coral TPU, Hailo)

Edge inference has hard constraints: memory budget, thermal envelope, power budget, and often a requirement for INT8 quantisation. Key considerations:

- **Memory**: YOLOv8n fits in under 10MB; YOLOv8x requires 130MB+, which matters for devices with limited RAM.
- **INT8 quantisation**: most edge accelerators (Coral, Hailo, TensorRT) require or strongly prefer INT8-quantised models. Quantisation accuracy loss on object detection is typically 0.5–1.5 mAP points with proper calibration.
- **ONNX export**: export and validate ONNX before committing to a model for edge deployment (see the sketch at the end of this subsection). Some model components (certain attention mechanisms, dynamic operations) have limited ONNX/TensorRT support.
- **TensorRT optimisation**: on NVIDIA Jetson, TensorRT typically provides a 3–5× throughput improvement over native PyTorch for YOLOv8 models.

Approximate NVIDIA Jetson Orin Nano inference performance (INT8):

- YOLOv8n: ~45–60 FPS at 640px input
- YOLOv8m: ~15–20 FPS at 640px input
- YOLOv8x: ~5–8 FPS at 640px input
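A hedged sketch of that export-and-verify step, again assuming the `ultralytics` package (with `onnx` and `onnxruntime` installed); the weights file, the image path, and the 1-pixel tolerance are illustrative choices, not fixed requirements:

```python
import numpy as np
from ultralytics import YOLO  # onnx and onnxruntime must also be installed

# Export the candidate model to ONNX; export() returns the path of the new file.
# "yolov8m.pt" and the image below are placeholders for your own artefacts.
pt_model = YOLO("yolov8m.pt")
onnx_path = pt_model.export(format="onnx", imgsz=640)

# Ultralytics can run the exported file directly (via onnxruntime), which lets
# us compare detections from the same pre- and post-processing pipeline.
onnx_model = YOLO(onnx_path)

image = "validation_frame.jpg"
pt_boxes = pt_model.predict(image, verbose=False)[0].boxes
onnx_boxes = onnx_model.predict(image, verbose=False)[0].boxes

# Same detection count, and box coordinates within a small pixel tolerance
assert len(pt_boxes) == len(onnx_boxes), "detection count changed after export"
assert np.allclose(pt_boxes.xyxy.cpu().numpy(),
                   onnx_boxes.xyxy.cpu().numpy(), atol=1.0), "box drift after export"
print("ONNX outputs match native inference")
```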
### Cloud/server deployment

Cloud deployment has fewer hard constraints but requires attention to throughput and cost:

- **Batching**: server-side detection should use batch inference for throughput efficiency. YOLOv8m at batch size 8 on an A100 achieves ~250 images/second.
- **GPU cost**: model size determines the GPU tier required. YOLOv8n runs efficiently on a T4; YOLOv8x requires an A10 or A100 for production throughput.
- **Latency vs throughput**: cloud inference for real-time applications (live video) requires dedicated GPU allocation; batched cloud inference for offline analytics can use spot/preemptible instances.

## Pre-deployment model validation

- Model evaluated on a held-out test set from the deployment distribution (not benchmark datasets)
- Detection rate measured per class — aggregate mAP may mask poor performance on rare classes
- Confidence threshold calibrated to achieve the target precision-recall operating point
- Inference latency measured on target hardware at production batch size
- INT8 quantisation accuracy validated if edge deployment requires it
- ONNX export tested and outputs verified to match native inference
- False positive rate measured on negative examples from the deployment environment
- Model handles input at deployment resolution without additional resizing pipeline issues

## Post-deployment monitoring

- Confidence score distribution monitored over time (distribution shift detection)
- Detection rate sampled and validated against human annotations periodically
- Latency monitored in production (model performance degrades under load)
- Retraining trigger defined: what event or metric value initiates retraining?

## What are the common selection mistakes?

- **Selecting on COCO mAP without evaluating on the actual task**: COCO rankings do not transfer to domain-specific tasks. A model ranked third on COCO may be best for your specific class set and image distribution.
- **Ignoring deployment hardware until after model selection**: selecting YOLOv8x and then discovering it doesn't meet latency requirements on the target Jetson Nano means starting the selection process again.
- **Not testing confidence threshold calibration**: the default confidence threshold (0.25 in YOLOv8) is not calibrated for production. The threshold needs to be set based on the precision-recall requirement of your application, on your validation set.
- **Neglecting NMS tuning**: the Non-Maximum Suppression (NMS) IoU threshold and the confidence threshold interact. Tuning only confidence without considering NMS IoU can leave duplicate detections at high-recall settings, which inflates false positive counts in dense scenes. A combined sweep is sketched below.
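A minimal sketch of that combined sweep, assuming an `ultralytics` fine-tuned model; the weights path, `data.yaml`, and the threshold grids are placeholders for your own files and search range:

```python
from ultralytics import YOLO

# Fine-tuned weights and dataset config are placeholders for your own files
model = YOLO("runs/detect/train/weights/best.pt")

# Step 1: sweep the confidence threshold at a fixed NMS IoU threshold,
# scoring against YOUR validation split rather than COCO
for conf in (0.10, 0.25, 0.40, 0.55, 0.70):
    m = model.val(data="data.yaml", conf=conf, iou=0.7, verbose=False)
    print(f"conf={conf:.2f}  precision={m.box.mp:.3f}  recall={m.box.mr:.3f}")

# Step 2: hold the chosen confidence and sweep the NMS IoU threshold.
# A higher IoU threshold suppresses less, leaving duplicate boxes in dense
# scenes; a lower one suppresses more, merging adjacent true objects.
for nms_iou in (0.45, 0.60, 0.75):
    m = model.val(data="data.yaml", conf=0.40, iou=nms_iou, verbose=False)
    print(f"iou={nms_iou:.2f}  precision={m.box.mp:.3f}  recall={m.box.mr:.3f}")
```

Fix the chosen confidence and IoU values explicitly in your deployment configuration rather than relying on library defaults.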