## What does digital shelf monitoring actually detect?

Digital shelf monitoring systems use computer vision to capture and analyse images of retail shelves, captured by fixed cameras, mobile robots, or staff-operated devices. The systems detect three categories of events: out-of-stock conditions (empty shelf positions), planogram compliance violations (products in the wrong positions), and price tag discrepancies (the displayed price does not match the system price).

Each detection category operates at a different accuracy level because the visual recognition challenge differs. Out-of-stock detection (identifying empty shelf space) achieves 90–95% accuracy in well-lit environments with clear shelf structure. Planogram compliance (identifying specific products and their positions) achieves 80–90% accuracy, limited by visual similarity between products (different SKUs in similar packaging) and occlusion (products hidden behind front-row items). Price tag detection (reading small text on shelf labels) achieves 85–92% accuracy, limited by label condition, lighting angle, and camera resolution.

## Where does accuracy drop?

| Condition | Out-of-Stock Impact | Planogram Impact | Price Tag Impact |
| --- | --- | --- | --- |
| Low/uneven lighting | –5% accuracy | –10% accuracy | –15% accuracy |
| Reflective packaging | Minimal | –8% accuracy | N/A |
| Crowded shelves (no gaps) | False positives increase | Occlusion increases | Labels hidden |
| Camera angle >30° off-axis | –3% accuracy | –12% accuracy | –20% accuracy |
| Damaged/missing shelf labels | N/A | N/A | Detection fails |

The single largest source of error in our deployments is the gap between controlled test environments and real store conditions. Shelf monitoring systems trained and validated in a laboratory achieve 95%+ accuracy. Deployed in a store with variable lighting, customer traffic, partial product facings, and seasonal display changes, accuracy drops by 5–15% depending on the detection category.
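As a rough illustration, the baseline accuracy figures and per-condition penalties above can be combined into a simple estimator. The function name, the use of range midpoints, and the assumption that penalties add linearly are ours, not how any production system actually models degradation:

```python
# Illustrative only: baseline accuracies are midpoints of the quoted
# ranges, and penalties are the per-condition figures from the table.
# Real systems degrade non-linearly; this is a back-of-envelope check.

BASELINE = {
    "out_of_stock": 0.925,  # 90-95%
    "planogram": 0.85,      # 80-90%
    "price_tag": 0.885,     # 85-92%
}

PENALTY = {  # accuracy points lost when the condition is present
    "low_lighting": {"out_of_stock": 0.05, "planogram": 0.10, "price_tag": 0.15},
    "off_axis_30":  {"out_of_stock": 0.03, "planogram": 0.12, "price_tag": 0.20},
    "reflective":   {"out_of_stock": 0.00, "planogram": 0.08, "price_tag": 0.00},
}

def expected_accuracy(category: str, conditions: list[str]) -> float:
    """Baseline accuracy minus the quoted penalty for each adverse condition."""
    acc = BASELINE[category]
    for condition in conditions:
        acc -= PENALTY.get(condition, {}).get(category, 0.0)
    return max(acc, 0.0)

# Example: planogram checks under low lighting with an off-axis camera
print(round(expected_accuracy("planogram", ["low_lighting", "off_axis_30"]), 2))  # 0.63
```

The example reflects the pattern in the table: planogram and price tag detection degrade much faster than out-of-stock detection under the same store conditions.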
For the broader principles of building observable CV pipelines that maintain accuracy in deployment, our guide to CV pipeline observability for retail covers the monitoring architecture.

## How do you build a useful shelf monitoring system?

The technical architecture for shelf monitoring: edge cameras capture images at scheduled intervals (every 15–60 minutes, or triggered by motion detection). Images are processed either on-edge (using embedded GPU devices such as the NVIDIA Jetson) or transmitted to a central server for batch processing. Detection results are integrated with the retailer's inventory management system to generate alerts for store staff.

Our recommendation for retailers evaluating shelf monitoring: start with out-of-stock detection only. It is the highest-accuracy detection category and the highest-value use case, because out-of-stocks directly reduce revenue. Planogram compliance and price verification can be added incrementally once the camera infrastructure and operational workflows are established.

The ROI calculation for shelf monitoring depends on store size, SKU count, and current out-of-stock rate. Industry data indicates average out-of-stock rates of 5–8% in grocery retail, with each out-of-stock event costing $50–$150 in lost daily sales for the affected SKU. A store with 10,000 SKUs and a 6% out-of-stock rate has approximately 600 out-of-stock events at any given time. Reducing the out-of-stock rate by 2 percentage points through faster detection and response recovers $10K–$30K in monthly revenue, typically exceeding the monitoring system's cost within 6–12 months.

## How do you deploy shelf monitoring cameras effectively?

Camera placement is the most impactful design decision in a shelf monitoring system, more impactful than model selection or hardware specification. Incorrect camera placement produces images that even the best model cannot analyse accurately.
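The ROI arithmetic from the previous section, which determines how much camera infrastructure a store can justify, can be sketched in a few lines. The function name is ours, and the recovery model (avoided concurrent out-of-stock events times the per-event daily loss) is our reading of the worked figures above, not an industry formula:

```python
def oos_recovery_estimate(sku_count: int, oos_rate: float, rate_reduction: float,
                          daily_loss_low: float = 50, daily_loss_high: float = 150):
    """Rough monthly recovery from reducing the out-of-stock rate.

    Follows the worked example above: 10,000 SKUs at a 6% rate gives
    ~600 concurrent out-of-stock events; a 2-point reduction removes
    ~200 of them. Multiplying by the quoted $50-$150 per-event daily
    loss reproduces the $10K-$30K monthly figure; that mapping is an
    assumption read off the article's numbers, not a general rule.
    """
    concurrent_events = sku_count * oos_rate       # events present at any time
    avoided_events = sku_count * rate_reduction    # events removed by faster response
    return (concurrent_events,
            avoided_events * daily_loss_low,
            avoided_events * daily_loss_high)

events, low, high = oos_recovery_estimate(10_000, 0.06, 0.02)
print(events, low, high)  # 600.0 10000.0 30000.0
```

Running the same estimate with a store's own SKU count and measured out-of-stock rate is a quick first pass at whether monitoring will pay for itself in the quoted 6–12 month window.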
The placement principles: cameras should be positioned perpendicular to the shelf face, at a distance that captures the full shelf section at a resolution where individual product labels are legible. For standard retail shelving (1.2m-wide sections, 1.8m tall), this means cameras mounted 1.5–2.5m from the shelf face, angled to cover 2–3 shelf sections with minimal perspective distortion.

Fixed ceiling-mounted cameras provide continuous monitoring but require one camera per 2–3 shelf sections, which adds up to a large number of cameras for a full store. Mobile robot platforms (shelf-scanning robots) reduce the camera count to 1–2 per robot but introduce scheduling complexity and coverage gaps between scan cycles. Staff-operated devices (smartphone- or tablet-based capture) have the lowest infrastructure cost but the highest operational cost: staff time is required for every scan. We have found that staff-operated scanning achieves useful results during the initial evaluation phase (proving the value of shelf monitoring before investing in fixed infrastructure) but is not sustainable for continuous monitoring in stores with more than 50 shelf sections.

For retailers implementing shelf monitoring for the first time, our recommended deployment sequence is:

1. Staff-operated capture for 4–6 weeks to validate out-of-stock detection value and train the model on store-specific imagery.
2. Fixed cameras in the highest-value sections (categories with the highest out-of-stock cost).
3. Expansion to full-store coverage based on measured ROI from the initial sections.

Integration with the inventory management system is critical for operational value. Detection alerts that appear only in the shelf monitoring dashboard are frequently ignored by store staff. Alerts that appear in the existing task management system (the tool staff already check and respond to) achieve 3–5× higher response rates.
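To make the fixed-camera trade-off discussed above concrete: at the quoted coverage of 2–3 shelf sections per camera, the camera count for a given store is a one-line calculation. The function name and the 120-section example store are ours:

```python
import math

def fixed_cameras_needed(shelf_sections: int, sections_per_camera: int = 2) -> int:
    """Fixed-camera count at the quoted coverage of 2-3 sections per camera.

    Defaults to the conservative end of the range (2 sections per
    camera); pass sections_per_camera=3 for the optimistic end.
    """
    return math.ceil(shelf_sections / sections_per_camera)

# A hypothetical mid-size store with 120 shelf sections:
print(fixed_cameras_needed(120))     # 60 cameras at 2 sections each
print(fixed_cameras_needed(120, 3))  # 40 cameras at 3 sections each
```

Numbers like these are why we recommend starting with staff-operated capture and limiting fixed cameras to the highest-value sections until the ROI is measured.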
We design integrations that push detection alerts into the retailer’s existing workflow tools rather than requiring staff to monitor a separate system.
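A minimal sketch of that integration pattern: a raw detection event is mapped into a task for the workflow tool staff already check, rather than surfaced in a separate dashboard. All field names, the priority rule, and the example SKU are illustrative assumptions, not a specific retailer's API:

```python
import json
from datetime import datetime, timezone

def shelf_alert_to_task(detection: dict) -> dict:
    """Map a raw shelf-monitoring detection to a staff workflow task.

    Field names are hypothetical; the point is that the alert lands
    in the task system staff already respond to.
    """
    return {
        "title": f"Restock check: {detection['sku']} (aisle {detection['aisle']})",
        "priority": "high" if detection["type"] == "out_of_stock" else "normal",
        "source": "shelf-monitoring",
        "created_at": datetime.now(timezone.utc).isoformat(),
        "evidence_image": detection["image_url"],
    }

task = shelf_alert_to_task({
    "type": "out_of_stock",
    "sku": "4011-BANANAS",
    "aisle": "12B",
    "image_url": "https://example.invalid/frames/12b-0930.jpg",
})
print(json.dumps(task, indent=2))
# In production this payload would be POSTed to the task system's
# intake webhook, so the alert appears alongside staff's other tasks.
```

Keeping the mapping layer this thin is deliberate: the detection pipeline stays unchanged, and only the adapter varies per retailer's task management tool.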