## The gap between demo and production

Face recognition demonstrations typically use controlled conditions: frontal face, adequate lighting, high-resolution close-up camera, cooperative subject. CCTV footage has none of these. Faces appear at distance, at angle, under variable and often poor lighting, partially occluded, and moving. The result is that production face recognition from CCTV footage performs substantially worse than controlled-condition benchmarks, often by a factor that makes operational deployment of real-time recognition impractical except in specific constrained scenarios.

Understanding why production performance degrades, and where it does not, is essential for making sensible decisions about deploying face recognition in a CCTV context. For the production deployment of recognition systems more broadly, see building a production SKU recognition system, which covers similar pipeline architecture challenges.

## What does this mean in practice?

Face recognition algorithms require a minimum face size in pixels to extract reliable embeddings. Below this threshold, the model has insufficient information: accuracy degrades rapidly, and false positive rates rise.

Minimum face size requirements by algorithm and use case:

| Use Case | Minimum Inter-Ocular Distance | Approximate Face Height | Notes |
| --- | --- | --- | --- |
| Detection only (is there a face?) | 10–20 pixels | 30–50 pixels | Reliable detection; no recognition |
| Low-confidence recognition | 30–40 pixels | 70–90 pixels | Recognition possible; high error rates |
| Operational recognition (1:1 verification) | 60–80 pixels | 140–180 pixels | Practical accuracy for controlled scenarios |
| Watchlist matching (1:N search) | 80–120 pixels | 180–280 pixels | Required for acceptable FAR in large galleries |
| High-confidence identification | 120+ pixels | 280+ pixels | Approaches benchmark accuracy levels |

A person standing 5 metres from a standard 1080p camera with a 4mm lens (typical for indoor CCTV) fills approximately 80–120 pixels of face height.
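Those distance figures follow from simple pinhole projection. A minimal sketch, assuming a 1/3-inch 1080p sensor (roughly 2.5 µm pixel pitch) and a 0.25 m physical face height; both constants are illustrative assumptions, not figures from this article:

```python
# Pinhole-projection estimate of on-image face height in pixels.
# Assumed (illustrative) constants -- not from the article:
FACE_HEIGHT_M = 0.25     # rough chin-to-hairline height of an adult face
PIXEL_PITCH_M = 2.5e-6   # ~1/3" sensor at 1920 px across

def face_height_px(distance_m: float, focal_length_mm: float = 4.0) -> float:
    """Projected face height in pixels at a given subject distance."""
    focal_m = focal_length_mm / 1000.0
    # Thin-lens projection: image size = focal length * object size / distance
    return (focal_m * FACE_HEIGHT_M / distance_m) / PIXEL_PITCH_M

for d in (5, 10, 20):
    print(f"{d:>2} m with a 4mm lens: ~{face_height_px(d):.0f} px")
```

With these assumptions the estimate tracks the lower end of the ranges quoted above (about 80 px at 5 m, 40 px at 10 m); a larger sensor or a longer focal length shifts everything upward, which is exactly what dedicated long-range recognition cameras exploit.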
At 10 metres, the projected face height drops to 40–60 pixels, below the threshold for reliable recognition. At 15–20 metres, face recognition is not operationally viable without specialised long-range cameras with telephoto lenses.

The implication: most building CCTV cameras are not positioned for face recognition. They are positioned for scene coverage. Retrofitting recognition onto existing camera infrastructure typically yields poor results because the cameras are too far from subjects.

## Angle and occlusion

Face recognition models are trained predominantly on frontal and near-frontal face images. Performance degrades with yaw (side-to-side rotation) and pitch (up-down tilt):

- Up to ±15° yaw: recognition accuracy close to the frontal baseline
- ±15–30° yaw: moderate accuracy degradation (typically a 10–20% drop in verification accuracy)
- ±30–45° yaw: significant degradation; recognition is unreliable for watchlist matching
- Beyond ±45° yaw (near-profile): recognition not viable with standard models

In building CCTV, people rarely present a frontal face to cameras. Corridor-mounted cameras see the tops of heads; entry cameras may see a mix of frontal and angled views depending on approach geometry. The only camera position that consistently generates usable face data is at the entry point of a controlled access lane, where subjects stop, look forward, and are close enough to meet the face size requirement.

Partial occlusion (glasses, masks, hats, hair over the face) reduces recognition accuracy significantly. Post-pandemic deployments that were designed before widespread mask use faced substantial accuracy degradation when masks became common.

## False positive rates in production

The false positive rate in face recognition (the probability that a genuine non-match is incorrectly identified as a match) is the key operational risk metric. In watchlist applications, where captured faces are matched against a list of persons of interest, false positives mean incorrectly flagging innocent people.
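The face-size and yaw floors discussed above, and the way a per-comparison false accept rate compounds across a watchlist gallery, can be sketched together as a minimal pre-matching pipeline. The `Detection` structure, the threshold values, and the independence assumption in the FAR arithmetic are all illustrative, not a vendor API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    face_height_px: float  # bounding-box height from the face detector
    yaw_deg: float         # head-pose estimate, 0 = frontal

# Floors mirroring the ranges above (tune per deployment).
MIN_FACE_HEIGHT_PX = 180   # watchlist (1:N) matching floor
MAX_ABS_YAW_DEG = 30       # beyond this, verification accuracy drops sharply

def passes_quality_gate(det: Detection) -> bool:
    """Send only adequately sized, near-frontal faces to the matcher."""
    return (det.face_height_px >= MIN_FACE_HEIGHT_PX
            and abs(det.yaw_deg) <= MAX_ABS_YAW_DEG)

def effective_far_1_to_n(far_1to1: float, gallery_size: int) -> float:
    """Chance of at least one false match across a gallery of N identities,
    assuming independent comparisons (an approximation)."""
    return 1.0 - (1.0 - far_1to1) ** gallery_size

detections = [
    Detection(face_height_px=220, yaw_deg=10),  # usable: close, near-frontal
    Detection(face_height_px=90,  yaw_deg=5),   # too small for 1:N matching
    Detection(face_height_px=200, yaw_deg=40),  # too much yaw
]
usable = [d for d in detections if passes_quality_gate(d)]
print(f"{len(usable)} of {len(detections)} detections usable")

# A threshold calibrated to 0.1% FAR per 1:1 comparison compounds in 1:N:
for n in (10, 100, 1000):
    print(f"gallery {n:>4}: effective FAR ~{effective_far_1_to_n(0.001, n):.1%}")

# Daily review burden at 1% FAR over 5,000 matched detections/day:
print(f"~{0.01 * 5000:.0f} false flags/day to review")
```

With these numbers the effective FAR rises from roughly 1% against a 10-person gallery to well over half against 1,000 identities, which is why gallery size belongs in any FAR discussion; the independence assumption is only an approximation, since real galleries contain correlated faces.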
The false positive rate is a function of the matching threshold and the size of the gallery. A threshold calibrated for 0.1% FAR in a 1:1 verification scenario will produce a higher effective FAR in 1:N watchlist matching because there are more opportunities for a false match across the gallery.

Across our deployments, realistic false positive rates in CCTV watchlist matching under operational constraints (variable angles, mixed image quality, diverse subjects) are:

- 1–5% FAR at matching thresholds that achieve an 80% true positive rate
- 0.1–1% FAR at thresholds that achieve a 50–60% true positive rate

In a high-throughput deployment, such as a shopping centre processing thousands of face detections per day, a 1% FAR generates tens of false flags per day. Each false flag requires human review, and the cumulative alert volume quickly becomes operationally unsustainable.

## GDPR and biometric compliance

Face recognition in the EU is subject to GDPR Article 9, which treats biometric data processed for identification purposes as a special category requiring an explicit legal basis.
The lawful basis options that are practically available for CCTV face recognition are limited:

- **Explicit consent**: practically difficult for general surveillance; appropriate for access control with voluntary enrolment
- **Vital interests**: very narrow; not applicable to general security use
- **Substantial public interest (Article 9(2)(g))**: requires a specific national law provision; not a catch-all authorisation
- **Legitimate interests**: contested for surveillance purposes; Data Protection Authorities in France, the UK, and Sweden have ruled against legitimate interests as a basis for mass biometric surveillance

The practical compliance path for CCTV face recognition:

1. Define the specific purpose (watchlist matching for loss prevention, access control for enrolled employees)
2. Conduct a Data Protection Impact Assessment (DPIA) before deployment (mandatory under GDPR Article 35 for systematic processing of biometric data)
3. Establish a specific legal basis for the specific purpose
4. Minimise the biometric data retained (embeddings rather than face images, defined retention periods)
5. Provide required transparency notices

Retailers and building operators in the EU who have deployed face recognition without a completed DPIA and an explicit legal basis have faced enforcement action from national DPAs. This is not a theoretical risk.

## CCTV face recognition compliance checklist

- Specific purpose defined (not “security in general”)
- Legal basis identified and documented for the specific processing purpose
- DPIA completed and filed before deployment
- Transparency notices placed at entry points
- Enrolment process documented (how are subjects added to the watchlist/access list?)
- Retention period defined for biometric embeddings and face images
- Subject rights procedures in place (access, deletion, correction)
- Vendor data processing agreement reviewed for GDPR compliance

## Where CCTV face recognition actually works

Despite the challenges, face recognition from CCTV is operationally viable in specific, constrained scenarios:

- **Controlled access control lanes**: single entry/exit point, cooperative subject, camera positioned at 1–3 metres, frontal orientation enforced by physical design
- **Small gallery matching**: a watchlist of under 100 known individuals in a specific venue context (staff access, VIP recognition in controlled environments)
- **Post-incident investigation**: after-the-fact matching of captured face images against a suspect gallery, with human expert review, not automated real-time alerting
- **High-value asset areas**: small zones with dedicated high-resolution cameras positioned for face-compatible geometry

General deployment of face recognition analytics across building CCTV infrastructure, expecting to identify individuals from standard ceiling-mounted cameras, does not in our experience deliver operational results that justify the cost and compliance burden.