Two approaches, one inspection station
A manufacturing quality engineer evaluating inspection technology faces a choice that has real production consequences. Traditional machine vision — rule-based, hardware-specific, configured with explicit parameters for illumination, geometry, and thresholds — has been the standard for decades. AI-based computer vision — learned from data, adaptable to variation, capable of detecting defects that resist explicit rule definition — is the alternative. Both work. Both fail. The conditions under which each fails are different, and choosing the wrong approach for the wrong conditions produces either a system that is too brittle (machine vision applied to high-variation environments) or a system that is too opaque (computer vision applied to contexts that require deterministic auditability).
What does machine vision do well — and where does it break?
According to Markets and Markets (2024), the machine vision market is valued at approximately $14.3 billion, while the broader AI-based computer vision market exceeds $20 billion. Industry surveys by AIA (Association for Advancing Automation) report that 67% of manufacturers planning new inspection systems evaluate both rule-based and AI-based approaches.
Machine vision systems operate on explicit rules. A camera captures an image. A lighting configuration optimised for the specific inspection task illuminates the target. Image processing algorithms — edge detection, blob analysis, template matching, geometric measurement — extract features defined by the system integrator. A pass/fail decision is made against thresholds calibrated during setup.
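The pipeline above can be sketched in a few lines. This is a minimal toy, not a production system: the "frame" is a synthetic numpy array standing in for a camera capture, and every threshold and area limit is an illustrative value of the kind an integrator would calibrate during commissioning.

```python
import numpy as np

# Toy stand-in for a captured grayscale frame: a bright part on a dark
# background, with one dark blemish. All values are illustrative.
frame = np.full((64, 64), 40, dtype=np.uint8)   # background
frame[16:48, 16:48] = 200                        # the part
frame[30:34, 30:34] = 60                         # a defect on the part

def inspect(frame, part_thresh=128, min_part_area=900, max_defect_area=5):
    """Rule-based pass/fail: threshold, measure areas, compare to limits."""
    part_mask = frame > part_thresh              # segmentation by fixed threshold
    part_area = int(part_mask.sum())             # crude "blob analysis" by pixel count
    # Defect = dark pixels inside the part's bounding box
    ys, xs = np.where(part_mask)
    roi = frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    defect_area = int((roi <= part_thresh).sum())
    passed = part_area >= min_part_area and defect_area <= max_defect_area
    return passed, part_area, defect_area

passed, part_area, defect_area = inspect(frame)
print(passed, part_area, defect_area)
```

Note that every decision is traceable to a named parameter — which is exactly the auditability property discussed below, and exactly why an unanticipated lighting shift that moves pixel values across `part_thresh` silently invalidates the whole chain.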
The strengths of this approach are real. According to the AIA, traditional machine vision systems have been deployed in over 2 million industrial inspection stations globally. The system is deterministic (the same input always produces the same output), auditable (every decision step can be traced to a configured parameter), and predictable (performance characteristics are known from commissioning). In regulatory environments — pharmaceutical manufacturing, aerospace, automotive safety-critical components — deterministic auditability is not a convenience, it is a compliance requirement.
The failure mode is equally clear: machine vision breaks when the inspection conditions vary beyond the configured parameters. A new product variant with different geometry requires reconfiguration. A lighting change (bulb degradation, ambient light intrusion, reflective surface variation) shifts the feature extraction and invalidates the calibrated thresholds. A defect type that does not match the configured rule set passes through undetected. Every source of variation that was not anticipated during system setup is a potential blind spot.
We see this pattern particularly in mixed-product manufacturing lines, where the inspection station must handle multiple product variants without manual reconfiguration for each changeover. Machine vision systems that perform excellently on a single product type accumulate complexity — and fragility — as the number of variants increases.
What computer vision adds — and what it costs
AI-based computer vision learns inspection criteria from labelled examples rather than from configured rules. A convolutional neural network trained on thousands of labelled defect and non-defect images develops its own feature representations — representations that can generalise across variation that would break a rule-based system.
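The contrast with configured rules can be made concrete without a full CNN. The sketch below uses logistic regression trained by gradient descent as a deliberately simplified stand-in for a deep model: the point is only that the decision criteria (the weights) come from labelled examples, not from an integrator's parameters. The synthetic data and all hyperparameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_part(defective):
    """Synthetic 8x8 'image': uniform bright part; defective units get a dark patch."""
    img = np.full((8, 8), 0.8) + rng.normal(0, 0.02, (8, 8))
    if defective:
        r, c = rng.integers(0, 6, size=2)
        img[r:r + 2, c:c + 2] -= 0.5     # dark blemish at a random location
    return img.ravel()

# Labelled training set: the inspection criteria are learned, not configured
X = np.array([make_part(i % 2 == 1) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)

w, b = np.zeros(64), 0.0
for _ in range(2000):                    # plain gradient descent on log-loss
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (pred == y).mean()
print(accuracy)
```

Because the blemish lands at a random position on each unit, no single fixed-window rule covers every case — the learned weights instead respond to darkness anywhere on the part, which is a miniature version of the generalisation-across-variation property described above.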
The strengths: computer vision handles variation that machine vision cannot. Lighting changes, product variant differences, novel defect types that fall within the learned distribution, and subtle defects that resist explicit rule definition (surface texture anomalies, colour gradients, complex shape deformations) are all addressable through data rather than configuration. The system can be adapted to new conditions by retraining rather than reconfiguring — which for complex inspection tasks can be faster and more reliable.
The costs are not trivial. Training data must be collected, labelled, and quality-assured — and annotation inconsistency directly degrades model performance. The model is not inherently auditable: explaining why a specific unit was classified as defective requires explainability techniques (saliency maps, feature attribution) rather than inspecting a configured threshold. Model behaviour can change when retrained, requiring validation processes that traditional machine vision’s static configuration does not need. And performance depends on training data quality in ways that are non-obvious until the model encounters production conditions the training data did not represent. Even so, the payoff can be substantial: a 2024 study by Cognex and ABI Research estimates that AI-based visual inspection systems reduce false rejection rates by up to 40% compared to traditional rule-based machine vision in high-variability manufacturing environments.
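One of the feature-attribution techniques mentioned above, occlusion sensitivity, is simple enough to sketch: patch over each region of the input with nominal material and measure how much the model's defect score falls. The region whose occlusion most reduces the score is the one the model is "looking at". The model here is a hand-written scoring function standing in for a trained network; the image and window size are illustrative assumptions.

```python
import numpy as np

# Hypothetical trained model, stood in for by a fixed function:
# it scores how "defective" an 8x8 image looks (higher = more defective).
def model_score(img):
    return float((0.8 - img).clip(min=0).sum())   # total darkness below nominal

img = np.full((8, 8), 0.8)
img[5:7, 2:4] = 0.3                               # the actual defect

# Occlusion sensitivity: replace each 2x2 window with nominal material and
# record how much the defect score drops. Large drops attribute the decision.
base = model_score(img)
attribution = np.zeros((7, 7))
for r in range(7):
    for c in range(7):
        patched = img.copy()
        patched[r:r + 2, c:c + 2] = 0.8
        attribution[r, c] = base - model_score(patched)

r, c = np.unravel_index(attribution.argmax(), attribution.shape)
print(int(r), int(c))   # top-left corner of the most responsible window
```

This is post-hoc evidence, not a traced rule chain — which is precisely the auditability gap the paragraph above describes: the explanation is computed about the model rather than read off a configured parameter.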
Pharmaceutical visual inspection deployments illustrate both the capability and the cost: CV-based inspection at production speed achieves consistent detection rates that human inspectors cannot sustain, but deployment requires annotation protocols, validation frameworks, and monitoring infrastructure that a rule-based system does not.
The decision framework
The choice between machine vision and computer vision is not a technology preference — it is a production engineering decision driven by specific characteristics of the inspection task.
Defect complexity determines the approach. If the defects can be fully characterised by geometric rules — dimensional tolerances, presence/absence checks, barcode readability, fill-level measurement — machine vision is typically sufficient and carries lower deployment and maintenance complexity. If the defects resist rule-based characterisation — surface texture anomalies, variable-shape contamination, subjective quality assessments that human inspectors currently make based on experience — computer vision is likely necessary.
Environmental variation determines the approach. If the inspection environment is tightly controlled — fixed lighting, single product type, stable camera geometry — machine vision’s rule-based approach performs reliably. If the environment varies — multiple product variants on the same line, lighting conditions that change across shifts, product appearance that varies by lot — computer vision’s learned representations handle the variation more robustly.
Regulatory context determines the approach. In environments where deterministic auditability is a compliance requirement, machine vision’s explicit rule chain is an advantage. Computer vision can meet regulatory requirements, but the validation pathway is more complex — the model’s behaviour must be documented through acceptance criteria, ongoing monitoring, and change control processes that account for the model’s learned (rather than configured) decision logic.
Maintenance capability determines feasibility. Machine vision requires system integrators who understand optics, lighting, and image processing algorithms. Computer vision requires data scientists who understand model training, evaluation, and monitoring. The team that will maintain the system after deployment determines which approach is sustainable — deploying a computer vision system without in-house ML capability creates vendor dependency, while deploying a machine vision system without optics expertise creates the same problem in a different domain.
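The four criteria above can be codified as a first-pass triage. This is an editorial sketch of the framework in this article, not an industry standard: the field names, the question wording, and the tie-breaking order are all assumptions, and a real assessment would weigh these factors rather than branch on booleans.

```python
from dataclasses import dataclass

@dataclass
class InspectionTask:
    rule_definable_defects: bool      # defects fully characterised by geometric rules?
    controlled_environment: bool      # fixed lighting, single variant, stable geometry?
    needs_deterministic_audit: bool   # compliance requires a traceable rule chain?
    has_ml_maintenance_team: bool     # in-house model training/monitoring capability?

def recommend(task: InspectionTask) -> str:
    """First-pass triage following the article's four criteria, in order."""
    if task.rule_definable_defects and task.controlled_environment:
        return "machine vision"
    if task.needs_deterministic_audit and task.rule_definable_defects:
        return "machine vision"
    if not task.has_ml_maintenance_team:
        return "machine vision (or accept vendor dependency for CV)"
    return "computer vision"

print(recommend(InspectionTask(True, True, True, False)))
print(recommend(InspectionTask(False, False, False, True)))
```

The ordering encodes the article's argument: rule-definable defects in a controlled environment never need learned detection, and a missing maintenance capability vetoes computer vision regardless of how well it fits the task.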
The hybrid option
In practice, the optimal inspection architecture for complex manufacturing environments often combines both approaches. Machine vision handles the inspection tasks where rule-based detection is sufficient and deterministic auditability is required — dimensional checks, presence/absence verification, barcode and label validation. Computer vision handles the inspection tasks where learned detection is necessary — surface defect classification, complex contamination detection, aesthetic quality assessment.
This hybrid architecture allows each technology to operate in its strength zone while avoiding the weaknesses of applying either approach universally. The integration point is the inspection station architecture: both systems share the image acquisition infrastructure, and their results are combined in a unified quality decision that feeds the manufacturing execution system.
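The integration point described above — shared image acquisition, one combined verdict to the MES — can be sketched as follows. The subsystem functions here are stubs with made-up results, and every function and field name is an illustrative assumption rather than a real MES interface.

```python
# Sketch of the unified quality decision for a hybrid station: both subsystems
# see the same acquired frame, and the MES receives one combined record.

def mv_checks(frame):
    """Rule-based subsystem: deterministic, auditable pass/fail per check (stubbed)."""
    return {"dimensions": True, "label_present": True}

def cv_checks(frame):
    """Learned subsystem: defect-class probabilities from a trained model (stubbed)."""
    return {"surface_defect": 0.03, "contamination": 0.01}

def unified_decision(frame, cv_reject_threshold=0.5):
    mv = mv_checks(frame)
    cv = cv_checks(frame)
    passed = all(mv.values()) and all(p < cv_reject_threshold for p in cv.values())
    # One record to the manufacturing execution system, with per-subsystem provenance
    return {"pass": passed, "machine_vision": mv, "computer_vision": cv}

print(unified_decision(frame=None)["pass"])
```

Keeping the per-subsystem results in the combined record preserves each technology's strength: the rule-based checks remain individually auditable, while the learned scores carry the probability evidence a reviewer would need for a rejected unit.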
The decision of which tasks to assign to each approach requires the same production engineering assessment: defect characterisation, environmental variation analysis, regulatory requirements mapping, and maintenance capability evaluation. If that assessment has not been done — if the choice between machine vision and computer vision is being made based on technology preference rather than production requirements — a Production CV Readiness Assessment maps each inspection task to the appropriate approach. Our computer vision practice focuses on exactly this production engineering question.