## Two questions that sound the same but are not

Verification asks: "Did we build the system correctly?" It confirms that the software meets its design specifications: every documented requirement has a corresponding implementation, and every implementation produces the expected output under test conditions.

Validation asks: "Did we build the correct system?" It confirms that the system, as implemented, meets the user's actual needs and performs its intended function in the production environment with real users and real data.

A system can pass verification and fail validation. A pharmaceutical manufacturing control system may correctly implement every documented requirement (verification passes), yet fail to prevent a specific class of process deviation that the requirements document did not anticipate (validation fails). The two activities are complementary, not interchangeable.

## The practical difference

| Dimension | Verification | Validation |
| --- | --- | --- |
| Question | Does it meet specifications? | Does it meet user needs? |
| Timing | During development | After deployment to production |
| Evidence | Test results against specifications | Performance data in the operational environment |
| Scope | Individual requirements, functions, modules | Entire system in production context |
| FDA reference | Design verification (21 CFR 820) | Process validation (21 CFR 211.100) |
| Failure mode | Specification gap or coding error | Requirements gap or environmental mismatch |

In regulated pharmaceutical environments, both activities produce documented evidence: verification generates test protocols and execution records; validation generates qualification protocols (IQ/OQ/PQ) and summary reports. Regulatory inspectors review both.

## How AI systems change the verification/validation boundary

For traditional deterministic software, verification is relatively straightforward: define inputs, document expected outputs, run tests, compare results. If outputs match expectations, verification passes.

Machine learning models break this pattern. An ML model for pharmaceutical tablet inspection does not have a fixed input-output mapping. Its outputs depend on the model weights (which change with the training data), the input distribution (which varies with production conditions), and the inference environment (which may differ from the development environment).

Verification of an ML model must address:

- **Architecture verification:** Does the model architecture match the design specification?
- **Training verification:** Was the model trained on the specified dataset, with the specified hyperparameters, for the specified number of epochs?
- **Performance verification:** Does the model achieve the specified accuracy, precision, and recall on a held-out test set?

Validation for the same ML model must address:

- **Operational performance:** Does the model maintain its test-set performance when deployed in the production environment with real-time data?
- **Robustness:** Does the model perform acceptably under the range of conditions it will encounter (lighting variation, product variation, equipment aging)?
- **Drift detection:** Will the monitoring system detect when model performance degrades below acceptance thresholds? (A minimal monitoring sketch follows this list.)
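To make the drift-detection point concrete, here is a minimal sketch of a rolling-window performance monitor. The window size, accuracy threshold, and minimum sample count are illustrative assumptions, not values prescribed by any regulation; in a real deployment they would be justified in the validation plan, and the alert would feed the site's quality management system.

```python
from collections import deque

class DriftMonitor:
    """Rolling-window performance monitor for a deployed ML model.

    Window size and thresholds here are illustrative assumptions; in
    practice they would be predefined and approved in the validation plan.
    """

    def __init__(self, window_size: int = 500,
                 accuracy_threshold: float = 0.97,
                 min_samples: int = 100):
        self.results = deque(maxlen=window_size)  # 1 = correct, 0 = incorrect
        self.accuracy_threshold = accuracy_threshold
        self.min_samples = min_samples

    def record(self, prediction: int, reviewed_label: int) -> None:
        """Record one prediction against its human-reviewed ground truth."""
        self.results.append(1 if prediction == reviewed_label else 0)

    def rolling_accuracy(self) -> float | None:
        """Accuracy over the current window, or None if evidence is thin."""
        if len(self.results) < self.min_samples:
            return None
        return sum(self.results) / len(self.results)

    def drift_detected(self) -> bool:
        """True when rolling accuracy falls below the acceptance threshold."""
        accuracy = self.rolling_accuracy()
        return accuracy is not None and accuracy < self.accuracy_threshold
```

In operation, `record` would be fed by periodic human review of sampled predictions, and a `drift_detected()` result of `True` would trigger the revalidation step described in the sequence below.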
The distinction between CSA and full CSV approaches determines how these verification and validation activities are documented and how much effort is proportionate for a given system's risk level.

## Getting the sequence right

Verification before validation is not just a best practice; it is a logical dependency. Validating a system that has not been verified means testing whether the system meets user needs without first confirming that it was built correctly. If validation then fails, you cannot determine whether the failure is a requirements gap (a validation problem) or an implementation error (a verification problem) without going back to verify first.

For AI systems in pharmaceutical manufacturing, the recommended sequence is: verify the training pipeline → verify model architecture and performance → validate in a production-representative environment → deploy with continuous monitoring → revalidate when drift or changes are detected. Each step produces documented evidence that feeds the regulatory submission package.

## How do verification and validation apply differently to AI systems?

For traditional software, verification confirms that each module performs its specified function (unit tests, integration tests), and validation confirms that the complete system meets user needs (user acceptance testing). The distinction is clear because traditional software has deterministic specifications that can be verified individually.

AI systems complicate this distinction. The ML model does not have individual function specifications in the traditional sense; it has training objectives and performance metrics. Verifying that the model was trained correctly (correct data, correct hyperparameters, correct evaluation metrics) is straightforward. Verifying that the model produces correct outputs for specific inputs requires a test dataset with known-correct labels, which is a form of validation, not verification.

Our approach separates the AI system into two validation domains: the deterministic software infrastructure (data pipelines, API endpoints, user interfaces, audit trails) and the ML model (prediction accuracy, consistency, robustness). The software infrastructure undergoes traditional verification and validation. The ML model undergoes performance qualification: a structured evaluation on a representative test dataset that demonstrates the model meets predefined acceptance criteria.

This dual-domain approach satisfies regulatory expectations because inspectors see the familiar validation framework applied to the software components, supplemented by a performance qualification study for the AI-specific component. The performance qualification study includes acceptance criteria defined before testing (not adjusted after seeing results), a test dataset that is independent of the training data, and statistical analysis of performance metrics with documented confidence intervals (a sketch of that analysis follows below). This approach has been accepted in FDA and MHRA inspections for AI-based quality inspection systems we have deployed.
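As an illustration of the statistical-analysis step, here is a hedged sketch of the pass/fail evaluation in a performance qualification: it scores predictions on an independent test set, computes a 95% Wilson score confidence interval for accuracy, and compares the interval's lower bound against an acceptance criterion fixed before testing. The 0.95 criterion and the choice of the Wilson interval are assumptions for the example; the actual criteria and statistical method belong in the approved PQ protocol.

```python
import math

# Acceptance criteria fixed in the PQ protocol *before* testing begins.
# The value below is illustrative, not a regulatory requirement.
ACCEPTANCE_CRITERIA = {"accuracy_lower_bound": 0.95}

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

def run_pq(predictions: list[int], labels: list[int]) -> dict:
    """Evaluate model predictions against the predefined acceptance
    criteria; returns a record suitable for the PQ summary report."""
    if not labels or len(predictions) != len(labels):
        raise ValueError("test set is empty or predictions/labels mismatch")
    correct = sum(p == y for p, y in zip(predictions, labels))
    n = len(labels)
    lower, upper = wilson_interval(correct, n)
    passed = lower >= ACCEPTANCE_CRITERIA["accuracy_lower_bound"]
    return {
        "n_samples": n,
        "point_accuracy": correct / n,
        "ci_95": (round(lower, 4), round(upper, 4)),
        "criterion": ACCEPTANCE_CRITERIA["accuracy_lower_bound"],
        "result": "PASS" if passed else "FAIL",
    }
```

Requiring the lower confidence bound, rather than the point estimate, to clear the criterion is the conservative reading: for example, 985 correct out of 1,000 independent samples gives an interval of roughly (0.975, 0.991), so the study would pass the 0.95 criterion with margin to spare.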