## Validation is evidence, not testing

GxP validation is the documented process of demonstrating that a system consistently performs according to predetermined specifications and quality attributes. For software in pharmaceutical environments, this means proving, with traceable evidence, that the system does what it claims to do, does not do what it should not do, and maintains data integrity throughout its operational lifecycle.

The distinction between validation and testing matters. Testing verifies that specific functions work under specific conditions. Validation demonstrates that the entire system is fit for its intended use in the production environment, with the users who will operate it, processing the data types it will encounter. A system can pass every unit test and still fail validation if its operational context was never assessed.

## The traditional validation lifecycle

Traditional GxP validation follows a V-model with three qualification stages:

| Stage | Full name | Purpose | Evidence |
|-------|-----------|---------|----------|
| IQ | Installation Qualification | System installed correctly per specifications | Hardware/software inventory, version verification, environment checks |
| OQ | Operational Qualification | System operates correctly under expected conditions | Functional tests, boundary tests, error handling, security verification |
| PQ | Performance Qualification | System performs reliably in production context | End-to-end workflows, user acceptance, stress testing, data integrity checks |

Each stage produces documentation: protocols, execution records, deviation reports, and summary reports. Together these form the validation evidence package, which must be available for regulatory inspection at any time during the system's operational life.

For deterministic software (ERP systems, LIMS, MES), this lifecycle works well. The system is installed once, validated once, and re-validated only when changes occur. The V-model assumes that a system validated at deployment remains valid until modified.
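Parts of an IQ protocol lend themselves to automation. As a minimal illustrative sketch (not a validated protocol, and with component names and versions invented for the example), an installation check might compare the deployed state against an approved specification and record any deviations:

```python
# Sketch of an automated IQ-style check: verify that installed component
# versions match the approved specification. All names/versions here are
# hypothetical examples, not a real system inventory.

APPROVED_SPEC = {
    "app-server": "2.4.1",
    "database": "14.9",
    "os-image": "ubuntu-22.04",
}

def installation_check(installed: dict) -> list:
    """Return a list of deviations between the installed state and the spec."""
    deviations = []
    for component, expected in APPROVED_SPEC.items():
        actual = installed.get(component)
        if actual != expected:
            deviations.append(f"{component}: expected {expected}, found {actual}")
    return deviations

# Each execution's result would be captured as part of the IQ evidence package.
result = installation_check(
    {"app-server": "2.4.1", "database": "14.9", "os-image": "ubuntu-22.04"}
)
print("PASS" if not result else f"FAIL: {result}")
```

The value of a script like this is not the check itself but the reproducible, timestampable record it produces for the evidence package.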
## Why AI systems break the traditional model

Machine learning systems are not deterministic. A computer vision model trained to detect particulate contamination in vials will produce different outputs as its model weights change, as new training data is incorporated, and as the input distribution shifts (different lighting conditions, new product formats, camera degradation). This means the fundamental assumption of traditional validation (validate once, monitor for changes) does not hold.

The FDA's 2022 Computer Software Assurance (CSA) guidance and GAMP 5 Second Edition both acknowledge this gap. CSA replaces the documentation-first mindset with a risk-based approach: systems that directly affect product quality require thorough assurance activities, while lower-risk systems require proportionate effort.

For AI systems, this translates to continuous validation: ongoing monitoring of model performance against predetermined acceptance criteria, with triggered revalidation when drift is detected. A practical continuous validation framework for AI in pharma includes three components: performance monitoring (accuracy, precision, and recall tracked against baselines), drift detection (statistical comparison of incoming data distributions against training data), and triggered requalification (formal reassessment when performance drops below acceptance thresholds).

Understanding when to apply full CSV versus the lighter CSA approach is a risk-based decision that depends on system classification, not a blanket policy choice.

## The cost of validation done wrong

Over-validation wastes engineering resources without improving compliance posture. Under-validation creates regulatory exposure that surfaces during inspections. The risk-based approach is not optional; it is the current regulatory expectation from both FDA and EMA.
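The three continuous-validation components described earlier (performance monitoring, drift detection, triggered requalification) can be sketched as a simple monitoring loop. The thresholds, metric names, and the crude mean-shift drift statistic below are illustrative assumptions; a production system would use predetermined, documented criteria and a proper statistical test:

```python
# Sketch of continuous validation: monitor metrics against acceptance
# criteria, check for input drift, and trigger requalification when
# either fails. All thresholds here are hypothetical examples.

from statistics import mean

ACCEPTANCE = {"accuracy": 0.97, "recall": 0.95}  # predetermined criteria
DRIFT_LIMIT = 0.15                               # max allowed mean shift

def check_performance(metrics: dict) -> list:
    """Return the metrics that fell below their acceptance thresholds."""
    return [m for m, floor in ACCEPTANCE.items() if metrics.get(m, 0.0) < floor]

def check_drift(training_sample: list, incoming_sample: list) -> bool:
    """Crude drift check via mean shift of one monitored feature.
    A real system would use a statistical test such as Kolmogorov-Smirnov."""
    return abs(mean(incoming_sample) - mean(training_sample)) > DRIFT_LIMIT

def requalification_needed(metrics, training_sample, incoming_sample) -> bool:
    # Either a failed acceptance criterion or detected drift triggers
    # formal reassessment of the model.
    return bool(check_performance(metrics)) or check_drift(
        training_sample, incoming_sample
    )
```

The key design point is that the triggers are predetermined and documented before deployment, so a requalification event is a planned outcome of the validation strategy rather than an ad-hoc reaction.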
Organisations that still apply uniform full CSV to every system, regardless of risk, are spending validation budget on low-risk systems while potentially under-resourcing validation of the high-risk AI systems where the compliance exposure actually sits.

## What does validation look like for modern cloud-based pharma systems?

Cloud-based pharmaceutical systems add infrastructure validation to the standard software validation lifecycle. The cloud provider's infrastructure (compute, storage, networking) must be qualified as suitable for GxP use. Major cloud providers (AWS, Azure, GCP) provide GxP qualification packages that document their infrastructure controls, but the pharmaceutical company remains responsible for validating the application layer.

The shared responsibility model in cloud environments maps to validation as follows: the cloud provider qualifies infrastructure (physical security, hardware redundancy, network availability), the software vendor validates the application (functional testing, security testing, data integrity), and the pharmaceutical company validates the configuration and business processes (user acceptance testing, SOP alignment, training).

Data residency and sovereignty requirements add complexity. Pharmaceutical data may be subject to regulations that restrict where it can be stored and processed. Validating a cloud-based system requires documenting which regions store data, how data flows between regions, and what controls prevent data from being processed in non-compliant jurisdictions.

Our validation approach for cloud-based systems includes a cloud infrastructure qualification document (leveraging the provider's GxP compliance packages), a shared responsibility matrix (documenting which controls each party owns), and application validation activities performed on the cloud-hosted system rather than in a separate environment.
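A shared responsibility matrix can be kept as a machine-readable artifact so that ownership gaps are detectable automatically. The sketch below, with illustrative control names and ownership assignments based on the three-party split described above, shows the idea:

```python
# Sketch of a machine-readable shared responsibility matrix. Control names
# and assignments are illustrative examples, not a complete matrix.

RESPONSIBILITY_MATRIX = {
    "physical_security":    "cloud_provider",
    "hardware_redundancy":  "cloud_provider",
    "network_availability": "cloud_provider",
    "functional_testing":   "software_vendor",
    "security_testing":     "software_vendor",
    "data_integrity":       "software_vendor",
    "user_acceptance":      "pharma_company",
    "sop_alignment":        "pharma_company",
    "training":             "pharma_company",
}

def controls_owned_by(party: str) -> list:
    """List the controls a given party is responsible for validating."""
    return [c for c, owner in RESPONSIBILITY_MATRIX.items() if owner == party]
```

Encoding the matrix as data rather than a static document makes it straightforward to verify during periodic review that every control has exactly one accountable owner.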
In our experience, this approach produces validation evidence that addresses both the traditional software validation requirements and the cloud-specific concerns that regulators increasingly ask about during inspections.