# The reference framework for pharma software validation

The GAMP Guide (Good Automated Manufacturing Practice), published by the International Society for Pharmaceutical Engineering (ISPE), is the industry-standard reference for validating computerised systems in pharmaceutical and regulated life-sciences environments. Now in its Second Edition (published 2022), GAMP 5 provides a structured, risk-based approach to determining the appropriate validation activities for any computerised system operating within GxP scope.

GAMP is not a regulation. It is a guidance framework that regulatory agencies, including the FDA, EMA, and MHRA, reference and expect companies to follow. An inspector will not cite GAMP 5 as a regulatory requirement, but they will expect validation approaches consistent with its principles.

## What the GAMP guide covers

| Topic area | GAMP 5 guidance | Practical use |
|---|---|---|
| Risk-based approach | Scale validation effort to system risk | Avoid over-validating low-risk systems and under-validating high-risk ones |
| Software categories | Categories 1, 3, 4, and 5 for classification | Determine appropriate validation activities per system type |
| V-model lifecycle | Requirements → Design → Build → Test → Release | Structure validation documentation and traceability |
| Supplier management | Assess and leverage supplier quality activities | Reduce redundant testing where supplier evidence is adequate |
| Data integrity | ALCOA+ principles applied to computerised systems | Design audit trails, access controls, and data integrity checks |
| Operational phase | Change control, periodic reviews, incident management | Maintain validated state throughout system operational life |
| Retirement | Decommissioning with data preservation | Archive data and documentation per retention requirements |

## Key changes in the Second Edition

The 2022 Second Edition substantially updates the original 2008 GAMP 5 to address technology developments that the original framework did not anticipate:

- **Critical thinking:** The most significant philosophical
change. The original GAMP 5 prescribed specific validation activities per software category. The Second Edition directs validation professionals to apply critical thinking: assessing which validation activities are appropriate based on system risk, complexity, and novelty rather than following prescriptive category-based protocols.
- **AI and machine learning:** New guidance on validating non-deterministic systems. It acknowledges that ML models cannot be validated using one-time testing approaches and introduces continuous validation as a concept: ongoing performance monitoring against predetermined acceptance criteria.
- **Agile development:** Recognises that pharmaceutical software is increasingly developed using iterative and agile methodologies, and provides guidance on maintaining validation compliance while using sprints, continuous integration, and incremental delivery.
- **Cloud and SaaS:** Addresses the validation implications of systems hosted by third-party cloud providers, covering shared responsibility models, data residency, and supplier qualification for cloud infrastructure.

The choice between traditional validation approaches and these updated methods is a risk-based decision on which Computer Software Assurance (CSA) and GAMP 5 now align.

## How to apply GAMP in practice

The guide is structured for reference, not sequential reading. A validation team typically uses it as follows:

1. **System assessment:** Classify the system using the GAMP software categories. Determine GxP relevance and risk level.
2. **Validation strategy:** Define the validation approach (traditional V-model, agile, or hybrid) based on system category, risk, and development methodology.
3. **Activity selection:** Choose specific validation activities (documentation, testing, reviews) proportionate to risk. High-risk systems get thorough testing; low-risk systems get targeted verification.
4. **Supplier engagement:** Assess supplier quality capabilities. Leverage supplier testing evidence where adequate.
Supplement with user testing where supplier evidence is insufficient.
5. **Lifecycle management:** Establish change control, periodic review, and incident management processes that maintain the validated state during the operational phase.

The guide’s value is not in following it prescriptively; the Second Edition explicitly discourages this. Its value is in providing a defensible framework that regulatory inspectors recognise and accept.

## How do you apply GAMP to modern agile development practices?

GAMP’s validation lifecycle was designed around waterfall development: requirements first, then design, then build, then test, then deploy. Agile development iterates through these phases repeatedly. Reconciling GAMP with agile requires adapting the documentation approach without compromising the regulatory intent.

The adaptation: maintain living requirements and design documents that evolve with each sprint, but snapshot them at release points for regulatory traceability. Each release candidate is validated against the requirements document as it stands at that point, and the traceability matrix reflects the current state of requirements-to-tests at each release.

Sprint-level testing replaces monolithic OQ/PQ execution. Each sprint produces tested functionality with documented evidence. The release validation aggregates sprint test results and supplements them with integration and regression testing; the validation report for each release references the sprint test evidence rather than re-executing all tests.

This approach maintains full traceability (every requirement has a linked test with documented evidence) while supporting iterative development. The regulatory requirement is not waterfall development; it is documented evidence that the system meets its requirements. Agile can produce this evidence as effectively as waterfall, provided the documentation discipline is maintained. We have deployed this agile-GAMP hybrid approach on four pharma software projects.
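The release-gate traceability check described above can be sketched as a small script. This is an illustrative sketch, not GAMP-prescribed tooling: the requirement and test identifiers and the `TestResult` structure are invented for the example. The idea is simply that every requirement must map to at least one executed, passing test with an evidence reference before a release snapshot is taken.

```python
# Illustrative traceability check for a release gate. Names and fields
# are hypothetical, not prescribed by GAMP 5.
from dataclasses import dataclass


@dataclass
class TestResult:
    test_id: str
    requirement_id: str
    passed: bool
    evidence_ref: str  # e.g. a CI run identifier or signed test record


def traceability_gaps(requirements: set[str],
                      results: list[TestResult]) -> dict[str, str]:
    """Return {requirement_id: reason} for requirements that block release."""
    gaps = {}
    covered = {r.requirement_id for r in results if r.passed and r.evidence_ref}
    for req in requirements:
        linked = [r for r in results if r.requirement_id == req]
        if not linked:
            gaps[req] = "no linked test"
        elif req not in covered:
            gaps[req] = "linked tests not passing or missing evidence"
    return gaps


requirements = {"URS-001", "URS-002", "URS-003"}
results = [
    TestResult("TC-01", "URS-001", True, "ci-run-118"),
    TestResult("TC-02", "URS-002", False, "ci-run-118"),
]

gaps = traceability_gaps(requirements, results)
for req, reason in sorted(gaps.items()):
    print(f"{req}: {reason}")
```

Run at each release point against the snapshotted requirements set, a check like this makes the "every requirement has a linked test" claim mechanically verifiable rather than a manual review step.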
The key enabler is automated testing: unit tests and integration tests that execute automatically with every code change provide continuous verification evidence. Manual validation activities (UAT, business process testing) are reserved for functionality that cannot be automatically tested.
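As a hedged sketch of how automated runs become durable verification evidence (the record fields, run identifiers, and linkage scheme are assumptions for illustration, not a prescribed format), a CI step can package each run's outcomes with a timestamp, the linked requirement IDs, and a content hash so the release validation report has a tamper-evident record to reference:

```python
# Illustrative sketch: turn an automated test run into a timestamped
# evidence record. Field names and identifiers are invented for the
# example; a real system would follow the site's data-integrity SOPs.
import hashlib
import json
from datetime import datetime, timezone


def make_evidence_record(run_id: str, results: dict[str, bool],
                         requirement_links: dict[str, str]) -> dict:
    """Build an evidence record linking test outcomes to requirements."""
    record = {
        "run_id": run_id,
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "results": [
            {"test_id": t, "passed": ok,
             "requirement_id": requirement_links.get(t, "UNLINKED")}
            for t, ok in sorted(results.items())
        ],
    }
    # A content hash makes later tampering with the record detectable.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record


record = make_evidence_record(
    run_id="ci-run-119",
    results={"TC-01": True, "TC-02": True},
    requirement_links={"TC-01": "URS-001", "TC-02": "URS-002"},
)
print(json.dumps(record, indent=2))
```

Because each record carries its own requirement linkage, the per-release validation report can cite run identifiers instead of re-executing tests, which is the aggregation pattern described above.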