## The category determines the validation effort

GAMP software categories are the classification framework that determines how much validation effort a computerised system requires in a pharmaceutical environment. The principle is simple: the more complex and configurable the software, the more thorough the validation it requires. But applying this framework to modern software, particularly AI and machine learning, requires careful interpretation.

The ISPE’s GAMP 5 framework defines four active software categories (Category 2 was removed in the current edition):

**Category definitions and validation requirements**

| Category | Name | Description | Validation approach | Examples |
| --- | --- | --- | --- | --- |
| 1 | Infrastructure software | Software that provides the computing environment | Qualification: verify installation and configuration | Operating systems, databases, virtualisation platforms, network firmware |
| 3 | Non-configured products | Software used as delivered, without configuration | Verification of intended use, vendor documentation review | Laboratory instruments with embedded firmware, standard calculators |
| 4 | Configured products | Software configured for the specific application | Configuration verification, functional testing of configured features | ERP systems, LIMS, MES, SCADA platforms, CRM with workflow configuration |
| 5 | Custom applications | Software developed specifically for the intended use | Full lifecycle validation: requirements, design, coding, testing | Custom manufacturing control systems, bespoke analytics applications |

The validation effort increases with the category number. Category 1 systems require documented evidence that they are installed correctly and operate as expected, but no detailed functional testing. Category 5 systems require full lifecycle documentation: user requirements, functional specifications, design specifications, code review, unit testing, integration testing, and user acceptance testing.
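The scaling of effort with category can be sketched as a simple lookup. This is an illustrative sketch only: the deliverable lists below are simplified summaries of the table above, not a complete regulatory checklist.

```python
# Illustrative mapping of GAMP 5 software categories to typical validation
# deliverables. Deliverable lists are simplified examples, not a complete
# or authoritative regulatory checklist.
GAMP_CATEGORIES = {
    1: {"name": "Infrastructure software",
        "deliverables": ["installation qualification",
                         "configuration record"]},
    3: {"name": "Non-configured products",
        "deliverables": ["intended-use verification",
                         "vendor documentation review"]},
    4: {"name": "Configured products",
        "deliverables": ["configuration specification",
                         "configuration verification",
                         "functional testing of configured features"]},
    5: {"name": "Custom applications",
        "deliverables": ["user requirements",
                         "functional specification",
                         "design specification",
                         "code review",
                         "unit testing",
                         "integration testing",
                         "user acceptance testing"]},
}

def deliverables_for(category: int) -> list[str]:
    """Return the typical validation deliverables for a GAMP category."""
    return GAMP_CATEGORIES[category]["deliverables"]

# Effort grows with category number: a Category 5 system carries far more
# documentation than a Category 1 system.
assert len(deliverables_for(5)) > len(deliverables_for(1))
```

A structure like this can also serve as a starting point for validation-planning tooling, where each deliverable becomes a tracked document in the validation plan.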
## Where AI and ML systems fit

Machine learning systems do not fit cleanly into traditional GAMP categories, and this is the most common classification mistake pharmaceutical companies make with AI. Consider a computer vision model for pharmaceutical quality inspection:

- It uses a commercial ML framework (TensorFlow, PyTorch): Category 1 infrastructure.
- It may use a pre-trained model architecture (ResNet, YOLO): Category 3 or 4, depending on configuration.
- It is trained on facility-specific data with custom training pipelines: Category 5 custom development.
- It is deployed on configured inference hardware: Category 4 configured product.

The system spans multiple categories simultaneously. GAMP 5 Second Edition addresses this by directing teams to classify based on the system’s overall risk to product quality rather than forcing each component into a single category. In practice, this means the training pipeline and model are classified as Category 5 (custom), while the underlying framework and infrastructure are classified at their respective lower categories.

The detailed guidance on classifying and validating AI/ML under GAMP 5 covers the practical implications of this multi-category classification for validation strategy, change control, and ongoing compliance.

## What are the common classification mistakes?

- **Classifying commercial ML platforms as Category 3.** A pre-trained model used without modification may be Category 3; the same model fine-tuned on company data becomes Category 5 in its fine-tuning component.
- **Treating all custom code as Category 5.** A simple Python script that reformats CSV data is technically custom software, but its risk to product quality may not warrant full Category 5 validation. Apply critical thinking.
- **Ignoring infrastructure classification.** The GPU hardware, CUDA drivers, and container runtime that an ML model runs on are Category 1 infrastructure.
They still require qualification: a model validated on one GPU configuration is not automatically valid on another.

## The practical decision

Classification is not an academic exercise. It determines how much time, effort, and documentation the validation team must produce before the system can be used in production. Over-classification (treating everything as Category 5) wastes resources. Under-classification (treating custom AI models as Category 3) creates regulatory exposure. The answer is accurate classification based on system architecture, risk assessment, and the specific GAMP 5 guidance, followed by proportionate validation effort.

## How do you handle systems that span multiple GAMP categories?

Modern pharmaceutical systems frequently combine components from multiple GAMP categories. An MES (Manufacturing Execution System) may include Category 1 infrastructure components (operating system, database), Category 4 configured software (the MES platform), and Category 5 custom components (site-specific business logic, integrations with other systems).

Our validation approach for mixed-category systems: assess each component against its applicable category, but validate the system as an integrated whole. Component-level testing verifies individual functions. Integration testing verifies that components interact correctly. System-level testing (OQ, PQ) verifies end-to-end workflows that span multiple components.

The risk assessment for mixed-category systems focuses on the interfaces between components, where failures are most likely. A misconfigured integration between the MES and the LIMS may result in incorrect test results being associated with the wrong batch: a high-impact failure that occurs at the interface rather than within either system individually.

We document mixed-category systems using a system architecture diagram that maps each component to its GAMP category and identifies the interfaces between components.
This diagram becomes a key input to the risk assessment and a reference document for change control — when a change is proposed to one component, the diagram shows which interfaces (and therefore which integration tests) may be affected.
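The architecture map described above can be sketched as a small data model: components tagged with their GAMP category, interfaces listed explicitly, and a helper that answers the change-control question of which interfaces a proposed change may affect. This is a minimal sketch; the component and interface names are hypothetical examples, not a real system inventory.

```python
# Minimal sketch of a mixed-category system architecture map.
# Component and interface names are hypothetical illustrations.
from collections import namedtuple

Component = namedtuple("Component", ["description", "gamp_category"])
Interface = namedtuple("Interface", ["name", "endpoints"])

# Each component is mapped to its GAMP category, mirroring the
# architecture diagram described in the text.
components = {
    "OS/DB":     Component("Operating system and database", 1),
    "MES":       Component("MES platform", 4),
    "LIMS":      Component("LIMS platform", 4),
    "SiteLogic": Component("Site-specific business logic", 5),
}

# Interfaces are first-class entries because failures cluster there.
interfaces = [
    Interface("MES-LIMS batch/result exchange", {"MES", "LIMS"}),
    Interface("MES custom-logic hooks",         {"MES", "SiteLogic"}),
    Interface("MES database layer",             {"MES", "OS/DB"}),
]

def impacted_interfaces(changed_component: str) -> list[str]:
    """Interfaces (and therefore integration tests) that a change to
    one component may affect."""
    return [i.name for i in interfaces
            if changed_component in i.endpoints]

print(impacted_interfaces("LIMS"))  # ['MES-LIMS batch/result exchange']
```

Keeping the map in a machine-readable form like this, rather than only as a drawing, lets change control query it directly: a proposed change to the LIMS immediately surfaces the MES-LIMS exchange as the integration test to re-run.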