Your GPUs are probably half idle
An NVIDIA A100 provides 312 TFLOPS of FP16 compute. An H100 provides 989 TFLOPS. These are the numbers on the data sheet. The numbers in production are different. Typical AI training workloads achieve 30–50% of theoretical peak throughput. Inference workloads often achieve less. The gap between the hardware’s capability and the software’s utilisation of that capability represents wasted compute budget — compute that was purchased (or rented) but not used.
This underutilisation is not a hardware deficiency. It is a software architecture problem — specifically, a mismatch between how the workload is structured and how the GPU hardware executes work. The GPU provides massive parallelism (thousands of cores), high-bandwidth memory (2–3 TB/s on modern architectures), and specialised compute units (tensor cores). Exploiting this hardware requires that the workload is structured to saturate these resources simultaneously. When it is not — when the workload serialises operations that could be parallel, when memory access patterns waste bandwidth, or when the kernel launch overhead dominates the compute time — the GPU sits partially idle while the wall-clock time extends beyond what the hardware could deliver.
Where does the GPU utilisation go?
GPU underutilisation has specific, diagnosable causes. Identifying which cause dominates a specific workload is the first step toward recovering the wasted compute.
Memory bandwidth bottleneck. The GPU’s compute throughput exceeds its memory bandwidth by a ratio that has grown with each hardware generation. An A100 provides 2 TB/s of HBM2e bandwidth against 312 TFLOPS of FP16 compute, a ratio of roughly 156 FLOPs per byte of memory traffic. A kernel that performs only one operation per FP16 value it reads is therefore memory-bandwidth-bound, not compute-bound. Many common operations (element-wise operations, batch normalisation, activation functions, small matrix multiplications) fall into this category. The compute units sit idle waiting for data from memory, regardless of how many compute units are available.
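The roofline arithmetic behind this can be sketched in a few lines. The peak figures are the A100 data-sheet numbers quoted above; the example kernel is a hypothetical element-wise FP16 add (one FLOP per element, two loads and one store):

```python
# Classify a kernel as compute- or bandwidth-bound via the roofline
# model's "ridge point". A100 data-sheet figures from the text.
PEAK_FP16_TFLOPS = 312    # tensor-core FP16 peak
PEAK_BANDWIDTH_TBS = 2.0  # HBM2e bandwidth

def ridge_point(peak_tflops, peak_bw_tbs):
    """FLOPs per byte needed to saturate compute: peak FLOPs / peak bytes."""
    return peak_tflops / peak_bw_tbs

def classify(flops, bytes_moved):
    """A kernel below the ridge point is bandwidth-bound; above it, compute-bound."""
    intensity = flops / bytes_moved
    ridge = ridge_point(PEAK_FP16_TFLOPS, PEAK_BANDWIDTH_TBS)
    return intensity, ("bandwidth-bound" if intensity < ridge else "compute-bound")

# Element-wise FP16 add: 1 FLOP per element, 6 bytes moved (2 loads + 1 store, 2 B each)
intensity, bound = classify(flops=1, bytes_moved=6)
print(f"ridge point: {ridge_point(PEAK_FP16_TFLOPS, PEAK_BANDWIDTH_TBS):.0f} FLOP/byte")
print(f"element-wise add: {intensity:.3f} FLOP/byte -> {bound}")
```

At 0.167 FLOPs per byte against a ridge point of 156, the element-wise add is nearly three orders of magnitude short of the intensity needed to saturate the compute units.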
The symptom is measurable: Nsight Compute’s roofline analysis shows the kernel operating well below the compute roofline but near the memory bandwidth ceiling. The fix depends on the operation: kernel fusion (combining multiple bandwidth-bound operations into a single kernel that reuses data in registers or shared memory), data layout optimisation (ensuring coalesced memory access patterns), and mixed-precision computation (using FP16 or INT8 to halve or quarter the memory bandwidth requirement per operation).
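The payoff from kernel fusion can be estimated before writing any CUDA: unfused, each element-wise operation reads and writes the full tensor through HBM; fused, there is one read and one write, with intermediates held in registers. A minimal traffic model (the tensor size and the four-operation chain are illustrative assumptions, not figures from a real workload):

```python
def traffic_bytes(n_elements, elem_size, n_ops, fused):
    """Bytes moved through HBM for a chain of element-wise ops.
    Unfused: each op reads and writes the full tensor (2 passes per op).
    Fused: one read and one write total; intermediates stay in registers."""
    passes = 2 if fused else 2 * n_ops
    return n_elements * elem_size * passes

n = 1 << 20  # 1M FP16 elements (illustrative size)
unfused = traffic_bytes(n, 2, n_ops=4, fused=False)  # e.g. bias, activation, scale, mask
fused = traffic_bytes(n, 2, n_ops=4, fused=True)
print(f"unfused: {unfused / 1e6:.1f} MB, fused: {fused / 1e6:.1f} MB, "
      f"reduction: {unfused // fused}x")
```

For a bandwidth-bound chain, the 4× reduction in memory traffic translates almost directly into a 4× reduction in kernel time.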
Low occupancy. GPU occupancy measures the fraction of the GPU’s available warps (groups of 32 threads) that are active simultaneously. Low occupancy means the GPU cannot hide memory latency through warp switching — when one warp stalls on a memory access, there are not enough other warps ready to execute, and the compute units sit idle. Common causes: excessive register usage per thread (reducing the number of threads that fit in a streaming multiprocessor), excessive shared memory usage per thread block (reducing the number of blocks that can co-reside), or grid dimensions that do not saturate the GPU’s streaming multiprocessors.
The diagnostic is direct: Nsight Compute reports achieved occupancy and the limiting factor (registers, shared memory, or block size). The fix requires adjusting the kernel’s resource usage — reducing register pressure through algorithmic restructuring, reducing shared memory usage through tiling strategies, or increasing the grid size to provide more concurrent blocks.
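The occupancy calculation Nsight Compute performs can be sketched with a simplified model using A100 per-SM limits (64 warps, 65,536 registers, 164 KB shared memory, 32 resident blocks). Real hardware applies allocation granularities this model ignores, so treat the result as an estimate; the example kernel's resource figures are hypothetical:

```python
# Simplified theoretical-occupancy model for an A100 SM.
SM_MAX_WARPS = 64
SM_REGISTERS = 65536
SM_SHARED_BYTES = 164 * 1024
SM_MAX_BLOCKS = 32

def occupancy(threads_per_block, regs_per_thread, smem_per_block):
    """Return (theoretical occupancy, limiting resource) for one SM."""
    warps_per_block = (threads_per_block + 31) // 32
    blocks_allowed = {
        "warps": SM_MAX_WARPS // warps_per_block,
        "registers": SM_REGISTERS // (regs_per_thread * threads_per_block),
        "shared memory": SM_SHARED_BYTES // smem_per_block if smem_per_block else SM_MAX_BLOCKS,
        "blocks": SM_MAX_BLOCKS,
    }
    limiter = min(blocks_allowed, key=blocks_allowed.get)
    resident_warps = blocks_allowed[limiter] * warps_per_block
    return resident_warps / SM_MAX_WARPS, limiter

# A register-heavy kernel: 256 threads/block, 96 registers/thread, 16 KB shared memory
occ, limiter = occupancy(256, 96, 16 * 1024)
print(f"theoretical occupancy: {occ:.0%}, limited by {limiter}")
```

Here register pressure caps the SM at two resident blocks (16 of 64 warps), so only a quarter of the warp slots are available to hide memory latency.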
Host-device serialisation. Every kernel launch, memory transfer, and synchronisation point between the CPU host and the GPU device creates a serialisation boundary where the GPU may be idle waiting for the host. In workloads with many small operations — such as model inference with complex branching logic, or training loops with frequent metric computation on the CPU — the cumulative host-device overhead can dominate the execution time. We have profiled inference pipelines where the GPU spent more time idle between kernel launches than it spent executing kernels.
The fix is to reduce the number of host-device boundaries: batching operations into larger kernels, using CUDA graphs to replay a sequence of operations without per-launch overhead, moving decision logic from the CPU to the GPU where possible, and overlapping data transfer with computation using CUDA streams.
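The point at which launch overhead dominates is easy to model. The ~5 µs per-launch overhead used below is an assumed figure (actual overhead varies by driver, platform, and launch method), and the model pessimistically assumes every launch serialises with the host:

```python
def gpu_busy_fraction(n_kernels, kernel_us, launch_overhead_us=5.0):
    """Fraction of wall-clock time the GPU spends computing when every
    kernel launch serialises with the host (no graphs, no overlap)."""
    compute = n_kernels * kernel_us
    total = n_kernels * (kernel_us + launch_overhead_us)
    return compute / total

# 1,000 kernels of 10 us each: the GPU computes for only about two-thirds of the time
print(f"{gpu_busy_fraction(1000, 10):.0%}")
# Batching the same work into 10 kernels of 1,000 us recovers nearly all of it
print(f"{gpu_busy_fraction(10, 1000):.1%}")
```

The total compute is identical in both cases; only the number of host-device boundaries changes. This is the effect CUDA graphs exploit, by replaying the whole sequence with a single launch.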
Inefficient kernel implementations. Custom CUDA kernels that do not exploit the hardware’s memory hierarchy, warp-level primitives, or tensor cores leave performance on the table. A naive matrix multiplication that does not use shared memory tiling achieves 5–10% of the performance of a cuBLAS implementation. A convolution kernel that does not use tensor cores on Volta+ architectures achieves a fraction of the hardware’s potential throughput.
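The cost of skipping shared-memory tiling can be seen in a first-order count of global-memory loads: a naive kernel re-reads a full row of A and column of B for every output element, while a tiled kernel loads each element of A and B once per tile it serves. A rough model (the matrix and tile sizes are illustrative, and cache effects are ignored):

```python
def global_loads(n, tile=1):
    """Approximate global-memory loads for an n x n matrix multiplication.
    Naive (tile=1): each of n^2 outputs loads a row and a column -> 2 n^3 loads.
    Tiled: each loaded element is reused across a tile x tile block of
    outputs, cutting loads by the tile width -> 2 n^3 / tile."""
    return 2 * n**3 // tile

n = 4096
naive = global_loads(n)
tiled = global_loads(n, tile=32)  # 32x32 shared-memory tiles
print(f"tiling cuts global loads by {naive // tiled}x")
```

Library implementations such as cuBLAS layer further techniques on top (register blocking, tensor-core instructions, double-buffered loads), which is why hand-written naive kernels land so far behind them.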
The three reasons GPU projects don’t work out often trace back to these utilisation failures: the hardware was adequate, but the software architecture did not exploit it.
Why profiling is not optional
The causes of GPU underutilisation are not visible from wall-clock timing alone. A training loop that takes 10 hours does not indicate whether the GPU was 90% utilised (near-optimal; further gains require algorithmic changes or hardware upgrades) or 30% utilised (significant room for improvement through software optimisation). The only way to distinguish these cases is profiling.
NVIDIA’s Nsight Systems provides timeline-level profiling: when the GPU was active, when it was idle, where the host-device serialisation points are, and how kernel launches overlap with data transfers. Nsight Compute provides kernel-level profiling: roofline analysis, occupancy analysis, memory throughput analysis, and warp-level execution statistics. Together, they provide a complete picture of where the utilisation is lost and what intervention would recover it.
The investment in profiling is small relative to the compute cost it can recover. A workload running on 8× A100 GPUs at a cloud cost of £25 per GPU-hour is spending £200/hour. A profiling session that identifies a 2× throughput improvement halves the compute cost for the lifetime of the workload; the profiling ROI is measured in days, not months.
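The payback arithmetic is worth making explicit. A sketch using the figures above, with the cost of the profiling effort as an assumed illustrative input (it is not a figure from the text):

```python
def payback_days(gpu_count, cost_per_gpu_hour, speedup, profiling_cost):
    """Days of post-optimisation running before the saved spend covers
    the profiling effort. A 2x speedup halves the hours needed for the
    same work, so it saves (1 - 1/speedup) of the hourly spend."""
    hourly_spend = gpu_count * cost_per_gpu_hour
    saved_per_hour = hourly_spend * (1 - 1 / speedup)
    return profiling_cost / saved_per_hour / 24

# 8x A100 at £25/GPU-hour with a 2x speedup; £10,000 profiling cost is assumed
print(f"payback: {payback_days(8, 25, 2, 10_000):.1f} days")
```

At £100/hour saved, even a profiling engagement costing tens of thousands of pounds pays back within weeks of continuous running.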
The organisational pattern we see repeatedly
The pattern we encounter in our GPU Performance Audit engagements is consistent: the team has a workload running on GPU infrastructure, the workload is “too slow” (training takes too long, inference latency is too high, throughput does not meet the production requirement), and the proposed solution is more GPUs or better GPUs. Profiling reveals that the existing GPUs are 30–50% utilised, and the performance gap can be closed through software optimisation rather than hardware procurement.
The optimisation path is systematic: profile → identify the dominant bottleneck (memory bandwidth, occupancy, serialisation, kernel efficiency) → apply the targeted fix → profile again → move to the next bottleneck. Each iteration recovers utilisation that translates directly to throughput improvement and compute cost reduction.
If your GPU workloads are not achieving the throughput you expected from the hardware — and the diagnosis has not included systematic profiling — a GPU Performance Audit identifies the specific utilisation gaps and the interventions that close them. Our GPU engineering practice starts with the profiling data, not the hardware upgrade proposal.