How to Increase GPU Performance for AI: Batch Sizing, Occupancy, and Operator Fusion

How to increase GPU utilization for AI workloads: batch sizing, kernel occupancy, memory coalescing, operator fusion, and a profiling-first approach.

Written by TechnoLynx. Published on 08 May 2026.

Increasing GPU performance for AI workloads is not primarily about changing hardware — it’s about using the hardware you have more effectively. In our experience, most production AI inference systems operate at 30–60% GPU utilization when first deployed. Getting to 80–90% is almost always an engineering problem, not a budget problem.

The techniques that actually move the needle, in rough order of impact, are: batch sizing, operator fusion, memory access optimization, and kernel occupancy tuning. Each addresses a different constraint. Applying the wrong fix for the bottleneck produces no improvement.

Profile First

This cannot be overstated. The approaches below target different bottlenecks — memory bandwidth, compute throughput, launch overhead, CPU synchronization. Without profiling, you don’t know which one limits your workload. The profiling workflow using Nsight Systems and Nsight Compute is covered in GPU Accelerating RF Signal Propagation Simulation, and the same methodology applies to any AI workload.

The one-line decision: run Nsight Systems and confirm GPU utilization, then check if the idle periods are compute gaps, memory transfer stalls, or CPU-side overhead.
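
Before committing to a full Nsight Systems trace, a coarse utilization sample can confirm whether there is a problem worth tracing at all. The sketch below is a minimal example, assuming the pynvml module (installable as nvidia-ml-py or pynvml) is available and the workload runs in another process; it is not a substitute for the Nsight timeline, which shows where the idle gaps actually come from.

import time
import pynvml

# Sample coarse GPU and memory-controller utilization once per second.
# Low or bursty numbers suggest idle gaps worth tracing with Nsight Systems.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
for _ in range(60):
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    print(f"GPU {util.gpu}%  memory controller {util.memory}%")
    time.sleep(1)
pynvml.nvmlShutdown()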

Batch Sizing

For most AI inference workloads, increasing batch size is the single highest-impact change for throughput. GPU hardware is designed for massively parallel execution. A batch of 1 leaves the vast majority of compute units idle. A batch of 32 amortizes kernel launch overhead and fills more of the available warp slots.

The relationship between batch size and throughput follows a curve:

  • Batch 1–4: Typically memory-bandwidth-limited, low arithmetic intensity. Most of the GPU is idle.
  • Batch 8–32: Throughput increases near-linearly for many models. This is the efficient operating region for many inference scenarios.
  • Batch 64–256: Compute-bound for most transformer models. Throughput gains flatten because, past the roofline ridge point, the compute roof rather than memory bandwidth caps performance.
  • Batch >256: Typically memory-bound again for LLMs due to KV cache growth; CNN architectures tend to remain compute-bound.

The constraint is latency: higher batch size increases time-to-first-response. For latency-sensitive APIs (p99 < 100ms SLA), practical batch sizes are limited. For throughput-optimized offline inference, batch size should be pushed to the memory limit.

How to find the optimal batch size with a simple sweep (assumes `model` and `input_shape` are already defined):

import time
import torch

model = model.cuda().eval()
with torch.inference_mode():
    for batch_size in [1, 2, 4, 8, 16, 32, 64, 128]:
        x = torch.randn(batch_size, *input_shape).cuda()
        # Warmup so allocator behaviour and lazy initialization don't skew timings
        for _ in range(5):
            _ = model(x)
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(50):
            _ = model(x)
        torch.cuda.synchronize()
        elapsed = (time.perf_counter() - t0) / 50
        print(f"Batch {batch_size}: {elapsed*1000:.1f}ms, "
              f"{batch_size/elapsed:.0f} samples/s")

Operator Fusion

Unfused inference executes each operation as a separate GPU kernel: linear projection, activation, another linear projection, layer norm — each reads from and writes back to HBM. Fused kernels chain these operations, keeping intermediate results in registers or shared memory and eliminating multiple HBM round-trips.

Concrete fusion opportunities for transformer inference:

Unfused operations | Fused version | Benefit
Q, K, V projection → attention → softmax → weighted sum | FlashAttention | 2–4x attention kernel speedup
LayerNorm → linear projection | Fused kernel (custom or Triton) | 1.3–1.8x
Element-wise activation + gate multiply (SwiGLU) | Fused kernel | 1.5–2x for this operation
Residual add + LayerNorm | apex FusedLayerNorm, or torch.compile | 1.2–1.5x

torch.compile with mode="reduce-overhead" or mode="max-autotune" performs automatic fusion using the inductor backend. This is the first thing to try before writing custom fused kernels:

model = torch.compile(model, mode="max-autotune")

In our experience, torch.compile delivers 15–40% throughput improvement on modern transformer architectures with no code changes beyond this single line.
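
One way to see the effect beyond the end-to-end speedup is to count CUDA kernels before and after compilation. The sketch below is an illustrative check using torch.profiler on a SwiGLU-style element-wise chain; the exact kernel names and counts depend on the PyTorch version and Inductor configuration.

import torch
from torch.profiler import profile, ProfilerActivity

def swiglu(x, gate):
    # silu followed by a multiply: two element-wise ops, typically two
    # separate kernels (and two HBM round-trips) in eager mode
    return torch.nn.functional.silu(gate) * x

compiled_swiglu = torch.compile(swiglu)

x = torch.randn(4096, 4096, device="cuda")
gate = torch.randn(4096, 4096, device="cuda")

for name, fn in [("eager", swiglu), ("compiled", compiled_swiglu)]:
    fn(x, gate)                 # warmup; triggers compilation for the compiled variant
    torch.cuda.synchronize()
    with profile(activities=[ProfilerActivity.CUDA]) as prof:
        fn(x, gate)
        torch.cuda.synchronize()
    print(f"--- {name} ---")
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))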

Kernel Occupancy

Occupancy is the ratio of active warps to the maximum number of warps an SM can support. Low occupancy means the SM has idle cycles it cannot fill due to resource constraints (registers, shared memory, or block configuration).

Check occupancy with Nsight Compute: look at the “Achieved Occupancy” metric and compare it to the theoretical maximum. If achieved occupancy is below 50%, investigate:

  • Register pressure: High per-thread register usage limits how many threads can reside on an SM simultaneously. Compile with --maxrregcount=64 to cap registers and check whether spilling to local memory occurs.
  • Shared memory per block: Large shared memory allocations limit concurrent blocks per SM. Check with --ptxas-options=-v.
  • Block size: Very small block sizes (e.g., 32 threads) waste scheduler slots. 128–256 threads per block is a common starting point.

Occupancy is not always the binding constraint — a memory-bound kernel at 50% occupancy may already be saturating HBM bandwidth. Increasing occupancy improves throughput only for compute-bound kernels with insufficient warps to hide latency.
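
As a back-of-the-envelope illustration of how register pressure caps occupancy, the sketch below computes resident warps from per-thread register usage. The hardware figures are assumptions chosen for illustration (65,536 registers and a 48-warp cap per SM; real limits vary by architecture), and the calculation ignores shared memory and block-count limits.

# Illustrative occupancy estimate driven only by register pressure.
# The hardware figures below are assumed; check your GPU's data sheet or
# Nsight Compute's occupancy section for the real limits.
REGS_PER_SM = 65_536       # 32-bit registers per SM (assumed)
MAX_WARPS_PER_SM = 48      # resident-warp cap per SM (assumed)
THREADS_PER_BLOCK = 256    # 8 warps per block

def occupancy(regs_per_thread: int) -> float:
    regs_per_block = regs_per_thread * THREADS_PER_BLOCK
    blocks_per_sm = REGS_PER_SM // regs_per_block
    warps_resident = blocks_per_sm * THREADS_PER_BLOCK // 32
    return min(warps_resident, MAX_WARPS_PER_SM) / MAX_WARPS_PER_SM

for regs in (128, 96, 64, 32):
    print(f"{regs} registers/thread -> {occupancy(regs):.0%} occupancy")

With these assumed numbers, dropping from 128 to 64 registers per thread roughly doubles achieved occupancy, which is exactly the kind of change --maxrregcount targets.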

Memory Coalescing

For custom kernels or cases where profiling shows low memory throughput, check memory access coalescing. A coalesced access is when 32 consecutive threads (a warp) access 32 consecutive memory addresses — the GPU satisfies this with a single memory transaction.

Signs of uncoalesced access:

  • Nsight Compute shows an L1/TEX cache hit rate near 0% and a low L2 cache hit rate
  • Memory throughput well below the bandwidth ceiling even though the kernel is classified as memory-bound
  • Global Memory Load Efficiency metric below 50%

Coalescing problems are common when row-major data is traversed in column-major order, or in transpose operations without shared-memory tiling. The fix is to reorganize the data layout or to use shared memory as a staging buffer loaded with coalesced accesses.
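
At the PyTorch level, the closest analogue is copying from a transposed (strided) view, which is effectively a naive transpose with strided, uncoalesced reads on one side. The timing sketch below is illustrative only; the measured gap depends on the GPU and on how PyTorch's copy kernel handles the strides.

import torch

a = torch.randn(8192, 8192, device="cuda")
out = torch.empty(8192, 8192, device="cuda")

def time_copy(src, iters=50):
    # Time a device-to-device copy with CUDA events
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        out.copy_(src)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

print(f"copy from contiguous source:       {time_copy(a):.2f} ms")
print(f"copy from transposed/strided view: {time_copy(a.t()):.2f} ms")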

Asynchronous Data Loading

A common bottleneck that profiling reveals but developers overlook: the GPU is idle because the next batch isn’t ready yet. The CPU is busy preprocessing or loading data while the GPU waits.

Fix with CUDA streams and prefetching:

# PyTorch DataLoader with pinned memory enables async H2D transfer
from torch.utils.data import DataLoader

loader = DataLoader(
    dataset,            # assumes `dataset` is defined elsewhere
    batch_size=32,
    num_workers=4,
    pin_memory=True,
    prefetch_factor=2
)

pin_memory=True places batches in pinned (page-locked) host memory, enabling faster DMA transfers and asynchronous copies. prefetch_factor=2 has each worker keep two batches loaded ahead of consumption, so the next batch is ready while the GPU processes the current one.
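
When the DataLoader alone is not enough, the host-to-device copy itself can be overlapped with compute on a side CUDA stream. The sketch below is a minimal prefetcher under the assumption that each batch is a single CPU tensor from a loader created with pin_memory=True (non_blocking copies only overlap when the source is pinned); loaders yielding nested batch structures need a slightly more general version.

import torch

class CUDAPrefetcher:
    """Overlap host-to-device copies with compute using a side stream."""

    def __init__(self, loader, device="cuda"):
        self.loader = iter(loader)
        self.device = device
        self.stream = torch.cuda.Stream()
        self._preload()

    def _preload(self):
        try:
            batch = next(self.loader)
        except StopIteration:
            self.next_batch = None
            return
        with torch.cuda.stream(self.stream):
            # Asynchronous copy; only overlaps if the source tensor is pinned
            self.next_batch = batch.to(self.device, non_blocking=True)

    def __iter__(self):
        return self

    def __next__(self):
        if self.next_batch is None:
            raise StopIteration
        # Make the compute stream wait until the prefetch copy has finished
        torch.cuda.current_stream().wait_stream(self.stream)
        batch = self.next_batch
        # Tell the caching allocator this tensor is now used on the compute stream
        batch.record_stream(torch.cuda.current_stream())
        self._preload()
        return batch

Usage is a drop-in wrap around the existing loop: for batch in CUDAPrefetcher(loader): output = model(batch).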

Performance Improvement Checklist

  • Profile with Nsight Systems — confirm GPU utilization and identify idle gaps
  • Increase batch size to the maximum allowed by latency SLA and VRAM
  • Apply torch.compile(model, mode="max-autotune") as the first code change
  • Enable pinned memory and prefetching in the data pipeline
  • Check for synchronous CUDA operations blocking the CPU (.item() or .cpu().numpy() on GPU tensors in hot loops); see the sketch after this list
  • Profile specific slow kernels with Nsight Compute — check occupancy and memory efficiency
  • For custom kernels: verify memory coalescing with Global Memory Load Efficiency metric
  • Consider operator fusion for repeated sequences of element-wise operations
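
On the synchronization point above, the pattern below shows the common pitfall and its fix. The metric being accumulated is a hypothetical example, but the mechanism is general: every .item() call forces the CPU to wait for the GPU.

import torch

# Pitfall: calling .item() every iteration forces a CPU-GPU sync each time
# (`batches` and `model` are assumed to be defined elsewhere)
total = 0.0
for batch in batches:
    score = model(batch).mean()
    total += score.item()            # implicit synchronization every iteration

# Fix: accumulate on the GPU and synchronize once at the end
total = torch.zeros((), device="cuda")
for batch in batches:
    score = model(batch).mean()
    total += score.detach()          # stays on the GPU, no sync
result = total.item()                # single sync after the loop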

In brief

GPU performance improvement for AI starts with batch sizing (highest leverage, zero kernel work), proceeds through torch.compile-based operator fusion (low effort, significant gain), and then addresses specific kernel bottlenecks identified by profiling. Occupancy tuning and memory coalescing are meaningful only for the specific kernels where profiling confirms they are the binding constraints. Profiling before every optimization is not optional — it’s how you avoid spending a week optimizing the wrong kernel.
