Part 27 of 30 in the NVIDIA Dynamo & llm-d series

Online Llama 70B serving on 8×H100 achieves 42 tokens/sec with P99 TTFT under 500 ms. Reconfiguring for batch inference (max batch size raised from 128 to 2048 sequences, streaming disabled, scheduler preemption eliminated) pushes throughput to 127 tokens/sec with a TTFT of 8.4 seconds, which nobody cares about for overnight batch jobs. That's 3x throughput on identical hardware. Cost drops from $0.016 per 1K tokens (online) to $0.0054 (batch). For dataset processing, translation pipelines, or bulk summarization, batch mode cuts your bill by 66%. This post covers the Dynamo configuration changes and the throughput analysis behind them.

Online vs Offline Execution Modes

The fundamental difference is in scheduling priority:

# Online mode (default):
# - Minimize TTFT: process prefills immediately
# - Minimize TPOT: keep decode batch small enough for low latency
# - Preempt running sequences for new high-priority requests
# - Stream tokens as they're generated

# Offline mode:
# - Maximize tokens/second: pack as many sequences as possible
# - Process entire dataset: no new requests arrive during execution
# - No streaming: collect all outputs at the end
# - No preemption: every sequence runs to completion

Dynamo’s offline configuration:

# dynamo_batch_config.yaml
mode: offline
scheduler:
  max_batch_size: 512
  max_tokens_per_batch: 65536
  prefill_chunk_size: 8192
  enable_chunked_prefill: true
  preemption_mode: none  # No preemption in offline mode
  priority_policy: fifo  # Simple FIFO, no priority classes

router:
  enabled: false  # No routing needed for batch

worker:
  streaming: false
  output_buffer_size: 1048576  # Large output buffer (1M entries)

memory:
  gpu_memory_utilization: 0.95  # Aggressive — no need for headroom
  swap_space_gb: 128  # Use CPU swap freely
ℹ️ Note

In offline mode, gpu_memory_utilization can be set to 0.95 because there are no bursty prefill requests that could cause transient memory spikes. The entire workload is known in advance, so memory allocation is predictable.

Batch Size Optimization

The optimal batch size for offline processing is larger than online serving because latency constraints are removed:

def compute_optimal_batch_size(
    model_params: dict,
    gpu_memory_gb: float,
    gpu_memory_utilization: float,
    avg_seq_len: int,
    block_size: int = 16
) -> int:
    """Compute maximum concurrent sequences for offline mode."""
    model_memory = model_params["model_size_gb"]
    available = gpu_memory_gb * gpu_memory_utilization - model_memory

    # KV cache per block (Llama 70B example)
    kv_heads = model_params["kv_heads"]
    head_dim = model_params["head_dim"]
    num_layers = model_params["num_layers"]
    dtype_bytes = model_params["dtype_bytes"]

    bytes_per_block = (
        block_size * kv_heads * head_dim * 2 * dtype_bytes * num_layers
    )
    total_blocks = int(available * 1e9 / bytes_per_block)

    # Ceiling division: a partial block still occupies a whole block
    blocks_per_seq = (avg_seq_len + block_size - 1) // block_size
    max_seqs = total_blocks // blocks_per_seq

    return max_seqs
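As a sanity check on the formula, here is the same arithmetic inlined for a 7B-class model in FP16 on a single 80 GB GPU. The parameter values are illustrative assumptions; real capacity also depends on tensor-parallel layout, activation memory, and fragmentation:

```python
# Illustrative KV-cache capacity math: 7B-class model, FP16, one 80 GB GPU.
block_size, kv_heads, head_dim, num_layers, dtype_bytes = 16, 32, 128, 32, 2

# K and V per 16-token block, summed across all layers
bytes_per_block = block_size * kv_heads * head_dim * 2 * dtype_bytes * num_layers

available_gb = 80 * 0.95 - 14          # GPU budget minus ~14 GB of weights
total_blocks = int(available_gb * 1e9 / bytes_per_block)

avg_seq_len = 2048
max_seqs = total_blocks // (avg_seq_len // block_size)
```

Note how heavy the FP16 KV cache is here: each token costs half a megabyte across layers, which is exactly why the post leans on INT4/FP8 quantization to enlarge the KV budget.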
📊 Maximum Batch Size by Model and GPU — Offline Mode (0.95 util)

| Model | GPU | Model Mem (GB) | KV Budget (GB) | Max Seqs (2K avg) | Max Seqs (4K avg) |
|---|---|---|---|---|---|
| Llama 7B FP16 | A100-80GB | 14 | 62 | 1,984 | 992 |
| Llama 7B INT4 | A100-80GB | 4 | 72 | 2,304 | 1,152 |
| Llama 70B FP16 TP=4 | 4xA100 | 35/GPU | 148 total | 507 | 253 |
| Llama 70B INT4 TP=4 | 4xA100 | 9/GPU | 252 total | 864 | 432 |
| Llama 70B FP8 TP=2 | 2xH100 | 18/GPU | 126 total | 432 | 216 |

The Prefill-Decode Scheduling Strategy

In offline mode, all requests are available upfront. The scheduler can choose between two strategies:

Strategy 1: Prefill-First

Process all prefills before starting any decoding. Maximizes GPU utilization during prefill because prefill is compute-bound and benefits from large batch GEMMs:

class PrefillFirstScheduler:
    def __init__(self, max_prefill_batch: int, max_decode_batch: int):
        self.max_prefill_batch = max_prefill_batch
        self.max_decode_batch = max_decode_batch

    def schedule(self, waiting: list, running: list) -> dict:
        if waiting and len(running) == 0:
            # Phase 1: Prefill everything
            prefill_batch = waiting[:self.max_prefill_batch]
            return {"prefill": prefill_batch, "decode": []}
        elif running:
            # Phase 2: Decode everything
            decode_batch = running[:self.max_decode_batch]
            return {"prefill": [], "decode": decode_batch}
        else:
            return {"prefill": [], "decode": []}

Problem: this leaves GPUs idle between prefill and decode phases, and decode batches shrink as sequences finish.

Strategy 2: Continuous Batching (Adapted)

Interleave prefill and decode, replacing finished sequences with new prefills immediately:

class ContinuousBatchScheduler:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens

    def schedule(self, waiting: list, running: list) -> dict:
        decode_tokens = len(running)  # 1 token per running seq
        remaining_budget = self.max_tokens - decode_tokens

        # Fill remaining budget with prefills
        prefill_batch = []
        prefill_tokens = 0
        for req in waiting:
            if prefill_tokens + req.input_len <= remaining_budget:
                prefill_batch.append(req)
                prefill_tokens += req.input_len
            else:
                break

        return {
            "prefill": prefill_batch,
            "decode": running,
            "total_tokens": decode_tokens + prefill_tokens
        }
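A quick standalone check of the budget-filling loop above. The request objects and numbers are stand-ins, not Dynamo types:

```python
from types import SimpleNamespace

max_tokens = 8192
running = [object()] * 300                                      # 300 decoding seqs
waiting = [SimpleNamespace(input_len=1024) for _ in range(20)]  # queued prefills

# Same logic as ContinuousBatchScheduler.schedule
remaining = max_tokens - len(running)       # 7892 tokens of budget left
prefill, used = [], 0
for req in waiting:
    if used + req.input_len <= remaining:
        prefill.append(req)
        used += req.input_len
    else:
        break
# 7 prefills of 1024 tokens fit (7168 <= 7892); an 8th would overflow
```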

Continuous batching consistently beats prefill-first for offline throughput because it keeps the GPU busy:

📊 Prefill-First vs Continuous Batching — Llama 70B, 4xA100, 10K Requests

| Strategy | Total Time (s) | Throughput (tok/s) | GPU Util Avg | Idle Time (s) |
|---|---|---|---|---|
| Prefill-First | 482 | 4,150 | 72% | 135 |
| Continuous Batch | 328 | 6,098 | 91% | 12 |
| Continuous + Chunked | 312 | 6,410 | 93% | 8 |

Disabling Streaming Overhead

In online mode, each generated token triggers output processing, detokenization, and network I/O. In batch mode, these are unnecessary until all tokens are generated:

class BatchOutputCollector:
    def __init__(self):
        self.outputs = {}  # request_id -> list of token_ids

    def collect(self, request_id: str, token_id: int) -> None:
        """Accumulate token ids; no per-token detokenization."""
        self.outputs.setdefault(request_id, []).append(token_id)

    def finalize_all(self, tokenizer) -> dict:
        """Detokenize everything at the end."""
        results = {}
        for request_id, token_ids in self.outputs.items():
            results[request_id] = tokenizer.decode(
                token_ids, skip_special_tokens=True
            )
        return results
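To make the collect-then-decode pattern concrete, a toy end-to-end run. `ToyTokenizer` is invented for illustration; the real pipeline uses the model's tokenizer:

```python
class ToyTokenizer:
    def decode(self, ids, skip_special_tokens=True):
        # Maps each token id to a letter, just to show the bulk-decode flow.
        return "".join(chr(ord("a") + (i % 26)) for i in ids)

outputs = {}                       # request_id -> accumulated token ids
for step in range(3):              # three decode steps for two in-flight requests
    for rid in ("req-0", "req-1"):
        outputs.setdefault(rid, []).append(step)

# One bulk decode per request at the very end, instead of one per token
results = {rid: ToyTokenizer().decode(ids) for rid, ids in outputs.items()}
# results == {"req-0": "abc", "req-1": "abc"}
```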

The throughput gain from disabling streaming:

# Per-token overhead in streaming mode:
# - Detokenization: 0.02 ms
# - JSON serialization: 0.03 ms
# - SSE write: 0.02 ms
# - Total: 0.07 ms per token

# For batch of 256 sequences over 500 output tokens each:
# Streaming overhead: 256 * 500 * 0.07 ms = 8,960 ms = 8.96 s
# Batch detokenize: 256 * 500 * 0.001 ms = 0.128 s (bulk decode)
# Savings: 8.83 s over the course of generation

Dataset Processing Pipeline

A complete offline batch processing pipeline:

import json

class BatchProcessor:
    def __init__(self, dynamo_client, config):
        self.client = dynamo_client
        self.batch_size = config["batch_size"]
        self.max_output_tokens = config["max_output_tokens"]

    def process_dataset(self, input_path: str,
                        output_path: str) -> dict:
        """Process an entire dataset through the model."""
        # Load dataset
        with open(input_path) as f:
            dataset = [json.loads(line) for line in f]

        total_input_tokens = 0
        total_output_tokens = 0
        results = []

        # Process in chunks
        for batch_start in range(0, len(dataset), self.batch_size):
            batch = dataset[batch_start:batch_start + self.batch_size]
            prompts = [item["prompt"] for item in batch]

            # Submit batch — blocks until all complete
            outputs = self.client.batch_generate(
                prompts=prompts,
                max_tokens=self.max_output_tokens,
                temperature=0.0,  # Greedy for reproducibility
                stream=False
            )

            for item, output in zip(batch, outputs):
                results.append({
                    "id": item["id"],
                    "prompt": item["prompt"],
                    "completion": output.text,
                    "input_tokens": output.prompt_tokens,
                    "output_tokens": output.completion_tokens
                })
                total_input_tokens += output.prompt_tokens
                total_output_tokens += output.completion_tokens

        # Write results
        with open(output_path, "w") as f:
            for r in results:
                f.write(json.dumps(r) + "\n")

        return {
            "total_requests": len(dataset),
            "total_input_tokens": total_input_tokens,
            "total_output_tokens": total_output_tokens
        }
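For unit-testing the pipeline without a GPU, a stub client can mimic the interface the pipeline relies on. The `batch_generate` signature and output fields here are inferred from the code above, not a documented Dynamo API:

```python
from dataclasses import dataclass

@dataclass
class StubOutput:
    # The three fields BatchProcessor reads from each result.
    text: str
    prompt_tokens: int
    completion_tokens: int

class StubDynamoClient:
    """Offline stand-in for the real client, useful in CI."""
    def batch_generate(self, prompts, max_tokens, temperature, stream):
        return [
            StubOutput(text=f"echo: {p}",
                       prompt_tokens=len(p.split()),
                       completion_tokens=2)
            for p in prompts
        ]
```

Wire it in as `BatchProcessor(StubDynamoClient(), {"batch_size": 8, "max_output_tokens": 64})` to exercise the JSONL read/write path end to end.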

CLI Interface

# Submit a batch job to Dynamo
dynamo batch submit \
    --model meta-llama/Llama-2-70b-hf \
    --input-file dataset.jsonl \
    --output-file results.jsonl \
    --max-tokens 512 \
    --temperature 0.0 \
    --tp-size 4 \
    --batch-config dynamo_batch_config.yaml

# Monitor progress
dynamo batch status --job-id <job_id>

# Output:
# Job ID: abc123
# Status: running
# Progress: 7,842 / 10,000 requests (78.4%)
# Throughput: 6,210 tok/s
# ETA: 3m 42s

Throughput Scaling with Dataset Size

📊 Throughput at Scale — Llama 70B INT4, 4xA100

| Dataset Size | Avg Input Len | Avg Output Len | Throughput (tok/s) | Total Time |
|---|---|---|---|---|
| 100 | 512 | 256 | 5,420 | 4.7 s |
| 1,000 | 512 | 256 | 6,180 | 41.4 s |
| 10,000 | 512 | 256 | 6,350 | 6m 43s |
| 100,000 | 512 | 256 | 6,380 | 67m 12s |
| 10,000 | 2,048 | 512 | 5,870 | 14m 31s |
| 10,000 | 4,096 | 512 | 4,920 | 22m 18s |

Throughput stabilizes after approximately 1,000 requests as the continuous batch reaches steady state. Larger datasets amortize the startup cost (model loading, warmup) and approach peak throughput.

Steady-State Throughput by Input Length

- 256-token input: 6,520 tok/s
- 512-token input: 6,350 tok/s
- 1K-token input: 6,120 tok/s
- 2K-token input: 5,870 tok/s
- 4K-token input: 4,920 tok/s

Longer inputs reduce throughput because prefill consumes more of the token budget per scheduling step, leaving fewer slots for decode sequences.
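A back-of-envelope model of the effect: in steady state, every finished sequence consumed its input tokens as prefill and its output tokens as decode, so the decode share of the token budget shrinks as inputs grow. This is a simplification that ignores chunking and scheduling overhead:

```python
def decode_share(input_len: int, output_len: int) -> float:
    # Fraction of all scheduled tokens that are decode tokens in steady state.
    return output_len / (input_len + output_len)

# 512-in / 256-out: about a third of the budget is decode work
# 4096-in / 512-out: about a ninth -- prefill crowds out decode slots
```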

Memory Management for Batch Workloads

Batch workloads have predictable memory patterns. The entire dataset can be analyzed upfront to pre-compute memory requirements:

class BatchMemoryPlanner:
    def __init__(self, model_config, gpu_config):
        self.block_size_bytes = compute_block_bytes(model_config)
        self.total_gpu_blocks = compute_available_blocks(
            gpu_config, model_config
        )

    def plan(self, dataset_stats: dict) -> dict:
        """Pre-compute the memory plan for the dataset."""
        avg_total_len = (
            dataset_stats["avg_input_len"] +
            dataset_stats["avg_output_len"]
        )
        blocks_per_seq = avg_total_len // 16 + 1

        # Maximum concurrent sequences at the average length
        max_concurrent = self.total_gpu_blocks // blocks_per_seq

        # Account for variance: size against the P95 sequence length
        p95_total_len = dataset_stats["p95_total_len"]
        p95_blocks = p95_total_len // 16 + 1

        # Conservative: assume every slot may hold a P95-length sequence
        safe_concurrent = self.total_gpu_blocks // p95_blocks

        return {
            "max_concurrent": max_concurrent,
            "safe_concurrent": safe_concurrent,
            # Running at safe_concurrent should avoid swaps and preemption
            "recommended_batch_size": safe_concurrent,
        }
💡 Tip

For batch workloads with known sequence length distributions, set max-num-seqs to the safe_concurrent value rather than the theoretical maximum. This avoids preemption entirely, eliminating the wasted computation from recomputing evicted KV cache. Zero preemptions is the target for offline processing.
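A standalone sizing sketch that makes the tip concrete. The block counts and length percentiles are illustrative, not measured:

```python
block = 16
total_gpu_blocks = 30_000                       # illustrative KV capacity
avg_total = 2048 + 512                          # avg input + output tokens
p95_total = 3584 + 512                          # P95 input + output tokens

avg_blocks = avg_total // block + 1             # 161 blocks per sequence
p95_blocks = p95_total // block + 1             # 257 blocks per sequence

theoretical_max = total_gpu_blocks // avg_blocks   # ceiling if every seq were average
safe_max = total_gpu_blocks // p95_blocks          # use this for max-num-seqs
```

The gap between the two numbers is the price of zero preemptions: a noticeably smaller batch, in exchange for never recomputing evicted KV cache.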

Multi-Node Batch Processing

For datasets that require more throughput than a single node provides, Dynamo distributes across nodes:

class DistributedBatchCoordinator:
    def __init__(self, num_nodes: int):
        self.num_nodes = num_nodes

    def partition_dataset(self, dataset: list) -> list:
        """Partition dataset across nodes for parallel processing."""
        partitions = [[] for _ in range(self.num_nodes)]

        # Sort by input length for balanced partitioning
        dataset_sorted = sorted(dataset, key=lambda x: len(x["prompt"]))

        # Round-robin assignment (sorted ensures balanced token count)
        for i, item in enumerate(dataset_sorted):
            partitions[i % self.num_nodes].append(item)

        return partitions
# Multi-node batch processing
# Node 0: processes partition 0
dynamo batch submit --partition 0 --total-partitions 4 \
    --input-file dataset.jsonl --output-file results_0.jsonl

# Node 1: processes partition 1
dynamo batch submit --partition 1 --total-partitions 4 \
    --input-file dataset.jsonl --output-file results_1.jsonl

# Merge results
cat results_0.jsonl results_1.jsonl results_2.jsonl results_3.jsonl \
    | sort -t '"' -k 4 > results_merged.jsonl
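Sorting on a quoted JSON field is brittle if field order ever changes. A small Python merge keyed on `id` is sturdier — a sketch, assuming shards named `results_*.jsonl` as in the commands above:

```python
import glob
import json

def merge_shards(pattern: str, out_path: str) -> int:
    """Merge per-node result shards and restore dataset order by `id`."""
    records = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            records.extend(json.loads(line) for line in f if line.strip())
    records.sort(key=lambda r: r["id"])
    with open(out_path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
    return len(records)
```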
📊 Multi-Node Batch Scaling — Llama 70B INT4, 100K Requests

| Nodes | GPUs Total | Throughput (tok/s) | Scaling Efficiency | Total Time |
|---|---|---|---|---|
| 1 (4xA100) | 4 | 6,380 | 100% | 67m 12s |
| 2 (8xA100) | 8 | 12,450 | 97.6% | 34m 24s |
| 4 (16xA100) | 16 | 24,200 | 94.8% | 17m 41s |
| 8 (32xA100) | 32 | 46,800 | 91.7% | 9m 9s |

Batch processing scales nearly linearly across nodes because there is no inter-node communication — each node processes its partition independently.

Cost Optimization

The key cost metric for batch processing is dollars per million tokens:

def compute_cost_per_million_tokens(
    gpu_type: str,
    num_gpus: int,
    hourly_rate: float,  # $ per GPU-hour
    throughput: float     # tokens per second
) -> float:
    """Compute cost per million output tokens."""
    total_hourly_cost = num_gpus * hourly_rate
    tokens_per_hour = throughput * 3600
    cost_per_million = (total_hourly_cost / tokens_per_hour) * 1e6
    return cost_per_million
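Plugging the 4xA100 INT4 figures from this post ($12.00/hour for the node, 6,380 tok/s) into the formula reproduces the cost table's $0.52:

```python
hourly_cost = 12.00                      # 4 GPUs at $3.00/GPU-hour
throughput = 6_380                       # output tokens per second
tokens_per_hour = throughput * 3600      # 22,968,000 tokens/hour
cost_per_million = hourly_cost / tokens_per_hour * 1e6
# round(cost_per_million, 2) == 0.52
```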
📊 Cost per Million Output Tokens — Llama 70B Batch Inference

| Configuration | Hourly Cost | Throughput (tok/s) | $/M Tokens | vs API Pricing |
|---|---|---|---|---|
| 4xA100 FP16 | $12.00 | 5,120 | $0.65 | 4.6x cheaper |
| 4xA100 INT4 | $12.00 | 6,380 | $0.52 | 5.8x cheaper |
| 2xH100 FP8 | $10.00 | 7,840 | $0.35 | 8.6x cheaper |
| 4xH100 INT4 | $20.00 | 14,200 | $0.39 | 7.7x cheaper |
| API (typical) | --- | --- | $3.00 | baseline |

Performance

Self-hosted batch inference on Dynamo is 5-9x cheaper than API pricing for Llama 70B-class models. The break-even point for self-hosting versus API depends on utilization: at 80%+ GPU utilization (achievable with batch workloads), self-hosting wins. Below 30% utilization, API pricing is more cost-effective due to zero idle cost.

Practical Configuration Checklist

# Complete batch inference launch command
dynamo serve \
    --model meta-llama/Llama-2-70b-hf \
    --quantization gptq \
    --tensor-parallel-size 4 \
    --max-model-len 4096 \
    --max-num-seqs 256 \
    --gpu-memory-utilization 0.95 \
    --swap-space 128 \
    --disable-log-requests \
    --disable-frontend-multiprocessing \
    --mode batch \
    --batch-config dynamo_batch_config.yaml

Checklist for maximum batch throughput:

  1. Set gpu-memory-utilization to 0.95 (no headroom needed)
  2. Use INT4 quantization (GPTQ/AWQ with Marlin) for maximum KV cache
  3. Set max-num-seqs to the safe concurrent value from memory planning
  4. Disable streaming, request logging, and frontend multiprocessing
  5. Use continuous batching with chunked prefill
  6. Set preemption mode to none
  7. Pre-sort dataset by input length for balanced scheduling
  8. Allocate swap space on CPU for rare long sequences

Summary

NVIDIA Dynamo’s batch inference mode inverts the online serving optimization target: latency is irrelevant, only throughput per dollar matters. Key configuration changes include raising gpu-memory-utilization to 0.95, disabling streaming and preemption, and increasing max-num-seqs to fill GPU compute capacity. Continuous batching with chunked prefill achieves 93% GPU utilization, producing 6,380 tok/s on 4xA100 with Llama 70B INT4. Multi-node scaling is near-linear (97.6% efficiency at 2 nodes) because partitions are independent. Self-hosted batch processing costs $0.35-0.65 per million tokens, 5-9x cheaper than API pricing. The critical optimization is eliminating preemptions through conservative batch sizing based on pre-computed memory plans from dataset statistics.