Part of Series: Frontier Model Architectures

A 512x512 image costs 170 tokens in GPT-4o, 258 tokens in Gemini 2.0, and 1,024 tokens in Llama Vision. The difference is architectural: GPT-4o uses a learned vision encoder that extracts compressed visual features, Gemini 2.0 uses native multimodal training where images are first-class tokens from pretraining, and Llama Vision uses CLIP patches with minimal compression. When you process 10,000 images per hour, the 6x token efficiency gap translates directly to compute cost and latency.
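The arithmetic behind that claim can be sketched directly, using the per-image token counts quoted above (a back-of-envelope estimate, not measured billing):

```python
# Visual tokens generated per hour at 10,000 images/hour, using the
# per-image counts for a 512x512 input quoted above.
TOKENS_PER_IMAGE = {"GPT-4o": 170, "Gemini 2.0": 258, "Llama Vision": 1024}

def visual_tokens_per_hour(images_per_hour: int) -> dict:
    return {m: t * images_per_hour for m, t in TOKENS_PER_IMAGE.items()}

hourly = visual_tokens_per_hour(10_000)
# GPT-4o: 1.7M tokens/hour vs Llama Vision: 10.24M tokens/hour -> the ~6x gap
```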

Architecture Overview

Each model takes a fundamentally different approach to multimodal fusion:

GPT-4o:     Early Fusion — visual tokens concatenated with text tokens
            All modalities share the same transformer backbone

Gemini 2.0: Early Fusion — native multimodal from pretraining
            Vision, audio, video encoded then interleaved with text

Claude 3.5: Cross-Attention — visual features injected at select layers
            Separate vision encoder, features attend into LLM layers

Llama 3.2:  Cross-Attention — adapter layers between ViT and LLM
            Open-weight, published architecture details

Why Architecture Matters for Inference

The fusion strategy determines:

  1. Token count: Early fusion models consume context window with visual tokens. Cross-attention models keep visual features separate.
  2. KV cache cost: Visual tokens in early fusion occupy KV cache entries. Cross-attention models store visual features in a separate cache.
  3. Scaling with image count: Multi-image inputs scale linearly in context consumption for early fusion but can share encoder computation for cross-attention.
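The third point is the one that bites in production. A minimal sketch of how context consumption scales with image count under each fusion strategy (token counts are illustrative, not any model's real figures):

```python
# Context-window consumption as image count grows.
# Early fusion: every visual token occupies a context position.
# Cross-attention: visual features live in a separate cache.

def context_used(fusion: str, num_images: int,
                 tokens_per_image: int, text_tokens: int) -> int:
    if fusion == "early":
        return text_tokens + num_images * tokens_per_image
    elif fusion == "cross":
        return text_tokens  # visual features bypass the main sequence
    raise ValueError(fusion)

# 8 images at ~1,600 visual tokens each, 500 text tokens:
early = context_used("early", 8, 1600, 500)   # 13,300 positions
cross = context_used("cross", 8, 1600, 500)   # 500 positions
```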

GPT-4o Architecture

GPT-4o (Omni) processes all modalities through a single transformer. Based on public information and inference behavior:

# GPT-4o multimodal processing (inferred architecture)
import torch

class GPT4oMultimodal:
    def __init__(self):
        # Vision encoder: proprietary, likely CLIP-variant
        self.vision_encoder = VisionEncoder(
            patch_size=14,
            resolution_tiers=[512, 768, 1024, 2048],
            output_dim=4096  # Projected to match LLM hidden dim
        )
        self.audio_encoder = WhisperVariant(output_dim=4096)
        self.transformer = GPT4Transformer(
            hidden_dim=4096,  # Estimated
            num_layers=120,   # Estimated
            num_heads=64
        )

    def encode_image(self, image) -> torch.Tensor:
        # Dynamic resolution: higher res = more tokens
        tiles = self.tile_image(image)  # Split into 512x512 tiles
        tokens_per_tile = ((512 + 13) // 14) ** 2  # ceil(512/14) = 37 -> 1,369 tokens
        # Plus a low-res overview: 256 tokens

        visual_tokens = []
        for tile in tiles:
            features = self.vision_encoder(tile)
            visual_tokens.append(features)

        # Concatenate with separator tokens
        return torch.cat(visual_tokens, dim=0)
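The tiling arithmetic above can be packaged into a small estimator. This is a hypothetical reconstruction: it assumes ceil-division into 512px tiles at 1,369 tokens each plus a 256-token overview, which matches the square-image configurations in the token budget table; the real GPT-4o pipeline rescales images before tiling, so non-square results may differ:

```python
import math

# Hypothetical GPT-4o visual-token estimator: ceil-divide each side into
# 512px tiles, 1,369 tokens per tile (37x37 patches of 14px on a padded
# 518px tile), plus a 256-token low-resolution overview.
TOKENS_PER_TILE = math.ceil(512 / 14) ** 2   # 37**2 = 1,369
OVERVIEW_TOKENS = 256

def gpt4o_visual_tokens(width: int, height: int) -> int:
    tiles = math.ceil(width / 512) * math.ceil(height / 512)
    return tiles * TOKENS_PER_TILE + OVERVIEW_TOKENS

# gpt4o_visual_tokens(512, 512)   -> 1,625
# gpt4o_visual_tokens(1024, 1024) -> 5,732
# gpt4o_visual_tokens(2048, 2048) -> 22,160
```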

Token budget analysis for GPT-4o:

📊

GPT-4o Visual Token Count by Image Configuration

| Image Size | Tiles | Tokens/Tile | Overview Tokens | Total Visual Tokens |
|---|---|---|---|---|
| 512x512 (low) | 1 | 1,369 | 256 | 1,625 |
| 1024x1024 | 4 | 1,369 | 256 | 5,732 |
| 2048x2048 | 16 | 1,369 | 256 | 22,160 |
| 1920x1080 (HD) | 8 | 1,369 | 256 | 11,208 |
| Multiple (4 images) | 16+ | 1,369 | 1,024 | 22,928+ |
ℹ️ Note

GPT-4o’s early fusion means every visual token occupies one position in the 128K context window and generates a full KV cache entry. A 2048x2048 image consumes 22,160 tokens — equivalent to roughly 44 pages of text. This directly reduces the remaining context available for conversation.

Gemini 2.0 Architecture

Gemini was designed as natively multimodal from pretraining, processing images, video, and audio alongside text:

# Gemini 2.0 multimodal processing (inferred from published papers)
from torch import nn

class GeminiMultimodal:
    def __init__(self):
        # SigLIP-based vision encoder
        self.vision_encoder = SigLIPEncoder(
            variant="SO400M",
            output_dim=1152,
            num_registers=4  # Vision register tokens
        )
        self.vision_projector = nn.Linear(1152, 8192)

        # Audio: USM-based encoder
        self.audio_encoder = USMEncoder(output_dim=8192)

        # Video: frame sampling + temporal encoding
        self.video_processor = VideoFrameSampler(max_fps=1)

        # Core transformer: MoE
        self.transformer = GeminiTransformer(
            hidden_dim=8192,
            num_layers=64,   # Dense equivalent
            num_experts=16,
            active_experts=2,
            num_heads=64
        )

    def process_multimodal(self, text, images, video, audio):
        text_tokens = self.tokenize(text)

        visual_tokens = []
        for img in images:
            features = self.vision_encoder(img)
            projected = self.vision_projector(features)
            visual_tokens.extend(projected)

        if video is not None:
            frames = self.video_processor.sample(video)
            for frame in frames:
                features = self.vision_encoder(frame)
                visual_tokens.extend(self.vision_projector(features))

        audio_tokens = []
        if audio is not None:
            audio_tokens = self.audio_encoder(audio)

        # Interleave all modalities
        combined = self.interleave(text_tokens, visual_tokens, audio_tokens)
        return self.transformer(combined)

Gemini’s MoE backbone means visual tokens activate the same expert routing as text tokens. Different experts may specialize in visual vs textual processing, but the routing is learned during pretraining.
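A toy version of that routing step, assuming a standard top-2 softmax router (this is the generic MoE mechanism, not Gemini's actual router):

```python
import math

# Minimal top-2 expert routing sketch: each token's router logits select
# 2 of the experts. Visual and text tokens share the same routing function,
# so any per-modality specialization is learned, not hardwired.
def top2_route(logits: list[float]) -> tuple[list[int], list[float]]:
    # Pick the two highest-scoring experts, softmax-normalize their weights.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:2]
    exp = [math.exp(logits[i]) for i in top]
    z = sum(exp)
    return top, [e / z for e in exp]

experts, weights = top2_route([0.1, 2.0, -1.0, 1.5])
# experts -> [1, 3]; weights sum to 1.0, largest first
```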

Gemini Token Efficiency

Gemini uses more aggressive token compression for images:

# Gemini's visual token compression
# Standard ViT: 224x224 / 14x14 = 256 tokens
# Gemini appears to use token merging:
# - Initial: 256 tokens per 224x224 patch
# - Merge redundant tokens: ~128 tokens per patch
# - For 1024x1024: 4 patches * 128 = 512 tokens

# Compared to GPT-4o for same 1024x1024 image:
# GPT-4o: ~5,732 tokens
# Gemini: ~512 tokens (estimated)
# 11x more token-efficient
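The merge step can be illustrated with the simplest possible scheme, averaging adjacent patch embeddings to halve the token count. Gemini's actual compression method is unpublished; this is only a stand-in:

```python
# Toy token merging: halve the token count by averaging each adjacent pair
# of patch embeddings (256 -> 128 per 224x224 patch in the numbers above).
def merge_adjacent(tokens: list[list[float]]) -> list[list[float]]:
    assert len(tokens) % 2 == 0
    merged = []
    for i in range(0, len(tokens), 2):
        a, b = tokens[i], tokens[i + 1]
        merged.append([(x + y) / 2 for x, y in zip(a, b)])
    return merged

patch_tokens = [[float(i)] * 4 for i in range(256)]  # 256 fake embeddings, dim 4
compressed = merge_adjacent(patch_tokens)
# len(compressed) == 128
```

Published merging methods (e.g. ToMe-style bipartite matching) merge by similarity rather than position, but the token-count arithmetic is the same.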

Claude 3.5 Architecture

Claude uses a cross-attention architecture that keeps visual features separate from the main token sequence:

# Claude 3.5 multimodal (inferred from behavior and Anthropic research)
import torch

class ClaudeMultimodal:
    def __init__(self):
        self.vision_encoder = ViTLargeEncoder(
            patch_size=14,
            resolution=1024,
            output_dim=1024
        )
        self.vision_projector = MLPProjector(
            input_dim=1024,
            hidden_dim=4096,
            output_dim=8192  # Match LLM hidden dim
        )

        self.transformer = ClaudeTransformer(
            hidden_dim=8192,
            num_layers=80,  # Estimated
            num_heads=64,
            # Cross-attention layers at every 4th layer
            cross_attn_interval=4,
            cross_attn_heads=16
        )

    def forward(self, text_tokens, images):
        # Encode images separately
        visual_features = []
        for img in images:
            features = self.vision_encoder(img)
            projected = self.vision_projector(features)
            visual_features.append(projected)

        # Stack visual features
        visual_context = torch.cat(visual_features, dim=0)
        # Shape: [total_visual_tokens, hidden_dim]

        # Text tokens go through main transformer
        # Every 4th layer has cross-attention to visual_context
        hidden = self.transformer.embed(text_tokens)

        for i, layer in enumerate(self.transformer.layers):
            hidden = layer.self_attention(hidden)

            if i % 4 == 3:  # Cross-attention every 4 layers
                hidden = layer.cross_attention(
                    query=hidden,
                    key=visual_context,
                    value=visual_context
                )

            hidden = layer.mlp(hidden)

        return hidden

The cross-attention design means visual tokens do NOT consume positions in the context window. A 1024x1024 image generates visual features that are accessed via cross-attention, but the 200K token context remains fully available for text.

Performance

Cross-attention has a distinct inference cost profile. The visual features are encoded once during prefill. During decode, each cross-attention layer adds a small attention computation, O(V) where V is the number of visual tokens, but no visual tokens occupy the KV cache of the self-attention layers. This makes Claude more efficient per decode step when images are present.
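That cost profile can be made concrete using the rough ~4*d FLOPs per cached token per attending layer approximation. All parameter values here are illustrative, not the models' real configurations:

```python
# Per-decode-step attention FLOPs under each design (rough approximation:
# ~4 * hidden_dim FLOPs per cached token per layer that attends to it).
def decode_attn_flops(cached_text: int, visual: int, num_layers: int,
                      hidden_dim: int, cross_layers: int,
                      early_fusion: bool) -> int:
    if early_fusion:
        # Visual tokens sit in every self-attention layer's KV cache.
        return (cached_text + visual) * num_layers * hidden_dim * 4
    # Cross-attention: self-attention sees only text; visual features are
    # attended only at the cross-attention layers.
    return (cached_text * num_layers * hidden_dim * 4
            + visual * cross_layers * hidden_dim * 4)

early = decode_attn_flops(768, 4096, 80, 8192, 0, True)
cross = decode_attn_flops(768, 4096, 80, 8192, 20, False)
# cross < early: the visual tokens touch 20 layers instead of all 80
```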

Llama 3.2 Vision Architecture

Llama 3.2 Vision is the only model of the four with fully published architecture details:

# Llama 3.2 Vision - exact architecture from Meta's paper
from torch import nn

class Llama32Vision:
    def __init__(self):
        # Vision encoder: ViT-H/14
        self.vision_encoder = ViTHuge(
            patch_size=14,
            image_size=560,      # 560x560 input
            hidden_dim=1280,
            num_layers=32,
            num_heads=16
        )
        # Cross-attention projector
        self.vision_projector = nn.Sequential(
            nn.Linear(1280, 4096),
            nn.GELU(),
            nn.Linear(4096, 4096)
        )

        # Core LLM with cross-attention adapters
        # 11B and 90B variants
        self.llm = LlamaTransformer(
            hidden_dim=4096,     # 11B variant
            num_layers=32,
            num_heads=32,
            # Cross-attention inserted at layers [3, 8, 13, 18, 23, 28]
            cross_attn_layers=[3, 8, 13, 18, 23, 28]
        )

    def forward(self, text_input_ids, images):
        # Encode image
        # 560/14 = 40, so 40*40 = 1600 patches
        visual_features = self.vision_encoder(images)
        # Shape: [batch, 1600, 1280]

        visual_projected = self.vision_projector(visual_features)
        # Shape: [batch, 1600, 4096]

        # LLM forward with cross-attention
        hidden = self.llm.embed(text_input_ids)

        for i, layer in enumerate(self.llm.layers):
            hidden = layer.self_attention(hidden)

            if i in self.llm.cross_attn_layers:
                hidden = layer.cross_attention(
                    query=hidden,
                    key_value=visual_projected
                )

            hidden = layer.mlp(hidden)

        return self.llm.lm_head(hidden)
📊

Visual Tokens per Image by Model

| Model | Architecture | 1024x1024 Tokens | Context Used | KV Cache Impact |
|---|---|---|---|---|
| GPT-4o | Early Fusion | ~5,732 | 5,732 of 128K | Full KV entries |
| Gemini 2.0 | Early Fusion | ~512 | 512 of 2M | Full KV entries |
| Claude 3.5 | Cross-Attention | ~4,096 | 0 of 200K | Separate cache |
| Llama 3.2 (90B) | Cross-Attention | 1,600 | 0 of 128K | Separate cache |
| Llama 3.2 (11B) | Cross-Attention | 1,600 | 0 of 128K | Separate cache |

Inference Cost Comparison

The architectural differences translate directly to inference costs:

# Cost model for multimodal inference
def compute_multimodal_cost(
    model_type: str,
    text_tokens: int,
    visual_tokens: int,
    output_tokens: int,
    num_layers: int,
    hidden_dim: int,
    num_kv_heads: int,
    head_dim: int,
    cross_attn_layers: int = 0
) -> dict:
    # KV cache per token per layer (bytes, FP16)
    kv_per_token = num_kv_heads * head_dim * 2 * 2  # K+V

    if model_type == "early_fusion":
        # Visual tokens are in KV cache
        total_kv_tokens = text_tokens + visual_tokens + output_tokens
        kv_cache_bytes = total_kv_tokens * kv_per_token * num_layers

        # Prefill FLOPs: all tokens through all layers
        prefill_tokens = text_tokens + visual_tokens
        prefill_flops = prefill_tokens * num_layers * hidden_dim ** 2 * 12

        # Decode FLOPs per step: attention over all cached tokens
        decode_attn_flops = (total_kv_tokens * num_layers *
                            hidden_dim * 4)

    elif model_type == "cross_attention":
        # Visual tokens NOT in self-attention KV cache
        self_attn_tokens = text_tokens + output_tokens
        kv_cache_bytes = self_attn_tokens * kv_per_token * num_layers

        # Plus cross-attention KV cache (only at cross-attn layers)
        cross_kv = visual_tokens * kv_per_token * cross_attn_layers
        kv_cache_bytes += cross_kv

        # Prefill: text through all layers + cross-attn
        prefill_flops = text_tokens * num_layers * hidden_dim ** 2 * 12
        prefill_flops += (text_tokens * visual_tokens *
                         cross_attn_layers * hidden_dim * 4)

        # Decode: self-attention + cross-attention
        decode_attn_flops = (self_attn_tokens * num_layers *
                            hidden_dim * 4)
        decode_attn_flops += (visual_tokens * cross_attn_layers *
                             hidden_dim * 4)

    else:
        raise ValueError(f"unknown model_type: {model_type}")

    return {
        "kv_cache_mb": kv_cache_bytes / 1e6,
        "prefill_tflops": prefill_flops / 1e12,
        "decode_attn_gflops": decode_attn_flops / 1e9
    }
📊

Inference Cost for 1 Image + 512 Text Tokens + 256 Output Tokens

| Model | KV Cache (MB) | Prefill Cost (relative) | Decode Cost/Step (relative) |
|---|---|---|---|
| GPT-4o (early, ~5K vis) | 198 | 1.00x | 1.00x |
| Gemini 2.0 (early, ~512 vis) | 32 | 0.18x | 0.18x |
| Claude 3.5 (cross, ~4K vis) | 28 + 45 cross | 0.65x | 0.42x |
| Llama 90B (cross, 1600 vis) | 24 + 12 cross | 0.52x | 0.35x |
| Text-only baseline (512 tok) | 16 | 0.09x | 0.09x |
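A rough KV-cache sizing sketch shows where the gap in the first column comes from. The config values (8 KV heads, head_dim 128, FP16) are assumptions, not confirmed for any of these models, and the sketch aims to reproduce the ordering rather than the exact figures:

```python
# KV-cache sizing (FP16, K and V). Assumed config: 8 KV heads (GQA),
# head_dim 128 -- illustrative values only.
def kv_cache_mb(tokens: int, num_layers: int, num_kv_heads: int = 8,
                head_dim: int = 128) -> float:
    bytes_per_token_per_layer = num_kv_heads * head_dim * 2 * 2  # K+V, 2B each
    return tokens * bytes_per_token_per_layer * num_layers / 1e6

# Early fusion: visual tokens join the self-attention cache at every layer.
early = kv_cache_mb(512 + 5732 + 256, num_layers=120)
# Cross-attention: text-only self-attention cache plus a cross-attn cache
# that exists only at the (assumed 20) cross-attention layers.
cross = kv_cache_mb(512 + 256, num_layers=80) + kv_cache_mb(4096, num_layers=20)
# early comes out several times larger than cross
```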

Video Understanding Comparison

Video extends the token count problem:

📊

Token Cost for 30-Second Video (1 FPS, 30 Frames)

| Model | Tokens/Frame | Total Video Tokens | Context Usage | Feasibility |
|---|---|---|---|---|
| GPT-4o | ~1,625 | ~48,750 | 38% of 128K | Tight |
| Gemini 2.0 | ~128 | ~3,840 | 0.2% of 2M | Easy |
| Claude 3.5 | ~4,096 | ~122,880 | 0% context used | Feasible |
| Llama 3.2 90B | 1,600 | 48,000 | 0% context used | Feasible |
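The totals above reduce to one multiplication; a sketch using the per-frame counts:

```python
# Video token budget: total visual tokens = sampled frames * tokens-per-frame,
# with the per-frame counts quoted above (1 FPS sampling).
TOKENS_PER_FRAME = {
    "GPT-4o": 1625,
    "Gemini 2.0": 128,
    "Claude 3.5": 4096,
    "Llama 3.2 90B": 1600,
}

def video_tokens(model: str, seconds: int, fps: float = 1.0) -> int:
    return int(seconds * fps) * TOKENS_PER_FRAME[model]

totals = {m: video_tokens(m, 30) for m in TOKENS_PER_FRAME}
# {'GPT-4o': 48750, 'Gemini 2.0': 3840, 'Claude 3.5': 122880, 'Llama 3.2 90B': 48000}
```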

Gemini’s 2M context window and aggressive token compression make it the most capable for long video understanding. GPT-4o’s early fusion means a 30-second video at full resolution consumes nearly 40% of the context window.

Remaining Context After 30s Video

  GPT-4o:         79,250 of 128K
  Gemini 2.0:     1,996,160 of 2M
  Claude 3.5:     200,000 of 200K (visual features held in a separate cache)
  Llama 3.2 90B:  128,000 of 128K (visual features held in a separate cache)

Audio Processing Comparison

# Audio modality support as of 2026
audio_support = {
    "GPT-4o": {
        "input": True,
        "output": True,   # Can generate speech
        "encoder": "Whisper-variant",
        "native": True,    # Audio tokens in same transformer
        "latency": "real-time capable"
    },
    "Gemini 2.0": {
        "input": True,
        "output": True,   # Can generate speech
        "encoder": "USM-based",
        "native": True,
        "latency": "real-time capable"
    },
    "Claude 3.5": {
        "input": False,    # No native audio input
        "output": False,
        "encoder": None,
        "native": False,
        "latency": "N/A"
    },
    "Llama 3.2": {
        "input": False,    # Vision only
        "output": False,
        "encoder": None,
        "native": False,
        "latency": "N/A"
    }
}
ℹ️ Note

As of early 2026, only GPT-4o and Gemini 2.0 support native audio input and output. Claude and Llama process audio through separate speech-to-text pipelines before feeding text to the LLM. This means GPT-4o and Gemini can reason about tone, emphasis, and acoustic features directly, while Claude and Llama are limited to the information preserved in the transcript.

Practical Serving Cost Comparison

For a production deployment processing 1 million multimodal requests per day:

📊

Daily Serving Cost for 1M Multimodal Requests (1 image + 500 text + 200 output each)

| Model | Self-Hosted GPUs | GPU Cost/Day | API Cost/Day | Cheapest Option |
|---|---|---|---|---|
| GPT-4o | N/A (API only) | --- | $3,500 | API: $3,500 |
| Gemini 2.0 | N/A (API only) | --- | $1,200 | API: $1,200 |
| Claude 3.5 | N/A (API only) | --- | $2,800 | API: $2,800 |
| Llama 3.2 90B | 8xA100 | $576 | --- | Self-host: $576 |
| Llama 3.2 11B | 1xA100 | $72 | --- | Self-host: $72 |

Daily Cost per 1M Multimodal Requests (USD)

  Llama 11B (self-hosted):  $72
  Llama 90B (self-hosted):  $576
  Gemini API:               $1,200
  Claude API:               $2,800
  GPT-4o API:               $3,500
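One number worth deriving from these figures: the daily volume at which self-hosting Llama 3.2 90B undercuts paying per request for the priciest API. A sketch using the cost figures above (illustrative, not current price quotes):

```python
# Break-even volume: fixed self-hosting cost vs per-request API pricing,
# using the figures from the serving-cost comparison above.
GPU_COST_PER_DAY = 576.0           # Llama 3.2 90B on 8xA100, self-hosted
API_COST_PER_REQUEST = 3500 / 1e6  # GPT-4o API: $3,500 per 1M requests

def breakeven_requests_per_day() -> int:
    return int(GPU_COST_PER_DAY / API_COST_PER_REQUEST)

# ~164,571 requests/day: above this, the fixed GPU cost wins
# (assuming the 8xA100 node can actually sustain that throughput)
```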

Architectural Trade-offs Summary

Early Fusion (GPT-4o, Gemini):
  + Unified representation: model can jointly reason across modalities
  + Simplest architecture: one transformer for everything
  - Visual tokens consume context window
  - KV cache grows with visual input
  - Expensive for multi-image / video inputs

Cross-Attention (Claude, Llama):
  + Visual features don't consume context window
  + KV cache for self-attention is text-only sized
  + Efficient for dense visual inputs (many images)
  - Separate encoder adds latency during prefill
  - Cross-attention layers add parameters
  - Visual reasoning limited to cross-attention capacity

MoE + Early Fusion (Gemini):
  + Token compression reduces visual token count dramatically
  + Massive context window (2M) handles any input combination
  + Experts can specialize per modality
  - Requires enormous compute budget for pretraining
  - MoE routing adds complexity for serving

Summary

Frontier multimodal models in 2026 split into two architectural camps: early fusion (GPT-4o, Gemini) and cross-attention (Claude, Llama). Early fusion concatenates visual tokens with text in a single transformer, providing unified reasoning but consuming context window proportionally — GPT-4o uses approximately 5,700 tokens per 1024x1024 image while Gemini compresses to approximately 512 through token merging. Cross-attention keeps visual features in a separate cache accessed at specific transformer layers, preserving the full text context window but adding cross-attention compute at 6-20 layers. For inference cost, Gemini’s aggressive compression and 2M context make it the most efficient for multi-image and video workloads. For self-hosted deployments, Llama 3.2 Vision’s published architecture enables optimized serving at 5-50x lower cost than API pricing, with 1,600 visual tokens per image through its cross-attention design.