Meta: Llama 4 Scout

Text Generation
Vision
About Llama 4 Scout

Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a total of 109B. It supports native multimodal input (text and image) and multilingual output (text and code) across 12 supported languages. Designed for assistant-style interaction and visual reasoning, Scout draws on a pool of 16 experts, of which only a subset is active for any given token, and supports a context length of up to 10 million tokens; its training corpus spans roughly 40 trillion tokens.
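
To make the active-versus-total parameter split concrete, here is a toy mixture-of-experts feed-forward layer in PyTorch: every token passes through a shared expert plus one routed expert picked from a pool of 16, so only a fraction of the layer's parameters participate in any single forward pass. The dimensions and the exact routing scheme are illustrative assumptions, not the production architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy MoE feed-forward layer: a shared expert plus top-1 routing over a
    small expert pool. Hypothetical sizes; for illustration only."""

    def __init__(self, d_model: int = 64, d_ff: int = 256, n_experts: int = 16):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each token against the experts
        self.shared = nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)          # routing probabilities per token
        top_p, top_idx = gate.max(dim=-1)                 # top-1 expert choice per token
        out = self.shared(x)                              # shared expert sees every token
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():                                # routed expert sees only its tokens
                out[mask] = out[mask] + top_p[mask, None] * expert(x[mask])
        return out

x = torch.randn(8, 64)                                    # 8 tokens, hypothetical hidden size 64
print(ToyMoELayer()(x).shape)                             # torch.Size([8, 64])
```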

Built for high efficiency and local or commercial deployment, Llama 4 Scout uses early fusion for seamless modality integration and is instruction-tuned for multilingual chat, captioning, and image-understanding tasks. Released under the Llama 4 Community License, it has a training-data cutoff of August 2024 and launched publicly on April 5, 2025.
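
For a sense of how the instruction-tuned checkpoint is typically called for image-plus-text chat, the sketch below uses the Hugging Face transformers library. It assumes the public checkpoint ID meta-llama/Llama-4-Scout-17B-16E-Instruct, a transformers version with Llama 4 support, accepted license access on the Hub, and enough GPU memory for the 109B-parameter weights; treat it as a starting point rather than a verified recipe.

```python
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration

# Assumed checkpoint ID; requires accepting the Llama 4 Community License on the Hub.
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 weights; still needs multi-GPU-scale memory
    device_map="auto",
)

# One user turn mixing an image with a text question (early-fusion multimodal input).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.png"},  # hypothetical image URL
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)[0])
```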

Specifications

Provider: Meta
Context Length: 327,680 tokens
Input Types: text, image
Output Types: text
Category: Llama4
Added: 4/5/2025

Benchmark Performance

How Llama 4 Scout compares to its closest rivals across industry benchmarks

SWE-Bench Verified
Evaluates AI ability to resolve real GitHub issues from Python repositories

Metric: % Resolved

#21 gpt-oss-120b: 26.0
#25 Gemini 2.0 Flash: 13.5
#26 Llama 4 Scout (this model): 9.1
#27 Qwen2.5-Coder 32B Instruct: 9.0
Artificial Analysis Intelligence Index
Comprehensive comparison of AI models across intelligence, price, speed, and latency

Metric: Intelligence Index

#120 Solar Pro 2: 30.0
#121 Qwen3 Omni 30B A3B: 30.0
#123 Mistral Small 3.2: 29.0
#124 Ministral 8B (Dec '25): 28.0
#125 Llama 4 Scout (this model): 28.0
#126 Llama 3.1 405B: 28.0
#127 Llama 3.3 70B: 28.0
#128 Devstral Medium: 28.0
#129 Ling-mini-2.0: 28.0
