DeepSeek: DeepSeek V3.2

About DeepSeek V3.2

DeepSeek-V3.2 is a large language model designed to harmonize high computational efficiency with strong reasoning and agentic tool-use performance. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism that reduces training and inference cost while preserving quality in long-context scenarios. A scalable reinforcement learning post-training framework further improves reasoning, with reported performance in the GPT-5 class, and the model has demonstrated gold-medal results on the 2025 IMO and IOI. V3.2 also uses a large-scale agentic task synthesis pipeline to better integrate reasoning into tool-use settings, boosting compliance and generalization in interactive environments.

Users can control reasoning behaviour with the `enabled` boolean of the `reasoning` parameter, as shown in the sketch below. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
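The sketch below shows one way to toggle reasoning for this model through OpenRouter's chat completions endpoint. It assumes the model slug `deepseek/deepseek-v3.2` and an API key in the `OPENROUTER_API_KEY` environment variable; check the model page and docs for the exact identifier and defaults.

```python
# Minimal sketch: toggling reasoning for DeepSeek V3.2 via OpenRouter.
# Assumptions: model slug "deepseek/deepseek-v3.2" and the environment
# variable OPENROUTER_API_KEY holding a valid key.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-v3.2",  # assumed slug; verify on the model page
        "messages": [{"role": "user", "content": "What is 27 * 43?"}],
        # The `reasoning` object's `enabled` boolean switches reasoning on or off.
        "reasoning": {"enabled": True},
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Setting `"enabled": False` requests a response without reasoning tokens, which trades some answer quality on hard problems for lower latency and cost.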

Specifications

- Provider: DeepSeek
- Context Length: 163,840 tokens
- Input Types: text
- Output Types: text
- Category: DeepSeek
- Added: 12/1/2025
