Qwen2.5 32B Instruct is the instruction-tuned variant of the latest Qwen large language model series. It offers stronger instruction following, improved coding and mathematical reasoning, and robust handling of structured data and structured outputs such as JSON. It supports long-context processing up to 128K tokens and multilingual tasks across more than 29 languages. The model has 32.5 billion parameters and 64 layers, and uses a transformer architecture with RoPE, SwiGLU, RMSNorm, and attention QKV bias.
For more details, please refer to the [Qwen2.5 Blog](https://qwenlm.github.io/blog/qwen2.5/).
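As a quick illustration of the instruction-following and JSON output capabilities described above, below is a minimal sketch of querying the model through the Hugging Face Transformers chat-template API. It assumes the checkpoint is published under the Hugging Face ID `Qwen/Qwen2.5-32B-Instruct` and that enough GPU memory is available for a 32.5B-parameter model; adjust the model ID and generation settings for your own deployment.

```python
# Minimal sketch: querying Qwen2.5 32B Instruct via Hugging Face Transformers.
# Assumes the checkpoint ID "Qwen/Qwen2.5-32B-Instruct" and sufficient GPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-32B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # pick bf16/fp16 automatically where supported
    device_map="auto",    # shard across available GPUs
)

# Ask for structured JSON output to exercise the model's structured-data handling.
messages = [
    {"role": "system", "content": "You are a helpful assistant. Reply only with valid JSON."},
    {"role": "user", "content": "List three prime numbers as a JSON array under the key 'primes'."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens and decode only the newly generated text.
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(response)
```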
- Provider: Qwen (Alibaba)
- Context Length: 131,072 tokens
- Input Types: text
- Output Types: text
- Category: Qwen
- Added: 3/3/2025
Frequently Asked Questions
Common questions about Qwen2.5 32B Instruct