About R1 Distill Qwen 14B
DeepSeek R1 Distill Qwen 14B is a distilled large language model based on [Qwen 2.5 14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
Other benchmark results include:
- AIME 2024 pass@1: 69.7
- MATH-500 pass@1: 93.9
- CodeForces Rating: 1481
Distilling DeepSeek R1's outputs into the smaller Qwen base gives the model performance competitive with much larger frontier models.
Specifications
- Provider: DeepSeek
- Context Length: 32,768 tokens
- Input Types: text
- Output Types: text
- Category: Qwen
- Added: 1/29/2025