DeepSeek: DeepSeek R1 Zero

DeepSeek
Text Generation
Reasoning
About DeepSeek R1 Zero

DeepSeek-R1-Zero is a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step. It has 671B total parameters, of which 37B are active per inference pass.

It demonstrates remarkable performance on reasoning tasks: through RL alone, numerous powerful and interesting reasoning behaviors emerged naturally in DeepSeek-R1-Zero.

However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. See [DeepSeek R1](/deepseek/deepseek-r1) for the follow-up model that incorporates SFT to address these issues.

Specifications

Provider: DeepSeek
Context Length: 163,840 tokens
Input Types: text
Output Types: text
Category: Other
Added: 3/6/2025
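Models like this are commonly accessed through an OpenAI-compatible chat-completions endpoint. A minimal sketch of building such a request body, assuming (hypothetically) that the platform exposes the model under the slug `deepseek/deepseek-r1-zero` following the URL scheme above:

```python
import json

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build a chat-completions request body for DeepSeek-R1-Zero.

    The model slug "deepseek/deepseek-r1-zero" is an assumption based on
    this page's URL scheme, not a confirmed identifier.
    """
    return {
        "model": "deepseek/deepseek-r1-zero",
        "messages": [{"role": "user", "content": prompt}],
        # The model's context window is 163,840 tokens; keep the prompt
        # plus max_tokens within that budget.
        "max_tokens": max_tokens,
    }

body = build_request("Prove that the square root of 2 is irrational.")
print(json.dumps(body, indent=2))
```

The actual endpoint URL and authentication scheme depend on the serving platform, so only the request payload is sketched here.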
