About Qwen-Max
Qwen-Max, based on Qwen2.5, delivers the best inference performance among [Qwen models](/qwen), especially on complex, multi-step tasks. It is a large-scale Mixture-of-Experts (MoE) model pretrained on over 20 trillion tokens and post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). The parameter count has not been disclosed.
Specifications
- Provider: Qwen
- Context Length: 32,768 tokens
- Input Types: text
- Output Types: text
- Category: Qwen
- Added: 2/1/2025
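
For reference, a minimal sketch of querying the model through an OpenAI-compatible endpoint with the `openai` Python SDK. The base URL (Alibaba Cloud's DashScope compatible mode) and the `DASHSCOPE_API_KEY` environment variable are assumptions for illustration, not details taken from this page; substitute whatever endpoint and credentials your provider uses.

```python
# Sketch: text-in, text-out chat completion against qwen-max via an
# OpenAI-compatible endpoint. Base URL and env var name are assumptions.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed env var holding your API key
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

# Keep the prompt plus generated output within the 32,768-token context window.
response = client.chat.completions.create(
    model="qwen-max",
    messages=[
        {"role": "user", "content": "Outline a three-step plan to deduplicate a large CSV file."}
    ],
)
print(response.choices[0].message.content)
```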