Qwen: Qwen3 235B A22B

Text Generation
Reasoning
About Qwen3 235B A22B

Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling.
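The thinking/non-thinking switch mentioned above can be toggled per turn with Qwen3's documented `/think` and `/no_think` soft switches appended to a user message (serving stacks also expose an `enable_thinking` flag). A minimal sketch of the soft-switch approach; the helper function here is illustrative, not part of any Qwen API:

```python
def with_mode(prompt: str, thinking: bool) -> str:
    """Append Qwen3's documented soft-switch tag to a user turn.

    In multi-turn chats the model honors the most recently seen tag,
    so each turn can independently enable or suppress the reasoning trace.
    """
    tag = "/think" if thinking else "/no_think"
    return f"{prompt} {tag}"

# Example: ask for a quick conversational answer without a reasoning trace.
msg = with_mode("Give me a one-line summary of mixture-of-experts.", thinking=False)
```

For latency-sensitive chat traffic the non-thinking mode is typically the better default, reserving `/think` for math, code, and multi-step reasoning turns.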

Specifications

- Provider: Qwen
- Context Length: 40,960 tokens
- Input Types: text
- Output Types: text
- Category: Qwen3
- Added: 4/28/2025
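The 131K figure in the description above comes from YaRN scaling of rotary position embeddings over the 32K native window. A minimal sketch of the arithmetic and the resulting config entry; the field names follow the Hugging Face `rope_scaling` convention and may differ in other serving stacks:

```python
# YaRN extends the usable context by scaling rotary position embeddings.
NATIVE_CONTEXT = 32_768     # Qwen3's native window ("32K")
EXTENDED_CONTEXT = 131_072  # YaRN-extended window ("131K")

factor = EXTENDED_CONTEXT / NATIVE_CONTEXT  # scaling factor: 4.0

# Config fragment of the shape commonly passed to a serving framework
# (field names assume the Hugging Face rope_scaling convention).
rope_scaling = {
    "rope_type": "yarn",
    "factor": factor,
    "original_max_position_embeddings": NATIVE_CONTEXT,
}
```

Note that YaRN scaling is a static setting: enabling it can slightly degrade quality on short inputs, so it is usually applied only when long-context requests are expected.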

Benchmark Performance

How Qwen3 235B A22B compares to its closest rivals across industry benchmarks

Artificial Analysis Intelligence Index
Comprehensive comparison of AI models across intelligence, price, speed, and latency

Metric: Intelligence Index

Rank  Model                          Intelligence Index
#27   Nova 2.0 Lite (medium)         58.0
#28   DeepSeek V3.1 Terminus         58.0
#29   Nova 2.0 Pro Preview (low)     58.0
#30   Qwen3 235B A22B                57.0
#31   Doubao Seed Code               57.0
#32   Grok 3 mini Reasoning (high)   57.0
#33   Apriel-v1.6-15B-Thinker        57.0
#34   Nova 2.0 Omni (medium)         56.0
