About Qwen3.5-Flash
The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that combines a linear attention mechanism with a sparse mixture-of-experts (MoE) design, yielding higher inference efficiency. Compared with the Qwen3 series, these models deliver a marked leap in performance on both pure-text and multimodal tasks, offering fast response times while balancing inference speed against overall quality.
Specifications
- Provider: Qwen
- Context Length: 1,000,000 tokens
- Input Types: text, image, video
- Output Types: text
- Category: Qwen3
- Added: 2/25/2026
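
The spec above lists text, image, and video inputs. As a minimal sketch, a multimodal request to such a model is commonly expressed in the OpenAI-compatible chat-completions format, where a single user message mixes text and image parts. The model id, prompt, and image URL below are placeholders, not values taken from this page:

```python
# Hypothetical sketch: building a multimodal chat request payload in the
# common OpenAI-compatible format. Model id, prompt, and image URL are
# placeholders; the actual endpoint and model name may differ.
import json


def build_multimodal_request(model: str, prompt: str, image_url: str) -> dict:
    """Return a chat-completions payload mixing text and image input."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


payload = build_multimodal_request(
    model="qwen3.5-flash",                      # placeholder model id
    prompt="Describe this image.",
    image_url="https://example.com/photo.png",  # placeholder image
)
print(json.dumps(payload, indent=2))
```

The payload itself carries no model-specific logic; only the `model` field and the served endpoint would select this model in practice.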