
Qrwkv 72B

Featherless
Text Generation
Reasoning
About Qrwkv 72B

Qrwkv-72B is a linear-attention RWKV variant of the Qwen 2.5 72B model, built to significantly reduce computational cost at scale. By replacing quadratic softmax attention with linear attention, it cuts inference compute cost by more than 1000x at scale while remaining competitive on common benchmarks such as ARC, HellaSwag, LAMBADA, and MMLU. It inherits Qwen 2.5's knowledge and language coverage (roughly 30 languages), making it well suited to efficient inference in long-context applications.
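The savings come from swapping the quadratic softmax attention for a recurrence over a fixed-size state. The sketch below (Python/NumPy) is illustrative only: the feature map and normalization are assumptions for a generic linear-attention formulation, not the exact RWKV update used by Qrwkv-72B, but it shows why per-token cost stays constant instead of growing with sequence length.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard causal softmax attention: the T x T score matrix makes
    compute and memory grow quadratically with sequence length T."""
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                  # (T, T) -- the quadratic part
    mask = np.tril(np.ones((T, T), dtype=bool))
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Generic linear-attention recurrence: a fixed-size state (a d x d matrix
    plus a d-vector) is updated once per token, so total cost grows linearly
    in T. Illustrative kernelized form, not the RWKV-specific update."""
    T, d = Q.shape
    S = np.zeros((d, d))      # running sum of phi(k_s) v_s^T
    z = np.zeros(d)           # running sum of phi(k_s), used for normalization
    out = np.zeros_like(V)
    for t in range(T):
        q, k, v = phi(Q[t]), phi(K[t]), V[t]
        S += np.outer(k, v)
        z += k
        out[t] = (q @ S) / (q @ z + 1e-6)
    return out

# Per-token work: softmax attention revisits all previous tokens (O(T^2) total),
# while the linear variant only touches the fixed-size state (O(T) total).
T, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, T, d))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```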

Specifications
Provider: Featherless
Context Length: 32,768 tokens
Input Types: text
Output Types: text
Category: Other
Added: 3/20/2025
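Given the text-in/text-out interface and 32,768-token context window, a call looks like an ordinary chat completion. A minimal sketch assuming an OpenAI-compatible endpoint; the base URL and model identifier are placeholders to verify against the provider's documentation.

```python
from openai import OpenAI

# Assumed OpenAI-compatible endpoint and model identifier -- replace with the
# values from the provider's documentation before use.
client = OpenAI(
    base_url="https://api.featherless.ai/v1",   # assumption, not confirmed by this page
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="qrwkv-72b",                          # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize RWKV linear attention in two sentences."}],
    max_tokens=512,   # prompt plus completion must fit the 32,768-token context window
)
print(response.choices[0].message.content)
```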

Use Qrwkv 72B and 200+ more models

Access all the best AI models in one platform. No API keys, no switching between apps.