Eagle 7B is trained on 1.1 trillion tokens across 100+ world languages (70% English, 15% multilingual, 15% code).
- Built on the [RWKV-v5](/models?q=rwkv) architecture (a linear transformer with 10-100x+ lower inference cost)
- Ranks as the world's greenest 7B model (per token)
- Outperforms all 7B-class models on multilingual benchmarks
- Approaches Falcon (1.5T), LLaMA2 (2T), and Mistral (>2T?) performance on English evals
- Trades blows with MPT-7B (1T) on English evals
- All while being an ["Attention-Free Transformer"](https://www.isattentionallyouneed.com/) (see the sketch below)
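The "attention-free" and linear-cost claims come from RWKV replacing softmax attention with a fixed-size recurrent state. Below is a minimal, single-head NumPy sketch of the RWKV-v5-style time-mixing recurrence, under stated simplifications: it omits LayerNorm, gating, token-shift, and multi-head handling, and the variable names are illustrative rather than taken from the reference implementation.

```python
import numpy as np

def rwkv5_time_mix_step(r, k, v, state, w, u):
    """One recurrent step of a simplified, single-head RWKV-v5 time-mixing.

    r, k, v : (d,) receptance/key/value vectors for the current token
    state   : (d, d) decayed running sum of outer(k_i, v_i) from past tokens
    w       : (d,) per-channel decay factors in (0, 1)
    u       : (d,) per-channel "bonus" applied only to the current token
    Returns the token output (d,) and the updated state.
    """
    kv = np.outer(k, v)                  # rank-1 contribution of this token
    out = r @ (u[:, None] * kv + state)  # current token gets the u bonus
    state = w[:, None] * state + kv      # decay old state, fold in new token
    return out, state

# Toy usage: each token costs O(d^2) regardless of how long the sequence is.
d = 4
rng = np.random.default_rng(0)
state = np.zeros((d, d))
w = np.full(d, 0.9)  # decay
u = np.full(d, 0.5)  # bonus
for _ in range(10):
    r, k, v = rng.standard_normal((3, d))
    y, state = rwkv5_time_mix_step(r, k, v, state, w, u)
```

Because the state is a fixed (d, d) matrix no matter how many tokens precede it, per-token inference cost stays constant instead of growing with context length, which is where the quoted 10-100x+ inference savings over standard attention comes from.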
Eagle 7B models are provided for free by [Recursal.AI](https://recursal.ai) for the beta period, until the end of March 2024.
Find out more [here](https://blog.rwkv.com/p/eagle-7b-soaring-past-transformers)
| Field | Value |
| --- | --- |
| Provider | Recursal |
| Context Length | 10,000 tokens |
| Input Types | text |
| Output Types | text |
| Category | RWKV |
| Added | 1/29/2024 |
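A hypothetical way to query the model given the spec above, written against an OpenAI-compatible chat-completions endpoint. The base URL, the `recursal/eagle-7b` model slug, and the auth header are assumptions for illustration; check the provider page for the exact values.

```python
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},  # placeholder key
    json={
        "model": "recursal/eagle-7b",  # assumed slug; verify on the model page
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 256,  # prompt + completion must fit the 10,000-token context
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```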