About Mixtral 8x22B (base)
Mixtral 8x22B is a large sparse mixture-of-experts (MoE) language model from Mistral AI. It has 8 experts of roughly 22 billion parameters each, and a router activates 2 experts per token, so only a fraction of the total parameters (about 39B of roughly 141B) is used for any given token.
It was released via [X](https://twitter.com/MistralAI/status/1777869263778291896).
#moe
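
To make the top-2 routing concrete, here is a minimal sketch of a sparse MoE feed-forward layer in PyTorch. It illustrates the general technique only, not Mistral AI's actual implementation: the class name, hidden sizes, and the dense per-expert loop are placeholders chosen for readability, and Mixtral's real layers use different dimensions and an optimized dispatch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Illustrative sparse MoE feed-forward layer: 8 experts, top-2 routing.
    Dimensions are placeholders, not Mixtral's real configuration."""

    def __init__(self, d_model: int = 512, d_ff: int = 2048,
                 n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). The router scores all experts for each token,
        # but only the top-2 experts are actually evaluated per token.
        logits = self.gate(x)                                  # (tokens, n_experts)
        weights, idx = torch.topk(logits, self.top_k, dim=-1)  # pick 2 experts/token
        weights = F.softmax(weights, dim=-1)                   # renormalize over the 2
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e          # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

# Tiny smoke test on random activations.
layer = Top2MoELayer()
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```

The key property this shows is why an MoE model can hold far more parameters than it spends per token: every expert's weights exist in memory, but each token's forward pass only touches the 2 experts the router selects.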
Specifications
- Provider: Mistral AI
- Context Length: 65,536 tokens
- Input Types: text
- Output Types: text
- Category: Mistral
- Added: April 10, 2024