About Mistral 7B Instruct v0.1
An instruction-tuned 7.3B-parameter model that outperforms Llama 2 13B on all reported benchmarks, using grouped-query attention for faster inference and sliding-window attention to handle longer sequences.
Specifications
- Provider: Mistral AI
- Context length: 8,192 tokens
- Input types: text
- Output types: text
- Category: Mistral
- Added: 9/28/2023
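For reference, below is a minimal sketch of prompting the model through the Hugging Face transformers library. The `mistralai/Mistral-7B-Instruct-v0.1` checkpoint and the generation settings are assumptions for illustration; this catalog entry does not specify an access method.

```python
# Minimal sketch: prompting Mistral 7B Instruct v0.1 via Hugging Face transformers.
# Assumes the checkpoint can be downloaded and that a GPU (or enough RAM) is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed hosting location

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The instruct variant expects the [INST] ... [/INST] chat format;
# apply_chat_template wraps the message accordingly.
messages = [{"role": "user", "content": "Summarize what sliding-window attention does."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# Generate a completion and print only the newly generated tokens.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Both input and output are plain text, matching the input/output types listed above.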