About Mistral 7B Instruct v0.2
A high-performing, industry-standard 7.3B-parameter model, optimized for speed and context length.
An improved version of [Mistral 7B Instruct](/models/mistralai/mistral-7b-instruct-v0.1), with the following changes:
- 32k context window (vs. 8k context in v0.1)
- Rope-theta = 1e6
- No sliding-window attention
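
As an instruct-tuned model, v0.2 expects conversations wrapped in Mistral's `[INST] ... [/INST]` template. A minimal sketch of that formatting (the `build_prompt` helper is illustrative, not part of any official SDK; in practice a tokenizer's built-in chat template handles this):

```python
def build_prompt(messages):
    """Format a chat history into the Mistral instruct prompt template.

    messages: list of {"role": "user" | "assistant", "content": str}
    User turns are wrapped in [INST] ... [/INST]; assistant turns are
    appended verbatim and closed with the end-of-sequence token.
    """
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        else:
            prompt += f" {msg['content']}</s>"
    return prompt


# Example: a single-turn user message.
print(build_prompt([{"role": "user", "content": "Hello"}]))
# → <s>[INST] Hello [/INST]
```

With a multi-turn history, each assistant reply is terminated by `</s>` before the next `[INST]` block begins, which is how the model distinguishes completed turns.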
Specifications
- Provider: Mistral AI
- Context Length: 32,768 tokens
- Input Types: text
- Output Types: text
- Category: Mistral
- Added: 12/28/2023