About Codestral Mamba
A 7.3B-parameter Mamba-based model designed for code and reasoning tasks.
- Linear-time inference, allowing for theoretically infinite sequence lengths
- 256k token context window
- Optimized for quick responses, especially beneficial for code productivity
- Performs comparably to state-of-the-art transformer models on code and reasoning tasks
- Available under the Apache 2.0 license for free use, modification, and distribution
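As a hosted model, Codestral Mamba is typically reached through a chat-completions style HTTP API. The sketch below builds such a request payload; the endpoint URL and the model id `codestral-mamba-latest` are assumptions based on Mistral's public API conventions, so verify both against the current API documentation before use.

```python
import json

# Assumed endpoint for Mistral's hosted chat-completions API.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON payload for a single-turn code request."""
    return {
        "model": "codestral-mamba-latest",  # assumed model id
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

Sending this payload with an `Authorization: Bearer <API key>` header to the endpoint above would return the model's completion; only the payload construction is shown here.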
Specifications
- Provider: Mistral AI
- Context Length: 256,000 tokens
- Input Types: text
- Output Types: text
- Category: Mistral
- Added: July 19, 2024