About OpenChat 3.5 7B
OpenChat is a family of open-source language models fine-tuned with C-RLFT (Conditioned Reinforcement Learning Fine-Tuning), a strategy inspired by offline reinforcement learning. The models are trained on mixed-quality data without preference labels.
- For OpenChat fine-tuned on Mistral 7B, check out [OpenChat 7B](/models/openchat/openchat-7b).
- For OpenChat fine-tuned on Llama 3 8B, check out [OpenChat 8B](/models/openchat/openchat-8b).
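The core idea behind C-RLFT can be illustrated in a few lines: rather than requiring pairwise preference labels, each training example carries a coarse source-quality class, the prompt is conditioned on that class, and the supervised loss is weighted by a per-class reward. The class names, tag format, and reward values below are illustrative assumptions, not the authors' actual configuration:

```python
# Minimal sketch of the C-RLFT idea (illustrative, not the authors' code):
# each data source is a coarse reward class; a conditioning tag is prepended
# to the prompt, and the token loss is scaled by the class reward.

# Hypothetical class rewards: expert-quality data counts more than general data.
CLASS_REWARD = {"expert": 1.0, "general": 0.1}

def condition(example):
    """Prepend a conditioning tag naming the example's source class."""
    tag = f"<{example['source']}>"
    return tag + " " + example["prompt"], example["response"]

def weighted_loss(per_token_losses, source):
    """Scale the mean supervised loss by the coarse class reward."""
    w = CLASS_REWARD[source]
    return w * sum(per_token_losses) / len(per_token_losses)

ex = {"source": "general", "prompt": "Hi", "response": "Hello!"}
prompt, _ = condition(ex)
loss = weighted_loss([2.0, 1.0, 3.0], ex["source"])
```

Because low-quality sources receive a small but nonzero weight, the model can still learn from mixed-quality data while being steered toward the high-reward class at inference time.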
#open-source
Specifications
- Provider
- OpenChat
- Context Length
- 8,192 tokens
- Input Types
- text
- Output Types
- text
- Category
- Mistral
- Added
- 11/28/2023
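Putting the specifications to work, a request to the model is a single text prompt that must fit within the 8,192-token context window. The sketch below serializes chat turns into the published OpenChat 3.5 conversation format ("GPT4 Correct User" / "GPT4 Correct Assistant" with an `<|end_of_turn|>` separator); verify the exact template against the model card before relying on it:

```python
# Sketch of building a text prompt for OpenChat 3.5 (text in, text out).
# The turn format below follows the published OpenChat 3.5 template, but
# confirm it against the model card; the full prompt plus the generated
# reply must fit within the 8,192-token context window.

CONTEXT_LENGTH = 8192
END_OF_TURN = "<|end_of_turn|>"

def build_prompt(turns):
    """Serialize (role, text) pairs into the OpenChat 3.5 chat format."""
    parts = []
    for role, text in turns:
        speaker = "GPT4 Correct User" if role == "user" else "GPT4 Correct Assistant"
        parts.append(f"{speaker}: {text}{END_OF_TURN}")
    parts.append("GPT4 Correct Assistant:")  # cue the model to generate
    return "".join(parts)

prompt = build_prompt([("user", "What is C-RLFT?")])
```

The trailing `GPT4 Correct Assistant:` cue is what prompts the model to produce the next assistant turn.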