Writingmate
Nous: Hermes 2 Mixtral 8x7B DPO

NousResearch · Text Generation
About Hermes 2 Mixtral 8x7B DPO

Nous Hermes 2 Mixtral 8x7B DPO is the flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](/models/mistralai/mixtral-8x7b).

The model was trained on over 1,000,000 entries of primarily [GPT-4](/models/openai/gpt-4) generated data, along with other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks.

Tags: #moe

Specifications

Provider: NousResearch
Context Length: 32,768 tokens
Input Types: text
Output Types: text
Category: Mistral
Added: January 16, 2024
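The 32,768-token context window above bounds prompt plus completion combined. As a rough illustration of budgeting against that limit, a minimal sketch (the helper name and token counts are hypothetical; only the 32,768 figure comes from the spec):

```python
# Context window of Hermes 2 Mixtral 8x7B DPO, per the specifications above.
CONTEXT_LENGTH = 32_768

def fits_context(prompt_tokens: int, max_new_tokens: int,
                 context_length: int = CONTEXT_LENGTH) -> bool:
    """Return True if the prompt plus the requested completion
    fits within the model's context window."""
    return prompt_tokens + max_new_tokens <= context_length
```

For example, a 30,000-token prompt requesting 2,000 new tokens fits (32,000 ≤ 32,768), while a 31,000-token prompt requesting the same completion does not.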
