---
tags:
- finetuned
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- instruct
- text-generation
- conversational
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-Mistral-7B-DPO
  results: []
datasets:
- teknium/OpenHermes-2.5
license: apache-2.0
language:
- en
quantized_by: Suparious
pipeline_tag: text-generation
model_creator: NousResearch
model_name: Nous Hermes 2 - Mistral 7B - DPO
inference: false
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
---

# Nous Hermes 2 - Mistral 7B - DPO

- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Nous Hermes 2 - Mistral 7B - DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)

```bibtex
@misc{Nous-Hermes-2-Mistral-7B-DPO,
  url={https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO},
  title={Nous Hermes 2 Mistral 7B DPO},
  author={Teknium and theemozilla and karan4d and huemin_art}
}
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/PDleZIZK3vE3ATfXRRySv.png)

## Model Description

Nous Hermes 2 on Mistral 7B DPO is the new flagship 7B Hermes! This model was DPO'd from [Teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and has improved across the board on all benchmarks tested: AGIEval, BigBench Reasoning, GPT4All, and TruthfulQA.

The model prior to DPO was trained on 1,000,000 instructions/chats of GPT-4 quality or better, primarily synthetic data as well as other high-quality datasets, available from the repository [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5).

## Thank you to FluidStack for sponsoring compute for this model
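
## Example usage (sketch)

Below is a minimal usage sketch, not part of the original card. It loads the model with `transformers` and builds a request string following the ChatML `prompt_template` from the metadata above. The repo id, system message, and generation settings are placeholders: swap in this card's actual AWQ repo id, and note that loading 4-bit AWQ weights additionally requires the `autoawq` package (plus `accelerate` for `device_map`).

```python
# Minimal sketch, assuming recent transformers with autoawq and accelerate installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: the full-precision source repo. Replace with this card's AWQ repo id
# to load the 4-bit quantized weights instead.
model_id = "NousResearch/Nous-Hermes-2-Mistral-7B-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # place weights on the available GPU(s)
    low_cpu_mem_usage=True,
)

# Build the prompt exactly as the card's ChatML prompt_template describes.
system_message = "You are a helpful assistant."   # example system message
prompt = "Explain DPO in one paragraph."          # example user prompt
text = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```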