---
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- Mistral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- awq
model-index:
- name: Nous-Hermes-2-Mistral-7B-DPO
  results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
---
|
# Nous Hermes 2 - Mistral 7B - DPO - AWQ |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/PDleZIZK3vE3ATfXRRySv.png) |
|
|
|
## Model Description |
|
|
|
This repo contains the AWQ-quantized version of the `Nous Hermes 2 - Mistral 7B - DPO` model.
|
It was quantized with AutoAWQ using the following settings: |
|
```json
{"zero_point": true, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
```
|
|