---
base_model: mistralai/Mistral-7B-v0.1
tags:
- mistral-7b
- instruct
- finetune
- alpaca
- gpt4
- synthetic data
- distillation
datasets:
- jondurbin/airoboros-2.2.1
model-index:
- name: airoboros2.2-mistral-7b
results: []
license: mit
language:
- en
---
Mistral 7B trained with the airoboros dataset!

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/sbN_PCdxO_LV0xpFGA_St.png)

The actual training dataset was airoboros 2.2, but it appears to have been replaced on Hugging Face by 2.2.1.
Prompt Format:
```
USER: <prompt>
ASSISTANT:
```
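As a minimal sketch, the prompt format can be used with `transformers` like so. Note that the repo id, sample prompt, and generation settings below are assumptions for illustration, not taken from this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teknium/airoboros2.2-mistral-7b"  # assumed repo id, based on the model-index name above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # requires `accelerate`
)

# Build a single-turn prompt in the USER/ASSISTANT format shown above.
prompt = "USER: Write a haiku about mountains.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens and print only the model's completion.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```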
TruthfulQA:
```
hf-causal-experimental (pretrained=/home/teknium/dakota/lm-evaluation-harness/airoboros2.2-mistral/,dtype=float16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.3562|± |0.0168|
| | |mc2 |0.5217|± |0.0156|
```
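The output above comes from EleutherAI's lm-evaluation-harness. For reference, a sketch of the equivalent invocation, assuming the harness's older `main.py` entry point and substituting a placeholder for the local model path (the model type, dtype, few-shot count, and batch size are taken from the run header above):

```
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=/path/to/airoboros2.2-mistral,dtype=float16 \
    --tasks truthfulqa_mc \
    --num_fewshot 0 \
    --batch_size 8
```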
Wandb training charts: https://wandb.ai/teknium1/airoboros-mistral-7b/runs/airoboros-mistral-1?workspace=user-teknium1
More info to come.