---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
model-index:
- name: openhermes-mistral-dpo-gptq
  results: []
---
# openhermes-mistral-dpo-gptq
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5794
- Rewards/chosen: 0.3240
- Rewards/rejected: -0.1304
- Rewards/accuracies: 0.8125
- Rewards/margins: 0.4544
- Logps/rejected: -257.9009
- Logps/chosen: -299.4479
- Logits/rejected: -2.5816
- Logits/chosen: -2.4558
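For reference, the reward metrics above are the implicit DPO rewards: `Rewards/chosen` and `Rewards/rejected` are the β-scaled log-probability ratios between the policy and the reference model for the chosen and rejected completions, `Rewards/margins` is their difference, and `Rewards/accuracies` is the fraction of pairs where the chosen reward exceeds the rejected one. These come from the standard DPO objective (general background, not specific to this run):

$$
\mathcal{L}_{\mathrm{DPO}} = -\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$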
## Model description
More information needed
## Intended uses & limitations
More information needed
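As a minimal loading sketch (the adapter Hub id `your-username/openhermes-mistral-dpo-gptq` below is a placeholder for wherever this adapter is published, and loading the GPTQ base model assumes `auto-gptq`/`optimum` are installed):

```python
# Minimal sketch: attach this PEFT adapter to the GPTQ base model.
# "your-username/openhermes-mistral-dpo-gptq" is a placeholder Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/OpenHermes-2-Mistral-7B-GPTQ")

# Attach the DPO-trained adapter on top of the quantized base weights.
model = PeftModel.from_pretrained(base, "your-username/openhermes-mistral-dpo-gptq")

prompt = "Explain direct preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```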
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
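A sketch of how these hyperparameters map onto a `transformers.TrainingArguments` object as typically passed to `trl`'s `DPOTrainer`. The output directory, logging, and evaluation cadence are assumptions; model, dataset, and PEFT setup are omitted:

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 match the optimizer defaults,
# so they are not set explicitly.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=10,  # assumed from the 10-step cadence in the results below
    logging_steps=10,
)
```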
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6803        | 0.005 | 10   | 0.6477          | 0.1422         | -0.0051          | 0.9375             | 0.1472          | -256.6473      | -301.2661    | -2.5813         | -2.4540       |
| 0.7051        | 0.01  | 20   | 0.6019          | 0.1764         | -0.1259          | 0.9375             | 0.3023          | -257.8556      | -300.9241    | -2.5817         | -2.4542       |
| 0.7343        | 0.015 | 30   | 0.5848          | 0.1973         | -0.1493          | 0.875              | 0.3465          | -258.0895      | -300.7151    | -2.5833         | -2.4551       |
| 0.6977        | 0.02  | 40   | 0.5773          | 0.2710         | -0.1295          | 0.8125             | 0.4005          | -257.8920      | -299.9776    | -2.5815         | -2.4542       |
| 0.6483        | 0.025 | 50   | 0.5794          | 0.3240         | -0.1304          | 0.8125             | 0.4544          | -257.9009      | -299.4479    | -2.5816         | -2.4558       |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- PyTorch 2.0.1+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1