---
library_name: peft
tags:
  - alignment-handbook
  - trl
  - dpo
  - generated_from_trainer
base_model: norallm/normistral-7b-warm
datasets:
  - hugodk-sch/aftonposten_title_prefs
model-index:
  - name: ap-normistral-7b-align-scan
    results: []
---

ap-normistral-7b-align-scan

This model is a DPO fine-tune of data/ap-normistral-7b-sft-qlora (a QLoRA SFT checkpoint of the base model norallm/normistral-7b-warm) on the hugodk-sch/aftonposten_title_prefs dataset. It achieves the following results on the evaluation set:

  • Loss: 1.7667
  • Rewards/chosen: 0.0466
  • Rewards/rejected: 0.0526
  • Rewards/accuracies: 0.4880
  • Rewards/margins: -0.0060
  • Logps/rejected: -35.9008
  • Logps/chosen: -32.3849
  • Logits/rejected: 98.9812
  • Logits/chosen: 98.9874
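
Note that Rewards/margins is Rewards/chosen minus Rewards/rejected (0.0466 - 0.0526 = -0.0060), and a Rewards/accuracies of 0.4880 means the chosen title received the higher implicit reward on fewer than half of the evaluation pairs.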

Model description

More information needed

Intended uses & limitations

More information needed
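
In the absence of documented usage, here is a minimal loading sketch. The Hub repo id is assumed from the card's author and title, and the prompt and generation settings are illustrative only:

```python
# Minimal loading sketch -- the repo id below is assumed, not verified.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "hugodk-sch/ap-normistral-7b-align-scan",  # assumed Hub repo id
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("norallm/normistral-7b-warm")

# Illustrative placeholder prompt; the model was trained on title preferences.
inputs = tokenizer("...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```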

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
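
As a rough illustration, these hyperparameters map onto a TRL DPO setup along the following lines. The dataset split names, the trainable-adapter loading, and the implicit reference model are assumptions not recorded on the card, and the original run was additionally distributed across GPUs (the multi-GPU setting above):

```python
# Sketch of a TRL DPO run matching the listed hyperparameters.
# Split names and dtype below are assumptions; DPO beta is left at the TRL default.
import torch
from datasets import load_dataset
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoPeftModelForCausalLM.from_pretrained(
    "data/ap-normistral-7b-sft-qlora",  # SFT checkpoint path from the card
    torch_dtype=torch.bfloat16,
    is_trainable=True,
)
tokenizer = AutoTokenizer.from_pretrained("norallm/normistral-7b-warm")

dataset = load_dataset("hugodk-sch/aftonposten_title_prefs")

args = TrainingArguments(
    output_dir="ap-normistral-7b-align-scan",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 4 per device x 2 steps = total batch of 8
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a PEFT model, the frozen base weights act as the reference
    args=args,
    train_dataset=dataset["train"],  # expects prompt/chosen/rejected columns
    eval_dataset=dataset["test"],    # split name assumed
    tokenizer=tokenizer,
)
trainer.train()
```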

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 1.5488        | 0.26  | 100  | 1.7743          | -0.0738        | -0.1533          | 0.5378             | 0.0795          | -36.1581       | -32.5354     | 98.8023         | 98.8147       |
| 3.6133        | 0.52  | 200  | 1.8922          | -0.0939        | -0.1399          | 0.5166             | 0.0460          | -36.1414       | -32.5606     | 99.0488         | 99.0652       |
| 2.1193        | 0.78  | 300  | 1.5939          | 0.0537         | 0.0702           | 0.5191             | -0.0166         | -35.8787       | -32.3761     | 98.9855         | 98.9917       |

Framework versions

  • PEFT 0.10.0
  • Transformers 4.39.0.dev0
  • Pytorch 2.1.2+cu121
  • Datasets 2.14.6
  • Tokenizers 0.15.1