# OpenELM-1_1B-DPO-full-max-14-reward
This model was trained with DPO (direct preference optimization), apparently starting from an OpenELM-1.1B checkpoint; the training dataset is not documented in this card. It achieves the following results on the evaluation set (the reward and log-probability columns are explained after the list):
- Loss: 1.2507
- Rewards/chosen: -4.0938
- Rewards/rejected: -4.5
- Rewards/accuracies: 0.4961
- Rewards/margins: 0.4277
- Logps/rejected: -740.0
- Logps/chosen: -728.0
- Logits/rejected: -16.125
- Logits/chosen: -16.375
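The reward and log-probability columns above follow the implicit-reward convention used by DPO trainers such as TRL's `DPOTrainer` (an assumption; the exact training code is not documented). For a prompt $x$, chosen response $y_w$, and rejected response $y_l$, the implicit reward of a response $y$ is

$$
r_\theta(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right),
$$

so `Rewards/chosen` is $r_\theta(x, y_w)$, `Rewards/rejected` is $r_\theta(x, y_l)$, `Rewards/margins` is their difference, and `Rewards/accuracies` is the fraction of evaluation pairs where the chosen reward exceeds the rejected one. `Logps/chosen` and `Logps/rejected` are the summed log-probabilities of each response under the policy model.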
## Model description
More information needed
## Intended uses & limitations
More information needed
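No usage guidance is provided, but the sketch below shows one plausible way to run the model for text generation with `transformers`. The repository path is hypothetical, and OpenELM-based checkpoints typically require `trust_remote_code=True`; if the repository does not ship a tokenizer, load the tokenizer the model was trained with instead.

```python
# Minimal inference sketch -- assumptions: the repo id below is hypothetical,
# and the repository ships its own tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/OpenELM-1_1B-DPO-full-max-14-reward"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Explain direct preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```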
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto `TrainingArguments`):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
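As a rough sketch only (the actual training script is not part of this card), the hyperparameters above correspond to `transformers.TrainingArguments` roughly as follows; the per-device batch sizes combine with 4 GPUs and 2 gradient-accumulation steps to give the reported total train batch size of 64.

```python
# Sketch of the listed hyperparameters as transformers.TrainingArguments.
# The 4-GPU distributed setup comes from the launcher (e.g. accelerate or
# torchrun), not from these arguments; output_dir is hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="OpenELM-1_1B-DPO-full-max-14-reward",
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # 8 x 4 GPUs x 2 accumulation = 64 total
    per_device_eval_batch_size=16,   # 16 x 4 GPUs = 64 total
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,    # Adam defaults, as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```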
### Training results
Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
---|---|---|---|---|---|---|---|---|---|---|---|
0.0575 | 0.1047 | 100 | 0.6934 | -1.2422 | -1.5469 | 0.5840 | 0.3027 | -444.0 | -442.0 | -9.75 | -10.0 |
0.0427 | 0.2094 | 200 | 0.7579 | -1.125 | -1.2812 | 0.5039 | 0.1602 | -418.0 | -430.0 | -14.8125 | -15.0 |
0.0532 | 0.3141 | 300 | 1.6151 | -6.3125 | -6.875 | 0.4844 | 0.5820 | -976.0 | -948.0 | -13.75 | -13.9375 |
0.0436 | 0.4188 | 400 | 0.8565 | -1.5625 | -1.6719 | 0.4785 | 0.1069 | -456.0 | -474.0 | -15.5 | -15.5625 |
0.0331 | 0.5236 | 500 | 0.9544 | -2.875 | -3.125 | 0.4863 | 0.2539 | -600.0 | -608.0 | -10.9375 | -11.4375 |
0.043 | 0.6283 | 600 | 0.9331 | -2.7812 | -2.9219 | 0.4551 | 0.1396 | -580.0 | -596.0 | -16.625 | -16.75 |
0.037 | 0.7330 | 700 | 0.8353 | -2.9062 | -3.0781 | 0.5156 | 0.1777 | -596.0 | -608.0 | -12.6875 | -13.0625 |
0.0295 | 0.8377 | 800 | 0.9349 | -2.8438 | -3.0156 | 0.4863 | 0.1611 | -588.0 | -604.0 | -18.25 | -18.375 |
0.0344 | 0.9424 | 900 | 0.9633 | -3.2188 | -3.3281 | 0.4824 | 0.1157 | -624.0 | -640.0 | -14.8125 | -15.25 |
0.0067 | 1.0471 | 1000 | 1.0684 | -3.3438 | -3.6719 | 0.4863 | 0.3281 | -656.0 | -652.0 | -17.5 | -17.75 |
0.0165 | 1.1518 | 1100 | 1.1375 | -3.8906 | -4.3125 | 0.4727 | 0.4082 | -720.0 | -708.0 | -15.9375 | -16.375 |
0.0019 | 1.2565 | 1200 | 0.9505 | -2.875 | -3.0625 | 0.4902 | 0.1973 | -596.0 | -604.0 | -17.25 | -17.25 |
0.0016 | 1.3613 | 1300 | 1.0801 | -3.6562 | -4.0 | 0.4824 | 0.3457 | -688.0 | -684.0 | -13.125 | -13.625 |
0.0015 | 1.4660 | 1400 | 1.0415 | -3.4375 | -3.7969 | 0.5020 | 0.3633 | -668.0 | -660.0 | -14.375 | -14.75 |
0.0037 | 1.5707 | 1500 | 1.0121 | -3.1406 | -3.4375 | 0.5 | 0.2930 | -632.0 | -632.0 | -14.0 | -14.375 |
0.0025 | 1.6754 | 1600 | 1.1921 | -3.8125 | -4.2188 | 0.4902 | 0.3984 | -708.0 | -700.0 | -15.125 | -15.4375 |
0.0029 | 1.7801 | 1700 | 1.4103 | -4.9375 | -5.4375 | 0.4902 | 0.5078 | -832.0 | -812.0 | -12.8125 | -13.1875 |
0.0009 | 1.8848 | 1800 | 1.1241 | -3.7344 | -4.0 | 0.4766 | 0.2852 | -688.0 | -692.0 | -15.6875 | -15.875 |
0.0052 | 1.9895 | 1900 | 1.1658 | -3.6406 | -3.9062 | 0.4688 | 0.2676 | -680.0 | -684.0 | -16.25 | -16.375 |
0.0003 | 2.0942 | 2000 | 1.1422 | -3.7188 | -4.0312 | 0.4785 | 0.3105 | -692.0 | -692.0 | -16.5 | -16.625 |
0.0008 | 2.1990 | 2100 | 1.2501 | -4.0938 | -4.5 | 0.4863 | 0.4043 | -736.0 | -728.0 | -16.125 | -16.375 |
0.0002 | 2.3037 | 2200 | 1.2498 | -4.0625 | -4.4688 | 0.4902 | 0.4180 | -736.0 | -724.0 | -16.25 | -16.375 |
0.0004 | 2.4084 | 2300 | 1.2577 | -4.0938 | -4.5312 | 0.4941 | 0.4258 | -740.0 | -728.0 | -16.25 | -16.375 |
0.0002 | 2.5131 | 2400 | 1.2621 | -4.125 | -4.5625 | 0.4941 | 0.4355 | -744.0 | -732.0 | -16.0 | -16.25 |
0.0001 | 2.6178 | 2500 | 1.2696 | -4.1562 | -4.5938 | 0.4961 | 0.4453 | -748.0 | -736.0 | -15.9375 | -16.125 |
0.0073 | 2.7225 | 2600 | 1.2632 | -4.125 | -4.5625 | 0.5020 | 0.4375 | -744.0 | -732.0 | -16.125 | -16.375 |
0.0002 | 2.8272 | 2700 | 1.2520 | -4.0938 | -4.5 | 0.4922 | 0.4258 | -740.0 | -728.0 | -16.125 | -16.375 |
0.0002 | 2.9319 | 2800 | 1.2507 | -4.0938 | -4.5 | 0.4961 | 0.4277 | -740.0 | -728.0 | -16.125 | -16.375 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.0
- Datasets 3.0.0
- Tokenizers 0.19.1