---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- GaetanMichelet/chat-60_ft_task-1
library_name: peft
license: llama3.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-31-8B_task-1_60-samples_config-3_full
  results: []
---

# Llama-31-8B_task-1_60-samples_config-3_full

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the [GaetanMichelet/chat-60_ft_task-1](https://huggingface.co/datasets/GaetanMichelet/chat-60_ft_task-1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9224
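
Since this card ships a PEFT adapter (`library_name: peft`) rather than full model weights, it loads on top of the base model. A minimal usage sketch, assuming the adapter is published under this card's name (the exact repo id is an assumption; pin the framework versions listed at the bottom of this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# pip install peft==0.12.0 transformers==4.44.0 (versions from this card)
base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "GaetanMichelet/Llama-31-8B_task-1_60-samples_config-3_full"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned PEFT adapter

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```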

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 150
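
As a rough guide, these settings map onto a `transformers.TrainingArguments` configuration along the lines below. This is a minimal sketch, not the original training script: the output path is a placeholder, and the PEFT/LoRA wiring and any early-stopping callback are assumptions not stated on this card.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="Llama-31-8B_task-1_60-samples_config-3_full",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    gradient_accumulation_steps=8,   # total_train_batch_size: 8
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=150,
    optim="adamw_torch",  # betas=(0.9, 0.999) and epsilon=1e-8 are the defaults
)
```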

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.5395        | 0.8696  | 5    | 2.4149          |
| 2.4512        | 1.9130  | 11   | 2.3973          |
| 2.4419        | 2.9565  | 17   | 2.3721          |
| 2.3921        | 4.0     | 23   | 2.3361          |
| 2.3357        | 4.8696  | 28   | 2.2954          |
| 2.3559        | 5.9130  | 34   | 2.2287          |
| 2.2622        | 6.9565  | 40   | 2.1654          |
| 2.186         | 8.0     | 46   | 2.0752          |
| 2.0842        | 8.8696  | 51   | 2.0000          |
| 2.0522        | 9.9130  | 57   | 1.8960          |
| 1.911         | 10.9565 | 63   | 1.7942          |
| 1.8076        | 12.0    | 69   | 1.6760          |
| 1.659         | 12.8696 | 74   | 1.5645          |
| 1.5002        | 13.9130 | 80   | 1.4214          |
| 1.309         | 14.9565 | 86   | 1.2940          |
| 1.2079        | 16.0    | 92   | 1.1837          |
| 1.1738        | 16.8696 | 97   | 1.1230          |
| 1.0304        | 17.9130 | 103  | 1.0781          |
| 1.0485        | 18.9565 | 109  | 1.0459          |
| 0.9687        | 20.0    | 115  | 1.0258          |
| 0.9883        | 20.8696 | 120  | 1.0147          |
| 0.974         | 21.9130 | 126  | 1.0013          |
| 0.9397        | 22.9565 | 132  | 0.9905          |
| 0.9522        | 24.0    | 138  | 0.9816          |
| 0.9115        | 24.8696 | 143  | 0.9739          |
| 0.9412        | 25.9130 | 149  | 0.9668          |
| 0.9168        | 26.9565 | 155  | 0.9610          |
| 0.9461        | 28.0    | 161  | 0.9547          |
| 0.8579        | 28.8696 | 166  | 0.9499          |
| 0.8857        | 29.9130 | 172  | 0.9454          |
| 0.8465        | 30.9565 | 178  | 0.9405          |
| 0.8681        | 32.0    | 184  | 0.9393          |
| 0.8257        | 32.8696 | 189  | 0.9344          |
| 0.8425        | 33.9130 | 195  | 0.9336          |
| 0.8405        | 34.9565 | 201  | 0.9281          |
| 0.8101        | 36.0    | 207  | 0.9283          |
| 0.7808        | 36.8696 | 212  | 0.9259          |
| 0.7971        | 37.9130 | 218  | 0.9259          |
| 0.7766        | 38.9565 | 224  | 0.9235          |
| 0.7748        | 40.0    | 230  | 0.9245          |
| 0.7476        | 40.8696 | 235  | 0.9253          |
| 0.7007        | 41.9130 | 241  | 0.9224          |
| 0.741         | 42.9565 | 247  | 0.9261          |
| 0.7371        | 44.0    | 253  | 0.9239          |
| 0.7239        | 44.8696 | 258  | 0.9323          |
| 0.671         | 45.9130 | 264  | 0.9269          |
| 0.7312        | 46.9565 | 270  | 0.9333          |
| 0.6826        | 48.0    | 276  | 0.9345          |
| 0.6472        | 48.8696 | 281  | 0.9393          |
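
The evaluation loss reported above (0.9224) corresponds to the checkpoint at epoch ~41.9 (step 241), after which validation loss begins to drift upward; training stopped near epoch 49 rather than the configured 150 epochs, which is consistent with early stopping on validation loss.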

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- PyTorch 2.1.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1