---
license: gemma
base_model: google/gemma-2-27b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: collapse_gemma-2-27b_hs2_accumulate_iter1_sftsd2
  results: []
---
# collapse_gemma-2-27b_hs2_accumulate_iter1_sftsd2

This model is a fine-tuned version of [google/gemma-2-27b](https://huggingface.co/google/gemma-2-27b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9056
- Num Input Tokens Seen: 5236236
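
A minimal usage sketch, assuming the checkpoint is published under the model name above (swap in the actual repository id if it differs):

```python
# Hedged inference sketch; the repository id below is assumed from the card title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "collapse_gemma-2-27b_hs2_accumulate_iter1_sftsd2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 27B parameters; bf16 keeps memory manageable
    device_map="auto",
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```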
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 8e-06
- train_batch_size: 4
- eval_batch_size: 16
- seed: 2
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
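
As a hedged reconstruction rather than the original training script, these settings map onto a TRL `SFTConfig` roughly as follows; the training data is unknown, so the `load_dataset` call below is a placeholder:

```python
# Sketch of the training setup under stated assumptions; not the exact script used.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder: the card does not name the training data.
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

config = SFTConfig(
    output_dir="collapse_gemma-2-27b_hs2_accumulate_iter1_sftsd2",
    learning_rate=8e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=16,
    seed=2,
    gradient_accumulation_steps=32,  # 4 per device x 32 steps = 128 effective batch
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.05,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = SFTTrainer(
    model="google/gemma-2-27b",  # base model from the card metadata
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```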
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log        | 0      | 0    | 1.1282          | 0                 |
| 0.9652        | 0.0511 | 5    | 0.9817          | 270332            |
| 0.9707        | 0.1021 | 10   | 0.9530          | 541500            |
| 0.9592        | 0.1532 | 15   | 0.9413          | 812584            |
| 0.9301        | 0.2043 | 20   | 0.9341          | 1083956           |
| 0.9111        | 0.2553 | 25   | 0.9296          | 1356232           |
| 0.9056        | 0.3064 | 30   | 0.9263          | 1618536           |
| 0.9533        | 0.3575 | 35   | 0.9235          | 1882576           |
| 0.926         | 0.4086 | 40   | 0.9205          | 2154872           |
| 0.8827        | 0.4596 | 45   | 0.9183          | 2423272           |
| 0.8874        | 0.5107 | 50   | 0.9162          | 2695312           |
| 0.9546        | 0.5618 | 55   | 0.9150          | 2965956           |
| 0.8911        | 0.6128 | 60   | 0.9133          | 3236124           |
| 0.8428        | 0.6639 | 65   | 0.9119          | 3504696           |
| 0.9158        | 0.7150 | 70   | 0.9108          | 3779560           |
| 0.9392        | 0.7660 | 75   | 0.9097          | 4047404           |
| 0.9049        | 0.8171 | 80   | 0.9091          | 4319468           |
| 0.8697        | 0.8682 | 85   | 0.9082          | 4590728           |
| 0.9536        | 0.9192 | 90   | 0.9067          | 4860344           |
| 0.9586        | 0.9703 | 95   | 0.9057          | 5128848           |
### Framework versions
- Transformers 4.44.0
- PyTorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1