dimasik87 committed
Commit 1b2e1cd
1 Parent(s): c504956

End of training

Files changed (2)
  1. README.md +23 -23
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -41,7 +41,7 @@ early_stopping_patience: null
  eval_max_new_tokens: 128
  eval_table_size: null
  evals_per_epoch: 4
- flash_attention: false
+ flash_attention: true
  fp16: null
  fsdp: null
  fsdp_config: null
@@ -67,20 +67,20 @@ lr_scheduler: cosine
  max_memory:
    0: 70GiB
  max_steps: 50
- micro_batch_size: 2
+ micro_batch_size: 1
  mlflow_experiment_name: /tmp/5daf839d73ce7025_train_data.json
  model_type: AutoModelForCausalLM
  num_epochs: 4
- optimizer: adamw_bnb_8bit
+ optimizer: adamw_torch
  output_dir: miner_id_24
  pad_to_sequence_len: true
  resume_from_checkpoint: null
  s2_attention: null
  sample_packing: false
  saves_per_epoch: 4
- sequence_len: 1024
+ sequence_len: 2028
  strict: false
- tf32: false
+ tf32: true
  tokenizer_type: AutoTokenizer
  train_on_inputs: false
  trust_remote_code: true
@@ -103,7 +103,7 @@ xformers_attention: null
 
  This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: nan
+ - Loss: 0.2677
 
  ## Model description
 
@@ -123,12 +123,12 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 0.0002
- - train_batch_size: 2
- - eval_batch_size: 2
+ - train_batch_size: 1
+ - eval_batch_size: 1
  - seed: 42
  - gradient_accumulation_steps: 4
- - total_train_batch_size: 8
- - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - total_train_batch_size: 4
+ - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_steps: 10
  - training_steps: 50
@@ -137,19 +137,19 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch  | Step | Validation Loss |
  |:-------------:|:------:|:----:|:---------------:|
- | 0.0           | 0.0008 | 1    | nan             |
- | 0.0           | 0.0034 | 4    | nan             |
- | 0.0           | 0.0068 | 8    | nan             |
- | 0.0           | 0.0101 | 12   | nan             |
- | 0.0           | 0.0135 | 16   | nan             |
- | 0.0           | 0.0169 | 20   | nan             |
- | 0.0           | 0.0203 | 24   | nan             |
- | 0.0           | 0.0236 | 28   | nan             |
- | 0.0           | 0.0270 | 32   | nan             |
- | 0.0           | 0.0304 | 36   | nan             |
- | 0.0           | 0.0338 | 40   | nan             |
- | 0.0           | 0.0371 | 44   | nan             |
- | 0.0           | 0.0405 | 48   | nan             |
+ | 6.2351        | 0.0004 | 1    | 7.1638          |
+ | 7.51          | 0.0017 | 4    | 6.9215          |
+ | 4.7822        | 0.0034 | 8    | 4.4631          |
+ | 1.5597        | 0.0051 | 12   | 1.5648          |
+ | 0.481         | 0.0068 | 16   | 0.6843          |
+ | 0.3143        | 0.0084 | 20   | 0.5307          |
+ | 0.1861        | 0.0101 | 24   | 0.4394          |
+ | 0.1889        | 0.0118 | 28   | 0.4399          |
+ | 0.2487        | 0.0135 | 32   | 0.3397          |
+ | 0.4158        | 0.0152 | 36   | 0.3151          |
+ | 0.3524        | 0.0169 | 40   | 0.2879          |
+ | 0.2272        | 0.0186 | 44   | 0.2739          |
+ | 0.3537        | 0.0203 | 48   | 0.2677          |
 
 
  ### Framework versions
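
In short, this commit swaps the 8-bit bitsandbytes optimizer for plain `adamw_torch`, enables flash attention and tf32, drops the micro batch size from 2 to 1 (so the effective batch is 1 × 4 gradient-accumulation steps = 4, matching `total_train_batch_size`), and raises `sequence_len` to 2028, taking the eval loss from `nan` to 0.2677. To try the resulting LoRA adapter, here is a minimal sketch with `transformers` and `peft`; the adapter repo id is a hypothetical placeholder, so substitute the Hub repo (or local `output_dir`) that holds this `adapter_model.bin`:

```python
# Minimal sketch of loading the trained LoRA adapter on top of the base model.
# ADAPTER_ID is a hypothetical placeholder -- point it at the repo or local
# directory that actually contains adapter_model.bin.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "unsloth/Hermes-3-Llama-3.1-8B"
ADAPTER_ID = "your-username/your-adapter-repo"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # applies the LoRA weights

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)  # matches eval_max_new_tokens
print(tokenizer.decode(out[0], skip_special_tokens=True))
```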
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:45fe9746a7db9cd15165b811a5331f0dc4b1d8ec660979d5129957450fdbfea6
+ oid sha256:1d7c79d0d9671f73d6001c26eafdbb4a800f679b7926a613b593acadba2a3d67
  size 167934026
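
The `adapter_model.bin` entries above are Git LFS pointer files, not the weights themselves; the actual blob is fetched separately (e.g. with `git lfs pull`). A minimal sketch, assuming the file has been downloaded locally, to check it against the new pointer:

```python
# Verify a downloaded adapter_model.bin against the LFS pointer above.
# Assumes the file sits in the current directory after `git lfs pull`.
import hashlib
import os

EXPECTED_OID = "1d7c79d0d9671f73d6001c26eafdbb4a800f679b7926a613b593acadba2a3d67"
EXPECTED_SIZE = 167934026

path = "adapter_model.bin"
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        h.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert h.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("adapter_model.bin matches the LFS pointer")
```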