kazuma313 committed on
Commit 2ac4ca5
1 Parent(s): 34fd505

Model save

Files changed (1)
  1. README.md +7 -14
README.md CHANGED
@@ -5,7 +5,7 @@ tags:
 - trl
 - sft
 - generated_from_trainer
-base_model: google/gemma-7b
+base_model: google/gemma-2b
 model-index:
 - name: results
   results: []
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # results
 
-This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on an unknown dataset.
+This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
 
 ## Model description
 
@@ -35,25 +35,18 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
-- train_batch_size: 4
+- learning_rate: 2e-06
+- train_batch_size: 2
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant
-- lr_scheduler_warmup_ratio: 0.2
-- num_epochs: 1
-- mixed_precision_training: Native AMP
-
-### Training results
-
-
+- lr_scheduler_warmup_ratio: 0.03
+- training_steps: 4000
 
 ### Framework versions
 
-- PEFT 0.9.1.dev0
+- PEFT 0.10.0
 - Transformers 4.38.2
 - Pytorch 2.2.1+cu121
 - Datasets 2.18.0
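For context, the card's tags (trl, sft, PEFT) and the updated hyperparameters suggest a TRL `SFTTrainer` run with a LoRA adapter. Below is a minimal sketch of how the new values might map onto `transformers.TrainingArguments` and `trl.SFTTrainer` (APIs as of Transformers 4.38 / contemporary TRL). The dataset, LoRA settings, text field, and `max_seq_length` are not recorded in the card, so those values are placeholders, not the author's actual configuration.

```python
# Minimal sketch consistent with the updated card; only the values marked
# "card:" come from the README -- everything else is an assumption.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder dataset; the card says "an unknown dataset".
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

# Assumed LoRA settings; the card only confirms PEFT was used.
peft_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")

args = TrainingArguments(
    output_dir="results",              # matches model-index name "results"
    learning_rate=2e-6,                # card: learning_rate 2e-06
    per_device_train_batch_size=2,     # card: train_batch_size 2
    per_device_eval_batch_size=8,      # card: eval_batch_size 8
    seed=42,                           # card: seed 42
    adam_beta1=0.9,                    # card: Adam betas (0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # card: epsilon 1e-08
    lr_scheduler_type="constant",      # card: lr_scheduler_type constant
    warmup_ratio=0.03,                 # card: lr_scheduler_warmup_ratio 0.03
    max_steps=4000,                    # card: training_steps 4000
)

trainer = SFTTrainer(
    model="google/gemma-2b",           # new base model after this commit
    args=args,
    train_dataset=train_dataset,
    peft_config=peft_config,
    dataset_text_field="text",         # assumed field name
    max_seq_length=512,                # assumption; not in the card
)
trainer.train()
```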