Maelstrome committed on
Commit dfd430c (1 parent: cee6e1c)

End of training

Files changed (1): README.md +12 -1
README.md CHANGED

```diff
@@ -6,6 +6,8 @@ tags:
 - sft
 - generated_from_trainer
 base_model: google/gemma-2b
+datasets:
+- generator
 model-index:
 - name: gemma-2b-storytelling
   results: []
@@ -16,7 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # gemma-2b-storytelling
 
-This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
+This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
+It achieves the following results on the evaluation set:
+- Loss: nan
 
 ## Model description
 
@@ -46,6 +50,13 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.05
 - training_steps: 154
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:----------------:|:------:|:----:|:---------------:|
+| 1454737970954.24 | 0.9164 | 100  | nan             |
+
+
 ### Framework versions
 
 - PEFT 0.10.0
```