nonoJDWAOIDAWKDA committed on
Commit 192529d
1 Parent(s): 33a60ab

End of training

Files changed (3)
  1. README.md +1 -19
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -15,8 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
 # speecht5_finetuned_nono
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.5475
 
 ## Model description
 
@@ -44,25 +42,9 @@ The following hyperparameters were used during training:
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
-- training_steps: 1000
+- training_steps: 1750
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch    | Step | Validation Loss |
-|:-------------:|:--------:|:----:|:---------------:|
-| 0.5131        | 29.6296  | 100  | 0.5124          |
-| 0.4586        | 59.2593  | 200  | 0.5242          |
-| 0.4348        | 88.8889  | 300  | 0.5369          |
-| 0.4093        | 118.5185 | 400  | 0.5420          |
-| 0.3839        | 148.1481 | 500  | 0.5394          |
-| 0.3788        | 177.7778 | 600  | 0.5430          |
-| 0.3686        | 207.4074 | 700  | 0.5504          |
-| 0.3606        | 237.0370 | 800  | 0.5518          |
-| 0.3555        | 266.6667 | 900  | 0.5524          |
-| 0.3538        | 296.2963 | 1000 | 0.5475          |
-
-
 ### Framework versions
 
 - Transformers 4.46.3
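The hyperparameters listed in the card map fairly directly onto `Seq2SeqTrainingArguments`. Below is a minimal sketch of that mapping, assuming the output directory name and treating only the values shown in the diff (optimizer, scheduler, warmup, the new `training_steps` value of 1750, and native AMP) as given; anything else (learning rate, batch sizes) is omitted.

```python
# Sketch only: map the README hyperparameters onto Seq2SeqTrainingArguments.
# output_dir is an assumed name; only values shown in the diff are set here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_nono",  # assumed local folder / repo name
    optim="adamw_torch",                   # adamw_torch with the listed betas/epsilon
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1750,                        # updated from 1000 in this commit
    fp16=True,                             # "Native AMP" mixed-precision training
)
```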
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:47d6bcbf1ddada1dba2b8a22095db3c5af56dda407b0ed3f62f9d4b5957f0a71
+oid sha256:b7cff297a53805c09f18f990521bfd425681e61113e619e0b6715dc7eb637f3e
 size 577789320
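The file stored in the repo is only a Git LFS pointer; the `oid sha256` line is the checksum of the real weight file. A quick, hedged way to confirm a locally downloaded copy matches the new pointer (the local path is an assumption):

```python
# Sketch: verify a downloaded model.safetensors against the sha256 recorded
# in the Git LFS pointer above. The local path is an assumption.
import hashlib

EXPECTED_OID = "b7cff297a53805c09f18f990521bfd425681e61113e619e0b6715dc7eb637f3e"

sha = hashlib.sha256()
with open("model.safetensors", "rb") as f:  # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED_OID, "checksum mismatch"
print("model.safetensors matches the LFS pointer oid")
```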
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fb2608a7c7b45df569f4284bf16b58e03c21c6638b9000a4782ac012a3aefd47
+oid sha256:110c3851ec13c65886e9c5a6ebc8d69d78a58d5ec01e2908d3d40ff9577a3dfb
 size 5432
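`training_args.bin` is the pickled training configuration that the `Trainer` saves alongside the weights, so the updated file should record the new step count. A minimal sketch for inspecting it locally (the path is an assumption; `weights_only=False` is needed on recent PyTorch because this is a pickled object, not a tensor file):

```python
# Sketch: inspect the updated training_args.bin locally.
import torch

args = torch.load("training_args.bin", weights_only=False)  # assumed local path
print(args.max_steps)                                        # expected: 1750
print(args.lr_scheduler_type, args.warmup_steps)
```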