mapau committed on
Commit
1af0b54
1 Parent(s): 8e09784

End of training

Files changed (1): README.md (+9 −7)
README.md CHANGED
@@ -24,7 +24,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 25.31486146095718
+      value: 25.56675062972292
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,8 +34,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the parlaSmall_subset dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4923
-- Wer: 25.3149
+- Loss: 0.5479
+- Wer: 25.5668
 
 ## Model description
 
@@ -55,22 +55,24 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 2
+- train_batch_size: 1
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 8
+- gradient_accumulation_steps: 16
 - total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 1000
+- training_steps: 3000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer     |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.0004        | 32.52 | 1000 | 0.4923          | 25.3149 |
+| 0.0003        | 32.52 | 1000 | 0.5035          | 26.3224 |
+| 0.0001        | 65.04 | 2000 | 0.5380          | 25.8186 |
+| 0.0001        | 97.56 | 3000 | 0.5479          | 25.5668 |
 
 
 ### Framework versions
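
The hyperparameter change above halves train_batch_size (2 → 1) and doubles gradient_accumulation_steps (8 → 16), so the effective batch size stays at 1 × 16 = 16, matching the unchanged total_train_batch_size; training simply runs for 3000 steps instead of 1000. As a minimal sketch, this is roughly how the updated values would map onto Seq2SeqTrainingArguments from transformers; the training script itself is not part of this commit, so the output path is a placeholder and anything not listed in the card is an assumption:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the post-commit configuration; output_dir is hypothetical,
# not taken from the model card.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-parlaSmall",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=1,   # was 2 before this commit
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,  # was 8; 1 * 16 keeps the total at 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=3000,                  # was 1000
    fp16=True,                       # Native AMP mixed precision
)
```

The Adam settings listed in the card (betas=(0.9, 0.999), epsilon=1e-08) are the transformers defaults, so they need no explicit arguments here.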