Update README.md
README.md
CHANGED
@@ -118,7 +118,7 @@ You may need *gradient_accumulation* because you need more batch size.
 | 0.8238 | 11.88 | 1900 | 0.6735 | 0.3297 |
 | 0.7618 | 12.5 | 2000 | 0.6728 | 0.3286 |

-#### Hyperparameter
+#### Hyperparameter tuning
 Several models with different hyperparameters were trained. The following figures show the training process for three of them.
 ![wer](wandb-wer.png)
 ![loss](wandb-loss.png)
@@ -180,5 +180,7 @@ Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for m
 - Datasets 2.15.0
 - Tokenizers 0.15.0

-Contact us 🤝
-If you have any technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the *best* way to reach us.
+## Contact us 🤝
+If you have any technical questions regarding the model, pretraining, code or publication, please create an issue in the repository. This is the *best* way to reach us.
+
+## Citation ↩️
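The WER column in the training table above (e.g. 0.3286 at step 2000) is the word error rate on the evaluation set, the same metric plotted in `wandb-wer.png`. For readers unfamiliar with it, here is a minimal sketch of how WER is typically computed with the 🤗 `evaluate` library; this is an illustration, not the repository's own `compute_metrics` code, and the example strings are made up:

```python
# Illustration only: computing word error rate (WER) with the Hugging Face
# `evaluate` library. WER = (substitutions + insertions + deletions) divided
# by the number of words in the references.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hello world", "good morning"]  # decoded model transcripts
references = ["hello word", "good morning"]    # ground-truth transcripts

wer = wer_metric.compute(predictions=predictions, references=references)
print(wer)  # 0.25 -> one substitution out of four reference words
```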