lnxdx committed
Commit c8795a2
1 Parent(s): 83680d4

Update README.md

Files changed (1):
  1. README.md +4 -3
README.md CHANGED
@@ -119,7 +119,7 @@ The following hyperparameters were used during training:
 Several models with different hyperparameters were trained. The following figures show the training process for three of them.
 ![wer](wandb-wer.png)
 ![loss](wandb-loss.png)
-'20_2000_1e-5_hp-mehrdad' is the current model and it's hyperparameters are:
+**20_2000_1e-5_hp-mehrdad** is the current model (lnxdx/Wav2Vec2-Large-XLSR-Persian-ShEMO) and its hyperparameters are:
 ```python
 model = Wav2Vec2ForCTC.from_pretrained(
     model_name_or_path if not last_checkpoint else last_checkpoint,
@@ -133,7 +133,7 @@ model = Wav2Vec2ForCTC.from_pretrained(
     ctc_zero_infinity = True,
 )
 ```
-The hyperparameters of '19_2000_1e-5_hp-base' are:
+The hyperparameters of **19_2000_1e-5_hp-base** are:
 ```python
 model = Wav2Vec2ForCTC.from_pretrained(
     model_name_or_path if not last_checkpoint else last_checkpoint,
@@ -148,7 +148,7 @@ model = Wav2Vec2ForCTC.from_pretrained(
 )
 ```

-And the hyperparameters of '22_2000_1e-5_hp-masoud' are:
+And the hyperparameters of **22_2000_1e-5_hp-masoud** are:
 ```python
 model = Wav2Vec2ForCTC.from_pretrained(
     model_name_or_path if not last_checkpoint else last_checkpoint,
@@ -163,6 +163,7 @@ model = Wav2Vec2ForCTC.from_pretrained(
 )
 ```
 Learning rate is 1e-5 for all three models.
+
 As you can see, this model performs better on the WER metric on the validation (evaluation) set.

 #### Framework versions
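
The `from_pretrained` calls in the hunks above are truncated by the diff context, so the full argument list is not visible here. As a minimal self-contained sketch of how such CTC hyperparameters are passed in `transformers`, the following builds a tiny `Wav2Vec2Config` locally instead of downloading a checkpoint; every dropout/masking value is a placeholder, not one of the tuned values from these three runs:

```python
from transformers import Wav2Vec2Config, Wav2Vec2ForCTC

# Sketch only: a tiny local config stands in for
# `model_name_or_path if not last_checkpoint else last_checkpoint`,
# so nothing is downloaded. All dropout/masking values below are
# placeholders, NOT the tuned hyperparameters of these runs.
config = Wav2Vec2Config(
    hidden_size=32,          # tiny model for illustration
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    vocab_size=40,           # placeholder tokenizer vocabulary size
    attention_dropout=0.1,   # placeholder
    hidden_dropout=0.1,      # placeholder
    feat_proj_dropout=0.0,   # placeholder
    mask_time_prob=0.05,     # placeholder
    layerdrop=0.1,           # placeholder
    ctc_loss_reduction="mean",
    ctc_zero_infinity=True,  # as in the diff: zeroes out infinite CTC losses
)
model = Wav2Vec2ForCTC(config)
print(model.config.ctc_zero_infinity)  # True
```

The same keyword arguments can be passed directly to `Wav2Vec2ForCTC.from_pretrained(...)`, as the truncated blocks above do, to override the checkpoint's stored configuration at load time.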