anushaporwal committed on
Commit fd9f585
1 Parent(s): aeb6e9b

Model save

Files changed (3):
  1. README.md +13 -0
  2. all_results.json +6 -6
  3. train_results.json +6 -6
README.md CHANGED
@@ -3,6 +3,8 @@ license: apache-2.0
 base_model: facebook/wav2vec2-large-xlsr-53
 tags:
 - generated_from_trainer
+metrics:
+- wer
 model-index:
 - name: wav2vec2-common_voice-en-demo-1
   results: []
@@ -14,6 +16,9 @@ should probably proofread and complete it, then remove this comment. -->
 # wav2vec2-common_voice-en-demo-1
 
 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.8249
+- Wer: 0.4713
 
 ## Model description
 
@@ -46,6 +51,14 @@ The following hyperparameters were used during training:
 
 ### Training results
 
+| Training Loss | Epoch  | Step | Validation Loss | Wer    |
+|:-------------:|:------:|:----:|:---------------:|:------:|
+| 3.4837        | 0.4566 | 100  | 3.7538          | 1.0    |
+| 2.9754        | 0.9132 | 200  | 3.0914          | 1.0    |
+| 2.8257        | 1.3699 | 300  | 2.8550          | 1.0    |
+| 0.7154        | 1.8265 | 400  | 1.0927          | 0.6141 |
+| 0.5289        | 2.2831 | 500  | 0.9329          | 0.5331 |
+| 0.4774        | 2.7397 | 600  | 0.8249          | 0.4713 |
 
 
 ### Framework versions
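
(The Wer column above is word error rate on the held-out set, as computed by the `evaluate` library's `wer` metric. A minimal sketch of scoring this checkpoint follows; the Hub repo id `anushaporwal/wav2vec2-common_voice-en-demo-1` and the 16 kHz input waveform are assumptions, not stated in this commit.)

```python
# Minimal sketch: load this checkpoint and compute WER for one utterance.
# The repo id and the 16 kHz mono waveform are assumptions; adjust both
# for your setup.
import torch
import evaluate
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "anushaporwal/wav2vec2-common_voice-en-demo-1"  # assumed Hub id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

def transcribe(waveform, sampling_rate=16_000):
    # Wav2Vec2 expects raw speech at the sampling rate it was trained on.
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)       # greedy CTC decoding
    return processor.batch_decode(ids)[0]

# WER = (substitutions + deletions + insertions) / reference word count;
# the 0.4713 above is this quantity over the whole evaluation set.
wer_metric = evaluate.load("wer")
# wer = wer_metric.compute(predictions=[transcribe(wav)], references=["..."])
```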
all_results.json CHANGED
@@ -1,13 +1,13 @@
 {
-    "epoch": 2.9523809523809526,
+    "epoch": 3.0,
     "eval_loss": 3.8372464179992676,
     "eval_runtime": 14.5961,
     "eval_samples_per_second": 17.059,
     "eval_steps_per_second": 2.192,
     "eval_wer": 1.0,
-    "total_flos": 5.949068894908372e+17,
-    "train_loss": 10.035515139179845,
-    "train_runtime": 219.516,
-    "train_samples_per_second": 13.666,
-    "train_steps_per_second": 0.424
+    "total_flos": 4.0302800820652447e+18,
+    "train_loss": 2.8814508167394584,
+    "train_runtime": 1313.5019,
+    "train_samples_per_second": 15.988,
+    "train_steps_per_second": 0.5
 }
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
-    "epoch": 2.9523809523809526,
-    "total_flos": 5.949068894908372e+17,
-    "train_loss": 10.035515139179845,
-    "train_runtime": 219.516,
-    "train_samples_per_second": 13.666,
-    "train_steps_per_second": 0.424
+    "epoch": 3.0,
+    "total_flos": 4.0302800820652447e+18,
+    "train_loss": 2.8814508167394584,
+    "train_runtime": 1313.5019,
+    "train_samples_per_second": 15.988,
+    "train_steps_per_second": 0.5
 }
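
(The `train_results.json` and `all_results.json` updates above are what `Trainer.save_metrics` in transformers emits: `save_metrics("train", ...)` writes `train_results.json` and merges the same keys into `all_results.json`, while the untouched `eval_*` keys come from a separate `save_metrics("eval", ...)` call. A minimal sketch of that pattern; constructing the Trainer itself is omitted, since the training script is not part of this commit.)

```python
# Minimal sketch of the Trainer calls that produce these JSON files.
# Building the Trainer (model, datasets, data collator, compute_metrics)
# is assumed to follow the usual wav2vec2 CTC fine-tuning recipe.
from transformers import Trainer

def train_and_save_metrics(trainer: Trainer) -> None:
    train_result = trainer.train()
    metrics = train_result.metrics              # epoch, total_flos, train_loss,
                                                # train_runtime, *_per_second, ...
    trainer.save_metrics("train", metrics)      # -> train_results.json, and the
                                                #    same keys into all_results.json
    eval_metrics = trainer.evaluate()           # eval_loss, eval_runtime, ... plus
                                                # eval_wer if compute_metrics reports WER
    trainer.save_metrics("eval", eval_metrics)  # -> eval_results.json + all_results.json
```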