Sercan committed on
Commit
de67b7b
1 Parent(s): 8cadefb

update model card README.md

Files changed (1): README.md (+118 -0)
README.md ADDED
@@ -0,0 +1,118 @@
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-tr
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_11_0
      type: common_voice_11_0
      config: tr
      split: test
      args: tr
    metrics:
    - name: Wer
      type: wer
      value: 0.28665152662124654
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xls-r-300m-tr

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3184
- Wer: 0.2867
- Cer: 0.0681
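
The snippet below is a minimal usage sketch for transcribing Turkish speech with this checkpoint via the `transformers` ASR pipeline. The Hub repository id is an assumption inferred from the model name on this card; substitute the actual id or a local checkpoint path.

```python
# Minimal inference sketch; assumes a 16 kHz mono audio file.
from transformers import pipeline

# Hypothetical Hub id inferred from the card name; replace with the real
# repository id or a local path to the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="Sercan/wav2vec2-xls-r-300m-tr",
)

# The pipeline accepts a file path or a raw waveform (numpy array at 16 kHz).
result = asr("sample_tr.wav")
print(result["text"])
```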

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto `TrainingArguments` follows the list):
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
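
As a rough reconstruction (not the exact training script), these settings correspond to a `transformers.TrainingArguments` configuration like the one below; `output_dir` and the `fp16` flag are assumptions implied by the model name and the Native AMP entry.

```python
from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters; the Adam betas/epsilon given
# in the card are the TrainingArguments defaults, made explicit here.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-tr",  # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```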

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    | Cer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| No log        | 0.71  | 400   | 1.7290          | 0.9804 | 0.4797 |
| 4.5435        | 1.42  | 800   | 0.4810          | 0.5774 | 0.1450 |
| 0.523         | 2.12  | 1200  | 0.3859          | 0.4812 | 0.1156 |
| 0.3449        | 2.83  | 1600  | 0.3492          | 0.4498 | 0.1095 |
| 0.2814        | 3.54  | 2000  | 0.3660          | 0.4466 | 0.1099 |
| 0.2814        | 4.25  | 2400  | 0.3766          | 0.4235 | 0.1043 |
| 0.2463        | 4.96  | 2800  | 0.3416          | 0.4119 | 0.1010 |
| 0.2296        | 5.66  | 3200  | 0.3322          | 0.4013 | 0.0979 |
| 0.2143        | 6.37  | 3600  | 0.3370          | 0.3956 | 0.0972 |
| 0.1955        | 7.08  | 4000  | 0.3401          | 0.4033 | 0.0998 |
| 0.1955        | 7.79  | 4400  | 0.3375          | 0.3889 | 0.0962 |
| 0.1845        | 8.5   | 4800  | 0.3455          | 0.3752 | 0.0923 |
| 0.1752        | 9.2   | 5200  | 0.3336          | 0.3718 | 0.0925 |
| 0.1705        | 9.91  | 5600  | 0.3145          | 0.3653 | 0.0892 |
| 0.1585        | 10.62 | 6000  | 0.3410          | 0.3737 | 0.0922 |
| 0.1585        | 11.33 | 6400  | 0.3296          | 0.3664 | 0.0899 |
| 0.1474        | 12.04 | 6800  | 0.3492          | 0.3590 | 0.0899 |
| 0.1485        | 12.74 | 7200  | 0.3176          | 0.3506 | 0.0867 |
| 0.137         | 13.45 | 7600  | 0.3532          | 0.3600 | 0.0890 |
| 0.1291        | 14.16 | 8000  | 0.3318          | 0.3571 | 0.0873 |
| 0.1291        | 14.87 | 8400  | 0.3353          | 0.3548 | 0.0883 |
| 0.1274        | 15.58 | 8800  | 0.3235          | 0.3396 | 0.0823 |
| 0.1198        | 16.28 | 9200  | 0.3259          | 0.3389 | 0.0832 |
| 0.1164        | 16.99 | 9600  | 0.3263          | 0.3411 | 0.0844 |
| 0.1119        | 17.7  | 10000 | 0.3254          | 0.3377 | 0.0824 |
| 0.1119        | 18.41 | 10400 | 0.3243          | 0.3331 | 0.0812 |
| 0.1054        | 19.12 | 10800 | 0.3223          | 0.3239 | 0.0790 |
| 0.1017        | 19.82 | 11200 | 0.3054          | 0.3190 | 0.0774 |
| 0.0964        | 20.53 | 11600 | 0.3278          | 0.3237 | 0.0785 |
| 0.0903        | 21.24 | 12000 | 0.3167          | 0.3177 | 0.0774 |
| 0.0903        | 21.95 | 12400 | 0.3331          | 0.3124 | 0.0766 |
| 0.0886        | 22.65 | 12800 | 0.3099          | 0.3089 | 0.0745 |
| 0.0836        | 23.36 | 13200 | 0.3171          | 0.3048 | 0.0731 |
| 0.0796        | 24.07 | 13600 | 0.3158          | 0.3041 | 0.0733 |
| 0.0739        | 24.78 | 14000 | 0.3203          | 0.3003 | 0.0721 |
| 0.0739        | 25.49 | 14400 | 0.3138          | 0.2974 | 0.0713 |
| 0.0742        | 26.19 | 14800 | 0.3197          | 0.2959 | 0.0711 |
| 0.07          | 26.9  | 15200 | 0.3232          | 0.2952 | 0.0703 |
| 0.0654        | 27.61 | 15600 | 0.3243          | 0.2939 | 0.0701 |
| 0.0631        | 28.32 | 16000 | 0.3213          | 0.2876 | 0.0688 |
| 0.0631        | 29.03 | 16400 | 0.3151          | 0.2880 | 0.0685 |
| 0.0607        | 29.73 | 16800 | 0.3184          | 0.2867 | 0.0681 |
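
To compare your own transcriptions against the reported numbers, WER and CER can be computed with the Hugging Face `evaluate` package; the strings below are placeholders, not data from this card.

```python
import evaluate

# Load the word- and character-error-rate metrics (both wrap jiwer).
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["merhaba dünya"]  # placeholder model outputs
references = ["merhaba dünya"]   # placeholder ground-truth transcripts

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```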

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2