odelz committed on
Commit b50b481
1 Parent(s): 1a4d183

End of training

README.md ADDED
@@ -0,0 +1,107 @@
---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: hindi_fb1mms_timebalancedreg
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: audiofolder
      type: audiofolder
      config: default
      split: train
      args: default
    metrics:
    - name: Wer
      type: wer
      value: 0.4259275985404097
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hindi_fb1mms_timebalancedreg

This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7182
- Wer: 0.4259
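
The Wer figure is a word error rate (0 = perfect, 1 ≈ every word wrong). As a rough, hedged illustration of how such a score is computed, the snippet below uses the `evaluate` library; the prediction/reference strings are placeholders, not samples from this model's evaluation set.

```python
# Illustrative only: computing a word error rate (WER) with the `evaluate` library.
# The prediction/reference strings below are made-up placeholders.
import evaluate

wer_metric = evaluate.load("wer")
score = wer_metric.compute(
    predictions=["मैं घर जा रहा हूँ"],
    references=["मैं घर जा रही हूँ"],
)
print(score)  # 0.2 -> one substituted word out of five
```
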
## Model description

More information needed

## Intended uses & limitations

More information needed
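
Since the sections above are still placeholders, the following is only a minimal inference sketch. The hub id `odelz/hindi_fb1mms_timebalancedreg`, the audio file name, and the use of `Wav2Vec2ForCTC` with `AutoProcessor` are assumptions based on the base model `facebook/mms-1b-all`, not details confirmed by this card.

```python
# Hedged sketch: transcribing a Hindi recording with this checkpoint.
# "odelz/hindi_fb1mms_timebalancedreg" and "sample.wav" are assumed placeholders.
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "odelz/hindi_fb1mms_timebalancedreg"  # assumed hub id; use a local path if needed
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# MMS checkpoints expect mono 16 kHz audio.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```
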
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
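
As a rough sketch, the hyperparameters above map onto a `transformers.TrainingArguments` configuration as shown below; the `output_dir` is a placeholder, the listed Adam betas/epsilon match the optimizer defaults (so they are not set explicitly), and this is not claimed to be the exact training script used for this model.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hindi_fb1mms_timebalancedreg",  # placeholder output directory
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```
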
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 4.087 | 1.0191 | 400 | 3.5884 | 0.9998 |
| 3.935 | 2.0382 | 800 | 3.4190 | 0.9959 |
| 3.3712 | 3.0573 | 1200 | 3.3003 | 0.9709 |
| 3.2027 | 4.0764 | 1600 | 2.8687 | 0.9861 |
| 1.4667 | 5.0955 | 2000 | 0.6547 | 0.4129 |
| 1.2468 | 6.1146 | 2400 | 0.6031 | 0.3955 |
| 1.2401 | 7.1338 | 2800 | 0.6334 | 0.4172 |
| 1.2952 | 8.1529 | 3200 | 0.6857 | 0.4238 |
| 1.2466 | 9.1720 | 3600 | 0.7279 | 0.4361 |
| 1.2094 | 10.1911 | 4000 | 0.6768 | 0.4140 |
| 1.1764 | 11.2102 | 4400 | 0.6735 | 0.4234 |
| 1.1491 | 12.2293 | 4800 | 0.7047 | 0.4334 |
| 1.1504 | 13.2484 | 5200 | 0.6704 | 0.4215 |
| 1.1656 | 14.2675 | 5600 | 0.6684 | 0.4207 |
| 1.1666 | 15.2866 | 6000 | 0.7367 | 0.4339 |
| 1.1512 | 16.3057 | 6400 | 0.7384 | 0.4386 |
| 1.1646 | 17.3248 | 6800 | 0.7087 | 0.4251 |
| 1.1407 | 18.3439 | 7200 | 0.7192 | 0.4329 |
| 1.1207 | 19.3631 | 7600 | 0.7141 | 0.4236 |
| 1.1145 | 20.3822 | 8000 | 0.7503 | 0.4374 |
| 1.1138 | 21.4013 | 8400 | 0.7235 | 0.4278 |
| 1.1091 | 22.4204 | 8800 | 0.7468 | 0.4404 |
| 1.1255 | 23.4395 | 9200 | 0.7177 | 0.4264 |
| 1.0959 | 24.4586 | 9600 | 0.7050 | 0.4191 |
| 1.106 | 25.4777 | 10000 | 0.7420 | 0.4337 |
| 1.0949 | 26.4968 | 10400 | 0.7063 | 0.4223 |
| 1.1142 | 27.5159 | 10800 | 0.7170 | 0.4257 |
| 1.1076 | 28.5350 | 11200 | 0.7223 | 0.4267 |
| 1.1028 | 29.5541 | 11600 | 0.7182 | 0.4259 |


### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
adapter.hin.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:72396edd9bcddac21dc85277a2850e7d685286b38667c4e3d3e1838dd8541233
+ size 8947144
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a96d1db0a040730cb4d090dc927f4855e73e76f6d397a4b8600a340dce7f3b9d
+ oid sha256:befc44d2c976d921caa72f04f4272fdeed37557eec294f7b9bb5ab7600d7ea26
  size 3859039520
runs/Jun29_20-27-31_n-62-18-12/events.out.tfevents.1719686099.n-62-18-12.12894.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a627d01c9e5de944f5fcb4a878f1c636e17a29a8e232f7ed6e57177f36e1d65d
- size 39933
+ oid sha256:bb0fe254bf828e6a2f0b5690bd9ef61856d3f9d7280a6326a800ceb3f80607f4
+ size 40498