golesheed committed
Commit 79b69be
1 Parent(s): 33dd4f4

End of training

README.md CHANGED
@@ -1,199 +1,73 @@
  ---
  library_name: transformers
- tags: []
+ language:
+ - nl
+ license: apache-2.0
+ base_model: openai/whisper-large-v2
+ tags:
+ - generated_from_trainer
+ metrics:
+ - wer
+ model-index:
+ - name: Whisper Large V2
+   results: []
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

+ # Whisper Large V2

+ This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2953
+ - Wer: 11.3276

+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed
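
Since the card's usage section is still a stub, here is a minimal inference sketch using the 🤗 Transformers ASR pipeline. The repository id and audio path are placeholders, and the chunking and device settings are assumptions, not details taken from this card:

```python
# Minimal sketch, not from the card: Dutch transcription with this checkpoint.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-namespace/whisper-large-v2-nl",  # placeholder: use this model's actual Hub id
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# chunk_length_s enables long-form audio; language/task match the Dutch
# fine-tuning and the shipped generation config.
result = asr(
    "sample.wav",  # placeholder audio path (16 kHz mono works best)
    chunk_length_s=30,
    generate_kwargs={"language": "dutch", "task": "transcribe"},
)
print(result["text"])
```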

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure

+ ### Training hyperparameters

+ The following hyperparameters were used during training (see the sketch after this list):
+ - learning_rate: 3e-05
+ - train_batch_size: 12
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 20
+ - num_epochs: 5
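
These values map onto 🤗 `Seq2SeqTrainingArguments` roughly as below. This is a hedged reconstruction, not the author's script; `output_dir` and the evaluation cadence (every 15 steps, inferred from the results table) are assumptions:

```python
# Hedged reconstruction of the hyperparameters listed above; output_dir and
# the eval cadence are assumptions, not stated in the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-nl",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=20,
    num_train_epochs=5,
    eval_strategy="steps",        # inferred: the results table evaluates every 15 steps
    eval_steps=15,
    predict_with_generate=True,   # needed so WER can be computed from generated text
)
```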

+ ### Training results

+ | Training Loss | Epoch  | Step | Validation Loss | Wer     |
+ |:-------------:|:------:|:----:|:---------------:|:-------:|
+ | 0.5452        | 0.4839 | 15   | 0.3714          | 23.2724 |
+ | 0.2911        | 0.9677 | 30   | 0.2866          | 18.6494 |
+ | 0.1304        | 1.4516 | 45   | 0.2713          | 13.6270 |
+ | 0.1196        | 1.9355 | 60   | 0.2595          | 12.7436 |
+ | 0.0595        | 2.4194 | 75   | 0.2615          | 11.8964 |
+ | 0.043         | 2.9032 | 90   | 0.2700          | 13.0098 |
+ | 0.0229        | 3.3871 | 105  | 0.2854          | 15.4786 |
+ | 0.0176        | 3.8710 | 120  | 0.2747          | 12.9856 |
+ | 0.0101        | 4.3548 | 135  | 0.2882          | 11.1340 |
+ | 0.0069        | 4.8387 | 150  | 0.2953          | 11.3276 |

+ ### Framework versions

+ - Transformers 4.45.0.dev0
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,227 @@
+ {
+   "alignment_heads": [
+     [10, 12], [13, 17], [16, 11], [16, 12], [16, 13], [17, 15],
+     [17, 16], [18, 4], [18, 11], [18, 19], [19, 11], [21, 2],
+     [21, 3], [22, 3], [22, 9], [22, 12], [23, 5], [23, 7],
+     [23, 13], [25, 5], [26, 1], [26, 12], [27, 15]
+   ],
+   "begin_suppress_tokens": [220, 50257],
+   "bos_token_id": 50257,
+   "decoder_start_token_id": 50258,
+   "eos_token_id": 50257,
+   "forced_decoder_ids": [[1, null], [2, 50359]],
+   "is_multilingual": true,
+   "lang_to_id": {
+     "<|af|>": 50327, "<|am|>": 50334, "<|ar|>": 50272, "<|as|>": 50350,
+     "<|az|>": 50304, "<|ba|>": 50355, "<|be|>": 50330, "<|bg|>": 50292,
+     "<|bn|>": 50302, "<|bo|>": 50347, "<|br|>": 50309, "<|bs|>": 50315,
+     "<|ca|>": 50270, "<|cs|>": 50283, "<|cy|>": 50297, "<|da|>": 50285,
+     "<|de|>": 50261, "<|el|>": 50281, "<|en|>": 50259, "<|es|>": 50262,
+     "<|et|>": 50307, "<|eu|>": 50310, "<|fa|>": 50300, "<|fi|>": 50277,
+     "<|fo|>": 50338, "<|fr|>": 50265, "<|gl|>": 50319, "<|gu|>": 50333,
+     "<|haw|>": 50352, "<|ha|>": 50354, "<|he|>": 50279, "<|hi|>": 50276,
+     "<|hr|>": 50291, "<|ht|>": 50339, "<|hu|>": 50286, "<|hy|>": 50312,
+     "<|id|>": 50275, "<|is|>": 50311, "<|it|>": 50274, "<|ja|>": 50266,
+     "<|jw|>": 50356, "<|ka|>": 50329, "<|kk|>": 50316, "<|km|>": 50323,
+     "<|kn|>": 50306, "<|ko|>": 50264, "<|la|>": 50294, "<|lb|>": 50345,
+     "<|ln|>": 50353, "<|lo|>": 50336, "<|lt|>": 50293, "<|lv|>": 50301,
+     "<|mg|>": 50349, "<|mi|>": 50295, "<|mk|>": 50308, "<|ml|>": 50296,
+     "<|mn|>": 50314, "<|mr|>": 50320, "<|ms|>": 50282, "<|mt|>": 50343,
+     "<|my|>": 50346, "<|ne|>": 50313, "<|nl|>": 50271, "<|nn|>": 50342,
+     "<|no|>": 50288, "<|oc|>": 50328, "<|pa|>": 50321, "<|pl|>": 50269,
+     "<|ps|>": 50340, "<|pt|>": 50267, "<|ro|>": 50284, "<|ru|>": 50263,
+     "<|sa|>": 50344, "<|sd|>": 50332, "<|si|>": 50322, "<|sk|>": 50298,
+     "<|sl|>": 50305, "<|sn|>": 50324, "<|so|>": 50326, "<|sq|>": 50317,
+     "<|sr|>": 50303, "<|su|>": 50357, "<|sv|>": 50273, "<|sw|>": 50318,
+     "<|ta|>": 50287, "<|te|>": 50299, "<|tg|>": 50331, "<|th|>": 50289,
+     "<|tk|>": 50341, "<|tl|>": 50348, "<|tr|>": 50268, "<|tt|>": 50351,
+     "<|uk|>": 50280, "<|ur|>": 50290, "<|uz|>": 50337, "<|vi|>": 50278,
+     "<|yi|>": 50335, "<|yo|>": 50325, "<|zh|>": 50260
+   },
+   "max_initial_timestamp_index": 50,
+   "max_length": 448,
+   "no_timestamps_token_id": 50363,
+   "pad_token_id": 50257,
+   "prev_sot_token_id": 50361,
+   "return_timestamps": false,
+   "suppress_tokens": [],
+   "task_to_id": {
+     "transcribe": 50359,
+     "translate": 50358
+   },
+   "transformers_version": "4.45.0.dev0"
+ }
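
For orientation: the `lang_to_id` and `task_to_id` maps above are what `generate()` consults when a language and task are requested. A small sketch of that in use, with a placeholder repo id and audio path:

```python
# Sketch only: shows how the generation config above drives decoding.
# The repo id and audio path are placeholders.
import torch
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "your-namespace/whisper-large-v2-nl"  # placeholder
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

audio, sr = librosa.load("sample.wav", sr=16000)  # Whisper expects 16 kHz input
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")

# language="nl" and task="transcribe" resolve to token ids 50271 (<|nl|>) and
# 50359 (transcribe) via lang_to_id / task_to_id in the config above.
with torch.no_grad():
    generated = model.generate(inputs.input_features, language="nl", task="transcribe")
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```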
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:caf36d50c36c192176c73d979f5166b34d9a726cba728f378273e0924487c8b0
+ oid sha256:789cbf59d995e30bfc72747a624ddf4c639508c6e6a6ff6a7187e3051fb3dc55
  size 4992706480
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f648ff24679a549a00e00ea43818d6e6c4a8bb1e5c0f03bfc9381a58ffb61f88
+ oid sha256:7a493f214315a09ff71434a340a3f7e9bf6665fb2d22dc394a05ebfed77ff0bb
  size 1180663192
runs/Sep23_09-00-50_gcn15.local.snellius.surf.nl/events.out.tfevents.1727074936.gcn15.local.snellius.surf.nl CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9a6a29712263c3d48872fd7b92586d0d9bec81c55ecf6873eb932b70a7a26c74
- size 11107
+ oid sha256:4efad8b4646c0b7aa3bb6c39b0e45372ba86178a7197f326cab993d8e8dacc04
+ size 11461