carlosdanielhernandezmena committed on
Commit
8f79863
1 Parent(s): aa328b5

Adding information to the Readme file


This is the information for the model card that I wrote on my local machine. I hope I will not need to add more information or make more changes in the future.

Files changed (1)
  1. README.md +130 -0
README.md CHANGED
@@ -1,3 +1,133 @@
---
language: mt
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- maltese
- xlrs-53-maltese
- masri-project
- malta
- university-of-malta
license: cc-by-4.0
widget:
model-index:
- name: wav2vec2-large-xlsr-53-maltese-64h
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice mt
      type: common_voice
      split: test
      args:
        language: mt
    metrics:
    - name: Test WER
      type: wer
      value: 0.011
---

# wav2vec2-large-xlsr-53-maltese-64h

The "wav2vec2-large-xlsr-53-maltese-64h" is an acoustic model suitable for Automatic Speech Recognition in Maltese. It is the result of fine-tuning the model "facebook/wav2vec2-large-xlsr-53" with around 64 hours of Maltese data developed by the MASRI Project at the University of Malta between 2019 and 2021. Most of the data is available at the MASRI Project homepage https://www.um.edu.mt/projects/masri/.

The specific corpora used to fine-tune the model are:

- MASRI-HEADSET v2 (6h39m)
- MASRI-Farfield (9h37m)
- MASRI-Booths (2h27m)
- MASRI-MEP (1h17m)
- MASRI-COMVO (7h29m)
- MASRI-TUBE (13h17m)
- MASRI-MERLIN (25h18m) *Not available at the MASRI Project homepage

The fine-tuning process was performed during November 2022 on the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.

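For quick transcription of a single recording, the following minimal sketch (not taken from the original training or evaluation scripts) loads the model with `transformers` and decodes one file greedily; the path `audio.wav` is a placeholder for any mono WAV file, and `torchaudio` is assumed to be installed:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_NAME = "carlosdanielhernandezmena/wav2vec2-large-xlsr-53-maltese-64h"

# Load the processor (feature extractor + tokenizer) and the fine-tuned model.
processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)

# "audio.wav" is a placeholder path; the model expects 16 kHz mono audio.
waveform, sample_rate = torchaudio.load("audio.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# Run a forward pass and decode the most likely tokens (greedy CTC decoding).
inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```
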
# Evaluation

```python
import torch
from transformers import Wav2Vec2Processor
from transformers import Wav2Vec2ForCTC

# Load the processor and model.
MODEL_NAME = "carlosdanielhernandezmena/wav2vec2-large-xlsr-53-maltese-64h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)

# Load the dataset
from datasets import load_dataset, load_metric, Audio
ds = load_dataset("common_voice", "mt", split="test")

# Normalize the transcriptions
import re
chars_to_ignore_regex = '[\\,\\?\\.\\!\\\;\\:\\"\\“\\%\\‘\\”\\�\\)\\(\\*)]'
def remove_special_characters(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    return batch
ds = ds.map(remove_special_characters)

# Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Process the dataset
def prepare_dataset(batch):
    audio = batch["audio"]
    # Batched output is "un-batched" to ensure mapping is correct
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    with processor.as_target_processor():
        batch["labels"] = processor(batch["sentence"]).input_ids
    return batch
ds = ds.map(prepare_dataset, remove_columns=ds.column_names, num_proc=1)

# Define the evaluation metric
import numpy as np
wer_metric = load_metric("wer")
def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    # We do not want to group tokens when computing the metrics
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    wer = wer_metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}

# Do the evaluation (with batch_size=1)
model = model.to(torch.device("cuda"))
def map_to_result(batch):
    with torch.no_grad():
        input_values = torch.tensor(batch["input_values"], device="cuda").unsqueeze(0)
        logits = model(input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_str"] = processor.batch_decode(pred_ids)[0]
    batch["sentence"] = processor.decode(batch["labels"], group_tokens=False)
    return batch
results = ds.map(map_to_result, remove_columns=ds.column_names)

# Compute the overall WER now.
print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["sentence"])))

```
**Test Result**: 0.011
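
For reference, the WER reported above is the standard word-level edit-distance ratio computed by the `wer` metric:

$$\mathrm{WER} = \frac{S + D + I}{N}$$

where \\(S\\), \\(D\\), and \\(I\\) are the numbers of substituted, deleted, and inserted words with respect to the reference transcriptions, and \\(N\\) is the total number of words in the references.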

# BibTeX entry and citation info
*When publishing results based on these models please refer to:*
```bibtex
@misc{mena2022xlrs53maltese,
      title={Acoustic Model in Maltese: wav2vec2-large-xlsr-53-maltese-64h.},
      author={Hernandez Mena, Carlos Daniel},
      year={2022},
      url={https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-maltese-64h},
}
```

# Acknowledgements

The MASRI Project is funded by the University of Malta Research Fund Awards. We want to thank Merlin Publishers (Malta) for providing the audiobooks used to create the MASRI-MERLIN Corpus.

Special thanks to Jón Guðnason, head of the Language and Voice Lab, for providing the computational power that made this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.