---
language: fo
datasets:
  - ravnursson
tags:
  - audio
  - automatic-speech-recognition
  - faroese
  - xlrs-53-faroese
  - ravnur-project
  - faroe-islands
license: cc-by-4.0
widget: null
model-index:
  - name: wav2vec2-large-xlsr-53-faroese-100h
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Ravnursson (Test)
          type: carlosdanielhernandezmena/ravnursson_asr
          split: test
          args:
            language: fo
        metrics:
          - name: Test WER
            type: wer
            value: 7.6
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Ravnursson (Dev)
          type: carlosdanielhernandezmena/ravnursson_asr
          split: validation
          args:
            language: fo
        metrics:
          - name: Dev WER
            type: wer
            value: 5.5
---

# wav2vec2-large-xlsr-53-faroese-100h

The "wav2vec2-large-xlsr-53-faroese-100h" is an acoustic model suitable for Automatic Speech Recognition in Faroese. It is the result of fine-tuning the model "facebook/wav2vec2-large-xlsr-53" with 100 hours of Faroese data released by the Ravnur Project (https://maltokni.fo/en/) from the Faroe Islands.

The specific dataset used to create the model is called "Ravnursson Faroese Speech and Transcripts" and it is available at http://hdl.handle.net/20.500.12537/276.

The fine-tuning process was performed during November 2022 on the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
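For quick transcription, a minimal usage sketch (not part of the original card) with the `transformers` pipeline API; the path `audio.wav` is a hypothetical local speech recording:

```python
# Minimal transcription sketch; "audio.wav" is a hypothetical local file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="carlosdanielhernandezmena/wav2vec2-large-xlsr-53-faroese-100h",
)
print(asr("audio.wav")["text"])
```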

## Evaluation

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Load the processor and model.
MODEL_NAME = "carlosdanielhernandezmena/wav2vec2-large-xlsr-53-faroese-100h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)

# Load the dataset
from datasets import load_dataset, load_metric, Audio
ds = load_dataset("carlosdanielhernandezmena/ravnursson_asr", split="test")

# Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Process the dataset
def prepare_dataset(batch):
    audio = batch["audio"]
    # Batched output is "un-batched" to ensure mapping is correct
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    with processor.as_target_processor():
        batch["labels"] = processor(batch["normalized_text"]).input_ids
    return batch

ds = ds.map(prepare_dataset, remove_columns=ds.column_names, num_proc=1)

# Define the evaluation metric (load_metric ships with datasets < 3.0;
# newer setups use the separate `evaluate` library instead)
import numpy as np
wer_metric = load_metric("wer")

# Helper for Trainer-based evaluation; not called in the loop below.
def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    # We do not want to group tokens when computing the metrics
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    wer = wer_metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}

# Do the evaluation (with batch_size=1)
model = model.to(torch.device("cuda"))

def map_to_result(batch):
    with torch.no_grad():
        input_values = torch.tensor(batch["input_values"], device="cuda").unsqueeze(0)
        logits = model(input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_str"] = processor.batch_decode(pred_ids)[0]
    batch["sentence"] = processor.decode(batch["labels"], group_tokens=False)
    return batch

results = ds.map(map_to_result, remove_columns=ds.column_names)

# Compute the overall WER now.
print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["sentence"])))
```

**Test Result**: 0.076
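The development-set WER reported in the metadata (5.5%) should be reproducible the same way by passing `split='validation'` instead of `split='test'` to `load_dataset`.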

## BibTeX entry and citation info

When publishing results based on this model, please refer to:

```bibtex
@misc{mena2022xlrs53faroese,
      title={Acoustic Model in Faroese: wav2vec2-large-xlsr-53-faroese-100h.},
      author={Hernandez Mena, Carlos Daniel},
      year={2022},
      url={https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-faroese-100h},
}
```

## Acknowledgements

We want to thank Jón Guðnason, head of the Language and Voice Lab, for providing the computational power to make this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.

Special thanks to Annika Simonsen and to The Ravnur Project for making their "Basic Language Resource Kit" (BLARK 1.0) publicly available through the research paper "Creating a Basic Language Resource Kit for Faroese" (https://aclanthology.org/2022.lrec-1.495.pdf).