---
license: cc-by-sa-3.0
tags:
  - automatic-speech-recognition
  - NbAiLab/NPSC
  - 'no'
  - nn
  - nn-NO
datasets:
  - NbAiLab/NPSC
language:
  - nn-NO
model-index:
  - name: nb-wav2vec2-300m-nynorsk
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: NPSC
          type: NbAiLab/NPSC
          args: 16K_mp3_nynorsk
        metrics:
          - name: Test (Nynorsk) WER
            type: wer
            value: 0.1222
          - name: Test (Nynorsk) CER
            type: cer
            value: 0.0419
---

# Norwegian Wav2Vec2 Model - 300M - VoxRex - Nynorsk

This model is fine-tuned on top of the VoxRex feature extractor from the National Library of Sweden. The fine-tuned model achieves the following results on the test set with a 5-gram KenLM. The numbers in parentheses are the results without the language model:

- WER: 0.1222 (0.1537)
- CER: 0.0419 (0.0468)
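
If you just want to try the model, transcription through the 🤗 `pipeline` API is usually enough. The snippet below is a minimal sketch rather than part of our training code; it assumes `transformers` and `torch` are installed (plus `ffmpeg` if you feed it compressed audio files) and that the recording is ordinary 16 kHz speech:

```python
# Minimal sketch: transcribe a local audio file with this model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/nb-wav2vec2-300m-nynorsk",
)

# Replace "speech.wav" with the path to your own recording.
result = asr("speech.wav")
print(result["text"])
```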

## Model description

This is one of several Wav2Vec2 models our team created during the 🤗-hosted Robust Speech Event. This is the complete list of our models and their final scores:

| Model | Final WER |
|:------|----------:|
| NbAiLab/nb-wav2vec2-1b-bokmaal | 6.33 |
| NbAiLab/nb-wav2vec2-300m-bokmaal | 7.03 |
| NbAiLab/nb-wav2vec2-300m-nynorsk (this model) | 12.22 |

## Dataset

In parallel with the event, the team also converted the Norwegian Parliamentary Speech Corpus (NPSC) to NbAiLab/NPSC in the 🤗 Dataset format and used that as the main source for training.
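
If you want to inspect the data yourself, the corpus can be loaded directly with the 🤗 `datasets` library. The snippet below is only an illustrative sketch; `16K_mp3_nynorsk` is the config used for this model, streaming simply avoids downloading the full corpus up front, and you may need to be logged in to the Hub (the training run below passes `--use_auth_token`):

```python
# Minimal sketch: stream a few NPSC examples instead of downloading everything.
from datasets import load_dataset

npsc = load_dataset(
    "NbAiLab/NPSC",
    "16K_mp3_nynorsk",
    split="train",
    streaming=True,
)

for example in npsc.take(3):
    # Each example carries the audio array plus the transcription in "text".
    print(example["text"])
```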

## Code

We release all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. Fine-tuning these models is not very compute-demanding: after following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.

## Team

The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.

## Training procedure

To reproduce these results, we strongly recommend that you follow the instructions from 🤗 to train a simple Swedish model.

When you have verified that you are able to do this, create a fresh repo. You can then start by copying the files `run.sh` and `run_speech_recognition_ctc.py` from our repo. Running these will create all the other necessary files and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR model. Good luck!

## Language Model

As you can see from the results above, adding even a simple 5-gram language model improves the results. 🤗 has provided another very nice blog post about how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the Norwegian Colossal Corpus. You can also skip some of the steps in the guide and copy the 5-gram model from this repo.
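
If you want to use the 5-gram model from this repo directly in Python, the LM-aware processor in `transformers` handles the decoding. The snippet below is a sketch under the assumption that the repo contains the LM files in the layout that `Wav2Vec2ProcessorWithLM` expects; it also requires `pyctcdecode` and `kenlm` in addition to `transformers`:

```python
# Minimal sketch: decode with the 5-gram KenLM attached to this repo.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_id = "NbAiLab/nb-wav2vec2-300m-nynorsk"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

def transcribe(speech):
    # `speech` is a 16 kHz mono float array, e.g. loaded with librosa.
    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # The LM-aware batch_decode works on raw logits, not on argmax token ids.
    return processor.batch_decode(logits.numpy()).text[0]
```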

## Parameters

The final model was run using these parameters:

```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="KBLab/wav2vec2-large-voxrex" 
--dataset_config_name="16K_mp3_nynorsk" 
--output_dir="./" 
--overwrite_output_dir 
--num_train_epochs="80" 
--per_device_train_batch_size="16" 
--per_device_eval_batch_size="16" 
--gradient_accumulation_steps="2" 
--learning_rate="1e-4" 
--warmup_steps="2000" 
--length_column_name="input_length" 
--evaluation_strategy="steps" 
--text_column_name="text" 
--save_steps="500" 
--eval_steps="500" 
--logging_steps="100" 
--layerdrop="0.041" 
--attention_dropout="0.094" 
--activation_dropout="0.055" 
--hidden_dropout="0.047" 
--save_total_limit="3" 
--freeze_feature_encoder 
--feat_proj_dropout="0.04" 
--mask_time_prob="0.082" 
--mask_time_length="10" 
--mask_feature_prob="0.25" 
--mask_feature_length="64" 
--gradient_checkpointing 
--min_duration_in_seconds="0.5" 
--max_duration_in_seconds="30.0" 
--use_auth_token 
--seed="42" 
--fp16 
--group_by_length 
--do_train --do_eval 
--push_to_hub 
--preprocessing_num_workers="32"
```

With these settings, training might take 3-4 days on an average GPU. However, you should be able to get a decent model and faster results by tweaking these parameters:

| Parameter | Comment |
|:----------|:--------|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
| gradient_accumulation_steps | Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate | Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs | Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs |