---
language:
  - sw
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
  - asr
  - sst
  - swahili
datasets:
  - mozilla-foundation/common_voice_13_0
model-index:
  - name: Whisper Tiny Sw - Skier8402
    results: []
library_name: transformers
metrics:
  - wer
---

# Whisper Tiny Sw - Skier8402

This model is a fine-tuned version of openai/whisper-tiny trained only on the Swahili (sw) subset of the Common Voice 13.0 dataset.
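
For a quick check, the checkpoint can be loaded with the `transformers` ASR pipeline. This is a minimal sketch: the repo id `Skier8402/whisper-small-tiny` is assumed from this repository's name, and `audio.wav` is a placeholder path.

```python
from transformers import pipeline

# Minimal sketch: load this checkpoint with the ASR pipeline.
# The repo id is assumed from this repository's name; adjust if it differs.
asr = pipeline(
    "automatic-speech-recognition",
    model="Skier8402/whisper-small-tiny",
)

# "audio.wav" is a placeholder for any 16 kHz Swahili recording.
print(asr("audio.wav")["text"])
```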

## Model description

More information needed.

## Intended uses & limitations

The model was trained without sufficient noise augmentation of the training data. Do not use this model in production. I recommend using a larger Whisper variant with more hyperparameter tuning, especially of the learning rate, momentum, and weight decay, and adjusting the batch size.
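
For future runs, noise augmentation could be added before feature extraction. A minimal, illustrative NumPy sketch (not what was used for this model) mixes white noise at a target signal-to-noise ratio:

```python
import numpy as np

def add_white_noise(waveform: np.ndarray, snr_db: float = 20.0) -> np.ndarray:
    """Illustrative augmentation: mix Gaussian white noise at a target SNR (dB)."""
    signal_power = np.mean(waveform ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
    return waveform + noise
```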

## Training and evaluation data

I followed the tutorial here; only minimal edits were made to its code.
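
For reference, a minimal sketch of loading the Swahili subset of Common Voice 13.0 with `datasets`; the split names shown are illustrative rather than taken from this card:

```python
from datasets import load_dataset, Audio

# Load the Swahili ("sw") subset of Common Voice 13.0.
# Split names are illustrative; the fine-tuning tutorial combines train and validation.
common_voice = load_dataset(
    "mozilla-foundation/common_voice_13_0", "sw", split="train+validation"
)
test = load_dataset("mozilla-foundation/common_voice_13_0", "sw", split="test")

# Whisper expects 16 kHz audio, so resample the audio column.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
```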

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
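
A hedged sketch of how these settings map onto `Seq2SeqTrainingArguments`; the output directory and any arguments not listed above are illustrative placeholders, not taken from this card:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
# output_dir is a hypothetical placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-sw",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer default optimizer.
)
```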

### Framework versions

- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1