Whisper Small Pashto

This model is a fine-tuned version of openai/whisper-small on the ps_af (Pashto) configuration of the google/fleurs dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2273
  • WER: 56.6510

Model description

This is the openai/whisper-small checkpoint, a multilingual encoder-decoder speech recognition model, fine-tuned for Pashto automatic speech recognition on FLEURS data.

Intended uses & limitations

The model is intended for transcribing Pashto speech. With an evaluation WER of about 56.7%, transcripts will likely need substantial manual correction, and performance on audio that differs from FLEURS-style read speech has not been evaluated.
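
As a quick way to try the model, here is a minimal inference sketch using the transformers pipeline API. The checkpoint id ihanif/whisper-small-pashto-dropout is the one under which this card is published; adjust it if you host the weights elsewhere.

```python
# Minimal inference sketch. The checkpoint id is the one shown on this card's
# page; change it if the model is hosted under a different name.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ihanif/whisper-small-pashto-dropout",
    device=0 if torch.cuda.is_available() else -1,
)

# The pipeline accepts a path to an audio file (16 kHz mono works best for Whisper).
print(asr("pashto_sample.wav")["text"])  # hypothetical input file
```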

Training and evaluation data

Training and evaluation used the ps_af (Pashto) configuration of the google/fleurs dataset, a multilingual read-speech corpus; the loss and WER above are measured on its evaluation split.
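
For reference, a minimal sketch of loading this data with the datasets library (version 2.7.1 is listed under framework versions below):

```python
# Sketch: loading the FLEURS Pashto configuration named above.
from datasets import load_dataset

fleurs_ps = load_dataset("google/fleurs", "ps_af")
print(fleurs_ps)                                # DatasetDict with train/validation/test splits
print(fleurs_ps["train"][0]["transcription"])   # one ground-truth transcript
```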

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-07
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 800
  • mixed_precision_training: Native AMP
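
These settings map onto transformers' Seq2SeqTrainingArguments roughly as sketched below. This is a reconstruction from the listed values, not the original training script, and output_dir is an assumption.

```python
# Hypothetical reconstruction of the training configuration from the
# hyperparameters listed above; the actual training script is not part of
# this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-pashto",  # assumed; not stated in the card
    learning_rate=3e-7,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=800,
    fp16=True,                          # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=100,                     # matches the 100-step eval cadence below
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the optimizer defaults.
)
```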

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|---------------|--------|------|-----------------|---------|
| 2.1183        | 3.7    | 100  | 1.3170          | 76.9522 |
| 0.8565        | 7.41   | 200  | 0.9367          | 61.9930 |
| 0.2246        | 11.11  | 300  | 0.9642          | 58.8302 |
| 0.054         | 14.81  | 400  | 1.0876          | 57.9903 |
| 0.0159        | 18.52  | 500  | 1.1798          | 57.8768 |
| 0.0045        | 22.22  | 600  | 1.2309          | 56.6510 |
| 0.0026        | 100.0  | 700  | 1.2581          | 56.8478 |
| 0.0023        | 114.29 | 800  | 1.2710          | 56.7570 |
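
The WER column reports word error rate in percent. One common way to compute it is with the evaluate library; the sketch below assumes predictions and references are plain transcript strings (the card does not state how its numbers were produced).

```python
# Sketch: computing WER in percent with the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["a model transcript"]       # hypothetical model outputs
references = ["the reference transcript"]  # hypothetical ground truth
print(100 * wer_metric.compute(predictions=predictions, references=references))
```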

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.13.1+cu117
  • Datasets 2.7.1.dev0
  • Tokenizers 0.13.2