whisper-small-ar

This model is a fine-tuned version of openai/whisper-small on an unknown dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the results):

  • Loss: 0.1581
  • WER Ortho: 27.8630
  • WER: 5.0804
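
As a usage illustration, the checkpoint can be loaded with the standard transformers speech-recognition pipeline. Note the assumptions: the audio path is a placeholder, and the target language is inferred from the "-ar" model suffix, since the training data is not documented.

```python
# Minimal inference sketch. "audio.wav" is a placeholder, and language="arabic"
# is an assumption based on the "-ar" suffix; the training data is undocumented.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sophiayk20/whisper-small-ar",
)

result = asr(
    "audio.wav",
    generate_kwargs={"language": "arabic", "task": "transcribe"},
)
print(result["text"])
```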

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding Seq2SeqTrainingArguments configuration follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
  • mixed_precision_training: Native AMP
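
These values map directly onto the transformers Seq2SeqTrainingArguments API. A minimal sketch, assuming the run used the standard Seq2SeqTrainer; output_dir is a hypothetical placeholder, and the 50-step evaluation cadence is inferred from the results table below:

```python
# Sketch of a Seq2SeqTrainingArguments configuration matching the
# hyperparameters above. output_dir is a hypothetical placeholder; the
# evaluation settings are inferred from the per-50-step results table.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-ar",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,
    fp16=True,                      # "Native AMP" mixed precision
    eval_strategy="steps",          # assumed; matches the 50-step rows below
    eval_steps=50,
    predict_with_generate=True,     # assumed; needed to report WER during eval
)
```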

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER Ortho | WER     |
|:--------------|:-------|:-----|:----------------|:----------|:--------|
| 0.7097        | 0.0842 | 50   | 0.5331          | 51.0035   | 11.0584 |
| 0.3752        | 0.1684 | 100  | 0.3281          | 42.8571   | 8.3150  |
| 0.221         | 0.2525 | 150  | 0.2201          | 36.9540   | 6.8078  |
| 0.1948        | 0.3367 | 200  | 0.1996          | 33.5891   | 6.3167  |
| 0.1947        | 0.4209 | 250  | 0.1833          | 31.1098   | 5.8425  |
| 0.1708        | 0.5051 | 300  | 0.1752          | 29.3388   | 5.2667  |
| 0.1667        | 0.5892 | 350  | 0.1680          | 28.5714   | 5.3514  |
| 0.1606        | 0.6734 | 400  | 0.1631          | 28.8076   | 5.1312  |
| 0.1525        | 0.7576 | 450  | 0.1600          | 28.6305   | 5.2159  |
| 0.1513        | 0.8418 | 500  | 0.1581          | 27.8630   | 5.0804  |
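
The two WER columns follow the common Whisper fine-tuning convention: orthographic WER is scored on the raw text, and WER after basic text normalization. A sketch of that computation with the evaluate library; the use of BasicTextNormalizer is an assumption based on common practice, not a documented detail of this run:

```python
# Sketch of how the two WER columns are conventionally computed; the
# normalization step (BasicTextNormalizer) is an assumption, not a
# documented detail of this run.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

predictions = ["This is a decoded model output"]   # placeholder data
references = ["this is a decoded model output."]   # placeholder data

# Orthographic WER: scored on the raw, unnormalized text.
wer_ortho = 100 * wer_metric.compute(predictions=predictions, references=references)

# WER: scored after stripping casing and punctuation.
wer = 100 * wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(f"WER Ortho: {wer_ortho:.4f}, WER: {wer:.4f}")
```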

Framework versions

  • Transformers 4.41.1
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1