whisper-tiny-minds14-test-finetuned

This model is a fine-tuned version of openai/whisper-tiny on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5522
  • Wer Ortho: 15.9236
  • Wer: 14.9260
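
The snippet below is a minimal usage sketch rather than part of the original card: it loads this checkpoint through the transformers automatic-speech-recognition pipeline, assuming the repository id JoshEe00/whisper-tiny-minds14-test-finetuned and a placeholder audio file path.

```python
# Minimal usage sketch (assumed repo id, placeholder audio path).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="JoshEe00/whisper-tiny-minds14-test-finetuned",
)

# Replace "sample.wav" with the path to a real recording.
result = asr("sample.wav")
print(result["text"])
```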

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_steps: 50
  • training_steps: 4000
  • mixed_precision_training: Native AMP
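
As a rough, non-authoritative sketch, the hyperparameters above map onto a Seq2SeqTrainingArguments configuration like the one below. The output directory name and save cadence are assumptions, the 500-step evaluation interval is inferred from the results table, and fp16 corresponds to the "Native AMP" entry.

```python
# Approximate reconstruction of the training setup from the listed hyperparameters.
# output_dir and save_steps are assumptions; eval_steps=500 is inferred from the
# results table below; fp16 mirrors the "Native AMP" mixed-precision entry.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-minds14-test-finetuned",  # assumed
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=4000,
    seed=42,
    fp16=True,
    evaluation_strategy="steps",
    eval_steps=500,
    save_steps=500,  # assumed
)
```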

Training results

Training Loss   Epoch    Step   Validation Loss   Wer Ortho   Wer
0.0009          15.15     500   0.4051            14.5587     13.5335
0.0003          30.3     1000   0.4404            14.8772     13.7511
0.0002          45.45    1500   0.4655            15.5596     14.4909
0.0001          60.61    2000   0.4870            15.4231     14.3168
0.0001          75.76    2500   0.5048            15.6961     14.6649
0.0             90.91    3000   0.5217            15.7871     14.7084
0.0            106.06    3500   0.5368            15.9691     14.9260
0.0            121.21    4000   0.5522            15.9236     14.9260
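
The card does not state how the two WER columns were computed. The sketch below shows the common pattern from the Whisper fine-tuning recipe, where Wer Ortho is the word error rate on the raw orthographic text and Wer is computed after normalization; the use of BasicTextNormalizer and the example sentences are assumptions, not details from this card.

```python
# Sketch of the two WER columns: "Wer Ortho" on raw text, "Wer" after normalization.
# BasicTextNormalizer and the example sentences are assumptions.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

predictions = ["hi i'd like to check my account balance"]
references = ["Hi, I'd like to check my account balance."]

wer_ortho = 100 * wer_metric.compute(predictions=predictions, references=references)
wer = 100 * wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(f"WER (orthographic): {wer_ortho:.2f}  WER (normalized): {wer:.2f}")
```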

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.15.2