whisper-large-v3-Tamil-Version1

This model is a fine-tuned version of openai/whisper-large-v3 on the Tamil portion of the FLEURS dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2299
  • WER: 40.1989
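For reference, the WER (word error rate) reported above is the word-level edit distance between the model transcript and the reference transcript, divided by the number of reference words, expressed as a percentage. A minimal pure-Python sketch (function name is illustrative; evaluation scripts typically use a library such as `jiwer` or `evaluate` instead):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference length, in percent."""
    ref = reference.split()
    hyp = hypothesis.split()
    # prev[j] holds the edit distance between the first i-1 reference words
    # and the first j hypothesis words (classic Levenshtein DP, one row at a time).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            curr[j] = min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution (free if words match)
            )
        prev = curr
    return 100.0 * prev[-1] / len(ref)
```

A WER of 40.20 therefore means roughly four word-level errors for every ten reference words.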

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-06
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • training_steps: 20000
  • mixed_precision_training: Native AMP
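With the settings above, the linear scheduler ramps the learning rate from 0 to the 3e-06 peak over the first 1000 steps, then decays it linearly back to 0 at step 20000. A small sketch of that schedule (function name is illustrative; in practice this is `transformers.get_linear_schedule_with_warmup`):

```python
def linear_warmup_lr(step: int,
                     peak_lr: float = 3e-06,
                     warmup_steps: int = 1000,
                     total_steps: int = 20000) -> float:
    """Learning rate at a given optimizer step under linear warmup + linear decay."""
    if step < warmup_steps:
        # Warmup phase: ramp linearly from 0 to peak_lr.
        return peak_lr * step / warmup_steps
    # Decay phase: ramp linearly from peak_lr down to 0 at total_steps.
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For example, the rate is half the peak (1.5e-06) at step 500 and again at step 10500, midway through the decay.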

Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER     |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.2648        | 5.8309  | 2000  | 0.2695          | 46.8731 |
| 0.2368        | 11.6618 | 4000  | 0.2503          | 45.3660 |
| 0.2151        | 17.4927 | 6000  | 0.2414          | 43.2643 |
| 0.2121        | 23.3236 | 8000  | 0.2367          | 41.9315 |
| 0.2069        | 29.1545 | 10000 | 0.2339          | 40.9165 |
| 0.2038        | 34.9854 | 12000 | 0.2322          | 40.7115 |
| 0.1936        | 40.8163 | 14000 | 0.2309          | 40.6807 |
| 0.1871        | 46.6472 | 16000 | 0.2304          | 40.4142 |
| 0.1901        | 52.4781 | 18000 | 0.2298          | 40.3014 |
| 0.1885        | 58.3090 | 20000 | 0.2299          | 40.1989 |
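As a sanity check on the table, the step-to-epoch ratio is constant at about 343 optimizer steps per epoch, which with train_batch_size of 8 (and no gradient accumulation) implies roughly 2,744 training examples. This is an inference from the logged numbers, not a figure documented anywhere in this card:

```python
# Logged (step, epoch) pairs taken from the training results table above.
log = [(2000, 5.8309), (4000, 11.6618), (10000, 29.1545), (20000, 58.3090)]

# Optimizer steps per epoch implied by each row; all rows should agree.
steps_per_epoch = [round(step / epoch) for step, epoch in log]

# With train_batch_size=8 and no gradient accumulation, the approximate
# number of training examples (an inference, not a logged value).
implied_examples = steps_per_epoch[0] * 8
```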

Framework versions

  • PEFT 0.12.1.dev0
  • Transformers 4.45.0.dev0
  • Pytorch 2.4.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1