
Whisper Small ta - Lingalingeswaran

This model is a fine-tuned version of openai/whisper-small on the Tamil subset of the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2150
  • WER: 43.3196

Model description

This Whisper model has been fine-tuned for Tamil using the Common Voice 11.0 dataset. It is designed to handle tasks such as speech-to-text transcription and language identification, making it suitable for applications where Tamil is the primary language of interest. The fine-tuning focused on improving Tamil performance, with the aim of reducing the transcription word error rate and improving overall accuracy.
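
For reference, here is a minimal inference sketch using the transformers pipeline API. The model ID comes from this repository; the audio file path and the generation options are assumptions, not part of the original card.

```python
# Minimal inference sketch; "tamil_sample.wav" is a placeholder for any
# 16 kHz Tamil recording.
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"

transcriber = pipeline(
    "automatic-speech-recognition",
    model="Lingalingeswaran/whisper-small-ta",
    device=device,
)

# Force Tamil transcription so the model skips language detection.
result = transcriber(
    "tamil_sample.wav",
    generate_kwargs={"language": "tamil", "task": "transcribe"},
)
print(result["text"])
```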

Intended uses & limitations

Intended Uses: Speech-to-text transcription in Tamil

Limitations:

  • May not perform as well on languages or dialects that are not well represented in the Common Voice dataset.
  • Higher Word Error Rate (WER) in noisy environments or with speakers whose heavy accents are not covered in the training data.
  • The model is optimized for Tamil; performance in other languages may be suboptimal.
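
To gauge how strongly noise or accent conditions affect accuracy, WER can be computed on a small set of your own recordings. Below is a rough sketch using the evaluate library; the prediction and reference strings are placeholders.

```python
# Rough sketch of measuring WER on your own data; the strings below are
# placeholders, not real model output.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["இது ஒரு சோதனை"]  # transcriptions produced by the model (placeholder)
references = ["இது ஒரு சோதனை"]   # ground-truth transcripts (placeholder)

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```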

Training and evaluation data

The training data for this model consists of Tamil voice recordings from the mozilla-foundation/common_voice_11_0 (Common Voice 11.0) dataset. The dataset is a crowd-sourced collection of transcribed speech, offering diversity in speaker accents, age groups, and speech styles.
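
As an illustration (not the original training script), the Tamil split can be loaded and resampled to Whisper's 16 kHz input rate roughly as follows; note that this dataset requires accepting its terms on the Hugging Face Hub and authenticating.

```python
# Sketch of loading the Tamil ("ta") subset of Common Voice 11.0 and casting
# the audio column to 16 kHz, the sampling rate Whisper expects.
from datasets import Audio, load_dataset

common_voice = load_dataset(
    "mozilla-foundation/common_voice_11_0", "ta", split="train+validation"
)
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
print(common_voice[0]["sentence"])
```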

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a configuration sketch mirroring them appears after the list:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
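
For reference, a Seq2SeqTrainingArguments sketch that mirrors the hyperparameters above might look as follows; the output directory and the evaluation schedule are assumptions rather than values taken from the card, and the Adam betas/epsilon are the Trainer defaults so they are not set explicitly.

```python
# Sketch of training arguments matching the listed hyperparameters.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ta",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    seed=42,
    fp16=True,                        # Native AMP mixed-precision training
    eval_strategy="steps",            # assumed: matches the 1000-step eval logs
    eval_steps=1000,
    predict_with_generate=True,
)
```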

Training results

Training Loss   Epoch    Step   Validation Loss   WER
0.1753          0.2992   1000   0.2705            51.0174
0.1404          0.5984   2000   0.2368            46.9969
0.1344          0.8977   3000   0.2196            44.5325
0.0947          1.1969   4000   0.2150            43.3196

Framework versions

  • Transformers 4.45.2
  • PyTorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.20.1