Whisper Small GA-EN Speech Translation

This model is a fine-tuned version of openai/whisper-small for Irish-to-English (GA-EN) speech translation, trained on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia datasets. It achieves the following results on the evaluation set (a minimal usage sketch follows the metrics list):

  • Loss: 1.3732
  • Bleu: 31.78
  • Chrf: 47.41
  • Wer: 62.3143
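
A minimal inference sketch, assuming the checkpoint is loaded by its repository name (ymoslem/whisper-small-ga2en-v5.6-r) and that the fine-tuned generation settings already emit English output; the audio path is a placeholder.

```python
# Minimal sketch: Irish speech in, English text out, using this checkpoint via the
# transformers ASR pipeline. The audio path is a placeholder; the pipeline decodes
# the file and resamples it to the 16 kHz mono input Whisper expects.
from transformers import pipeline

translator = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-small-ga2en-v5.6-r",
)

result = translator("path/to/irish_audio.wav")
print(result["text"])  # English translation of the Irish utterance
```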

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the Seq2SeqTrainingArguments sketch after the list):

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.03
  • training_steps: 3000
  • mixed_precision_training: Native AMP
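
A hedged sketch of how these values could be expressed as Hugging Face Seq2SeqTrainingArguments; the output directory and the logging/eval/save cadence are not stated on this card and are assumptions here.

```python
# Sketch only: the listed hyperparameters mapped onto Seq2SeqTrainingArguments.
# output_dir is an assumed name; eval/save cadence is not stated on the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-ga2en",  # assumption, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    max_steps=3000,
    fp16=True,  # "Native AMP" mixed precision
)
```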

Training results

| Training Loss | Epoch  | Step | Validation Loss | Bleu  | Chrf  | Wer      |
|---------------|--------|------|-----------------|-------|-------|----------|
| 2.4081        | 0.0438 | 100  | 1.8707          | 9.25  | 24.92 | 101.1706 |
| 1.9316        | 0.0876 | 200  | 1.5419          | 13.88 | 32.67 | 101.5308 |
| 1.5533        | 0.1313 | 300  | 1.4046          | 17.52 | 36.68 | 83.1607  |
| 1.3403        | 0.1751 | 400  | 1.3565          | 17.71 | 37.97 | 93.5615  |
| 1.1303        | 0.2189 | 500  | 1.3224          | 19.24 | 40.02 | 82.0801  |
| 0.9892        | 0.2627 | 600  | 1.3077          | 26.07 | 43.16 | 72.8050  |
| 0.9005        | 0.3065 | 700  | 1.2918          | 27.37 | 44.24 | 64.1603  |
| 0.7547        | 0.3503 | 800  | 1.2754          | 27.68 | 44.28 | 66.0964  |
| 0.7199        | 0.3940 | 900  | 1.2895          | 23.99 | 42.99 | 78.0729  |
| 0.6095        | 0.4378 | 1000 | 1.2716          | 16.56 | 41.2  | 116.5691 |
| 0.5072        | 0.4816 | 1100 | 1.2901          | 25.39 | 44.04 | 75.2364  |
| 0.4599        | 0.5254 | 1200 | 1.2634          | 28.45 | 45.71 | 67.0419  |
| 0.3987        | 0.5692 | 1300 | 1.3004          | 25.84 | 45.63 | 75.1013  |
| 0.3443        | 0.6130 | 1400 | 1.2871          | 29.09 | 46.25 | 65.4210  |
| 0.2882        | 0.6567 | 1500 | 1.3242          | 29.14 | 44.4  | 66.0063  |
| 0.2687        | 0.7005 | 1600 | 1.3135          | 22.9  | 43.76 | 92.2557  |
| 0.2059        | 0.7443 | 1700 | 1.3160          | 31.13 | 47.45 | 63.6650  |
| 0.1991        | 0.7881 | 1800 | 1.2960          | 31.45 | 47.47 | 63.6650  |
| 0.1523        | 0.8319 | 1900 | 1.3215          | 31.21 | 47.38 | 64.1153  |
| 0.1349        | 0.8757 | 2000 | 1.3402          | 30.58 | 46.32 | 63.7551  |
| 0.111         | 0.9194 | 2100 | 1.3311          | 30.92 | 48.17 | 62.2242  |
| 0.1055        | 0.9632 | 2200 | 1.3548          | 30.56 | 46.56 | 65.2409  |
| 0.0525        | 1.0070 | 2300 | 1.3754          | 31.28 | 48.1  | 64.2954  |
| 0.0498        | 1.0508 | 2400 | 1.3729          | 31.16 | 47.8  | 61.7290  |
| 0.0372        | 1.0946 | 2500 | 1.3498          | 32.13 | 48.77 | 61.4588  |
| 0.029         | 1.1384 | 2600 | 1.3723          | 32.04 | 48.32 | 61.8640  |
| 0.0285        | 1.1821 | 2700 | 1.3748          | 31.91 | 47.58 | 61.8640  |
| 0.0292        | 1.2259 | 2800 | 1.3764          | 31.92 | 47.96 | 61.1887  |
| 0.025         | 1.2697 | 2900 | 1.3799          | 31.64 | 47.47 | 62.2242  |
| 0.0253        | 1.3135 | 3000 | 1.3732          | 31.78 | 47.41 | 62.3143  |
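
The exact scoring setup is not described on this card; the sketch below shows one common way to obtain BLEU, chrF, and WER with the Hugging Face evaluate library (sacrebleu, chrf, and wer metrics), using illustrative prediction/reference strings.

```python
# Sketch: compute BLEU, chrF, and WER for model outputs against references,
# using the Hugging Face `evaluate` library. Strings below are illustrative only.
import evaluate

bleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")
wer = evaluate.load("wer")

predictions = ["the weather is nice today"]       # model outputs
references = [["the weather is fine today"]]      # gold English translations

print("BLEU:", bleu.compute(predictions=predictions, references=references)["score"])
print("chrF:", chrf.compute(predictions=predictions, references=references)["score"])
# The wer metric expects flat reference strings and returns a fraction, so scale to %.
print("WER :", 100 * wer.compute(predictions=predictions,
                                 references=[r[0] for r in references]))
```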

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1