---
library_name: transformers
language:
  - lv
license: apache-2.0
base_model: FelixK7/whisper-medium-lv
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_16_1
metrics:
  - wer
model-index:
  - name: Whisper medium LV - Felikss Kleins
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Common Voice 16.1
          type: mozilla-foundation/common_voice_16_1
          config: lv
          split: test
          args: 'config: lv, split: test'
        metrics:
          - name: Wer
            type: wer
            value: 9.459716154242761
---

# Whisper medium LV - Felikss Kleins

This model is a fine-tuned version of [FelixK7/whisper-medium-lv](https://huggingface.co/FelixK7/whisper-medium-lv) on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set (an illustrative WER computation is sketched after this list):

- Loss: 0.2053
- Wer: 9.4597
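
The reported Wer is a word error rate expressed as a percentage. As an illustrative sketch only (this is not the card's evaluation script, and the example strings are placeholders), the figure corresponds to the `wer` metric from the `evaluate` library scaled by 100:

```python
# Illustrative WER computation; the strings below are placeholders, not
# model outputs. The card reports 100 * WER (a percentage).
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["sveiki pasaule"]        # hypothetical model transcript
references = ["sveiki pasaule draugi"]  # hypothetical reference text
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.4f}")
```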

## Model description

More information needed

## Intended uses & limitations

More information needed
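
Pending details from the author, here is a minimal inference sketch. The repo id is an assumption taken from the `base_model` field in the metadata; substitute this model's actual repository path:

```python
# Minimal inference sketch. The repo id below is an assumption (taken from
# the base_model metadata field); point it at this model's actual repository.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="FelixK7/whisper-medium-lv",
    generate_kwargs={"language": "latvian", "task": "transcribe"},
)

print(asr("sample_lv.wav")["text"])  # any audio file ffmpeg can decode
```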

## Training and evaluation data

More information needed
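
The metadata lists Common Voice 16.1 (Latvian); the actual preprocessing is not documented. A plausible loading sketch, resampling to the 16 kHz input rate Whisper expects:

```python
# Plausible data-loading sketch based on the dataset named in the metadata;
# the card does not document the actual train/eval preparation. Note that
# Common Voice on the Hub is gated and requires accepting its terms.
from datasets import Audio, load_dataset

common_voice = load_dataset(
    "mozilla-foundation/common_voice_16_1", "lv", split="test"
)
# Common Voice ships 48 kHz audio; Whisper expects 16 kHz input.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
```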

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mapped onto `Seq2SeqTrainingArguments` in the sketch after this list):

- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
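
For reference, here is a hypothetical mapping of these values onto `Seq2SeqTrainingArguments`. `output_dir` and the evaluation cadence are assumptions (the 200-step cadence is inferred from the results table below), not part of the card:

```python
# Hypothetical reconstruction of the configuration listed above. output_dir
# and the eval cadence are assumptions; everything else mirrors the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-lv",  # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,     # total train batch size: 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    fp16=True,                         # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=200,                    # inferred from the results table
)
```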

### Training results

| Training Loss | Epoch   | Step  | Validation Loss | Wer     |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| No log        | 0.02    | 200   | 0.1318          | 7.6741  |
| 0.0445        | 1.0199  | 400   | 0.1527          | 8.4475  |
| 0.0338        | 2.0199  | 600   | 0.1703          | 9.7148  |
| 0.0345        | 3.0198  | 800   | 0.1725          | 9.7392  |
| 0.0311        | 4.0198  | 1000  | 0.1789          | 9.8830  |
| 0.0311        | 5.0198  | 1200  | 0.1792          | 10.0187 |
| 0.0288        | 6.0197  | 1400  | 0.1858          | 9.6063  |
| 0.0237        | 7.0197  | 1600  | 0.1839          | 9.8803  |
| 0.022         | 8.0196  | 1800  | 0.1847          | 10.2955 |
| 0.0198        | 9.0196  | 2000  | 0.1878          | 9.8885  |
| 0.0198        | 10.0195 | 2200  | 0.1909          | 9.9237  |
| 0.0183        | 11.0195 | 2400  | 0.1948          | 10.1924 |
| 0.0161        | 12.0194 | 2600  | 0.1951          | 10.4122 |
| 0.0154        | 13.0193 | 2800  | 0.1952          | 9.9997  |
| 0.0141        | 14.0193 | 3000  | 0.1972          | 10.1001 |
| 0.0141        | 15.0192 | 3200  | 0.1976          | 10.1544 |
| 0.0118        | 16.0192 | 3400  | 0.2014          | 10.4258 |
| 0.0115        | 17.0191 | 3600  | 0.2021          | 10.6890 |
| 0.0106        | 18.0191 | 3800  | 0.2005          | 10.1951 |
| 0.0092        | 19.0191 | 4000  | 0.2022          | 10.4638 |
| 0.0092        | 20.019  | 4200  | 0.2003          | 10.0947 |
| 0.0089        | 21.0190 | 4400  | 0.2043          | 9.8776  |
| 0.0085        | 22.0189 | 4600  | 0.2063          | 10.4719 |
| 0.0083        | 23.0189 | 4800  | 0.2067          | 10.0540 |
| 0.0069        | 24.0188 | 5000  | 0.2058          | 9.7908  |
| 0.0069        | 25.0188 | 5200  | 0.2056          | 10.4583 |
| 0.0078        | 26.0187 | 5400  | 0.2090          | 10.1843 |
| 0.0063        | 27.0187 | 5600  | 0.2096          | 10.2250 |
| 0.0058        | 28.0186 | 5800  | 0.2047          | 10.2602 |
| 0.0052        | 29.0186 | 6000  | 0.2087          | 9.9319  |
| 0.0052        | 30.0185 | 6200  | 0.2040          | 10.0811 |
| 0.0054        | 31.0185 | 6400  | 0.2081          | 9.9482  |
| 0.0045        | 32.0184 | 6600  | 0.2063          | 9.6849  |
| 0.004         | 33.0183 | 6800  | 0.2077          | 10.0052 |
| 0.0035        | 34.0183 | 7000  | 0.2105          | 10.1056 |
| 0.0035        | 35.0183 | 7200  | 0.2075          | 9.6985  |
| 0.0035        | 36.0182 | 7400  | 0.2075          | 9.6063  |
| 0.003         | 37.0181 | 7600  | 0.2115          | 9.8396  |
| 0.0027        | 38.0181 | 7800  | 0.2061          | 9.5601  |
| 0.0025        | 39.0181 | 8000  | 0.2082          | 9.6252  |
| 0.0025        | 40.018  | 8200  | 0.2052          | 9.5520  |
| 0.0023        | 41.0179 | 8400  | 0.2060          | 9.7826  |
| 0.0024        | 42.0179 | 8600  | 0.2083          | 9.6361  |
| 0.002         | 43.0179 | 8800  | 0.2069          | 9.5981  |
| 0.0021        | 44.0178 | 9000  | 0.2051          | 9.3892  |
| 0.0021        | 45.0177 | 9200  | 0.2054          | 9.3756  |
| 0.0019        | 46.0177 | 9400  | 0.2049          | 9.5167  |
| 0.0017        | 47.0177 | 9600  | 0.2051          | 9.4733  |
| 0.0017        | 48.0176 | 9800  | 0.2050          | 9.4923  |
| 0.0014        | 49.0175 | 10000 | 0.2053          | 9.4597  |

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.0.1
- Datasets 3.0.0
- Tokenizers 0.19.1