---
license: apache-2.0
base_model: DewiBrynJones/wav2vec2-xlsr-53-ft-btb-cv-cy
tags:
  - automatic-speech-recognition
  - ./data-configs/btb.json
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: wav2vec2-btb-cv-ft-btb-cy-cand
    results: []
---

# wav2vec2-btb-cv-ft-btb-cy-cand

This model is a fine-tuned version of [DewiBrynJones/wav2vec2-xlsr-53-ft-btb-cv-cy](https://huggingface.co/DewiBrynJones/wav2vec2-xlsr-53-ft-btb-cv-cy) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.4345
- Wer: 0.3308
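
For inference, the model can be loaded through the standard `transformers` ASR pipeline. A minimal sketch, assuming the checkpoint is published on the Hub as `DewiBrynJones/wav2vec2-btb-cv-ft-btb-cy-cand` and that `speech.wav` is a local recording (both are assumptions, not part of this card):

```python
from transformers import pipeline

# Assumed Hub repo id; adjust to the actual published location.
asr = pipeline(
    "automatic-speech-recognition",
    model="DewiBrynJones/wav2vec2-btb-cv-ft-btb-cy-cand",
)

# Wav2Vec2 expects 16 kHz audio; for file inputs the pipeline decodes
# and resamples via ffmpeg.
result = asr("speech.wav")  # hypothetical local audio file
print(result["text"])
```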

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto `TrainingArguments`):

- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
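
As an illustrative reconstruction (not the original training script), these settings correspond roughly to the following `TrainingArguments`; the Adam betas and epsilon quoted above are the `transformers` defaults:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is assumed.
training_args = TrainingArguments(
    output_dir="wav2vec2-btb-cv-ft-btb-cy-cand",
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=10000,  # training_steps: 10000
    fp16=True,        # "Native AMP" mixed-precision training
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults.
)
```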

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer    |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| No log        | 0.0285 | 200   | 1.2522          | 0.6292 |
| No log        | 0.0570 | 400   | 0.6599          | 0.4544 |
| 2.2791        | 0.0854 | 600   | 0.6629          | 0.4395 |
| 2.2791        | 0.1139 | 800   | 0.7910          | 0.5453 |
| 0.8206        | 0.1424 | 1000  | 0.7758          | 0.5701 |
| 0.8206        | 0.1709 | 1200  | 0.8025          | 0.5783 |
| 0.8206        | 0.1994 | 1400  | 0.7715          | 0.5211 |
| 0.9068        | 0.2279 | 1600  | 0.7349          | 0.5128 |
| 0.9068        | 0.2563 | 1800  | 0.7258          | 0.5152 |
| 0.8679        | 0.2848 | 2000  | 0.7084          | 0.5216 |
| 0.8679        | 0.3133 | 2200  | 0.6904          | 0.5014 |
| 0.8679        | 0.3418 | 2400  | 0.6993          | 0.5178 |
| 0.8577        | 0.3703 | 2600  | 0.6746          | 0.4867 |
| 0.8577        | 0.3987 | 2800  | 0.6622          | 0.4963 |
| 0.7995        | 0.4272 | 3000  | 0.6793          | 0.4935 |
| 0.7995        | 0.4557 | 3200  | 0.6368          | 0.4701 |
| 0.7995        | 0.4842 | 3400  | 0.6363          | 0.4781 |
| 0.8141        | 0.5127 | 3600  | 0.6217          | 0.4656 |
| 0.8141        | 0.5412 | 3800  | 0.6418          | 0.4940 |
| 0.7953        | 0.5696 | 4000  | 0.6018          | 0.4542 |
| 0.7953        | 0.5981 | 4200  | 0.5962          | 0.4580 |
| 0.7953        | 0.6266 | 4400  | 0.5883          | 0.4459 |
| 0.7596        | 0.6551 | 4600  | 0.5788          | 0.4325 |
| 0.7596        | 0.6836 | 4800  | 0.5709          | 0.4412 |
| 0.7533        | 0.7120 | 5000  | 0.5595          | 0.4352 |
| 0.7533        | 0.7405 | 5200  | 0.5546          | 0.4232 |
| 0.7533        | 0.7690 | 5400  | 0.5545          | 0.4244 |
| 0.7591        | 0.7975 | 5600  | 0.5443          | 0.4076 |
| 0.7591        | 0.8260 | 5800  | 0.5341          | 0.4146 |
| 0.6621        | 0.8545 | 6000  | 0.5104          | 0.3955 |
| 0.6621        | 0.8829 | 6200  | 0.5139          | 0.4011 |
| 0.6621        | 0.9114 | 6400  | 0.5044          | 0.3804 |
| 0.6705        | 0.9399 | 6600  | 0.4999          | 0.3896 |
| 0.6705        | 0.9684 | 6800  | 0.5097          | 0.4053 |
| 0.6665        | 0.9969 | 7000  | 0.4925          | 0.3785 |
| 0.6665        | 1.0253 | 7200  | 0.4896          | 0.3689 |
| 0.6665        | 1.0538 | 7400  | 0.4749          | 0.3687 |
| 0.5826        | 1.0823 | 7600  | 0.4684          | 0.3628 |
| 0.5826        | 1.1108 | 7800  | 0.4729          | 0.3585 |
| 0.5836        | 1.1393 | 8000  | 0.4641          | 0.3553 |
| 0.5836        | 1.1678 | 8200  | 0.4575          | 0.3530 |
| 0.5836        | 1.1962 | 8400  | 0.4585          | 0.3486 |
| 0.5199        | 1.2247 | 8600  | 0.4549          | 0.3451 |
| 0.5199        | 1.2532 | 8800  | 0.4521          | 0.3408 |
| 0.5268        | 1.2817 | 9000  | 0.4425          | 0.3395 |
| 0.5268        | 1.3102 | 9200  | 0.4407          | 0.3362 |
| 0.5268        | 1.3386 | 9400  | 0.4383          | 0.3340 |
| 0.5013        | 1.3671 | 9600  | 0.4357          | 0.3325 |
| 0.5013        | 1.3956 | 9800  | 0.4350          | 0.3317 |
| 0.5095        | 1.4241 | 10000 | 0.4345          | 0.3308 |
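
The `Wer` column is the word error rate: word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words, so the final checkpoint's 0.3308 means roughly one word-level error per three reference words. A minimal sketch of computing it with the `evaluate` library, using toy Welsh strings rather than the actual evaluation set:

```python
import evaluate

wer_metric = evaluate.load("wer")

# One deletion ("i") against a four-word reference -> WER = 1/4 = 0.25.
wer = wer_metric.compute(
    predictions=["bore da bawb"],
    references=["bore da i bawb"],
)
print(wer)  # 0.25
```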

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1