CHUNGNAM_FM_AddressesM_model

This model is a fine-tuned version of openai/whisper-medium on the Marcusxx/CHUNGNAM_Addresses_NO_NUM dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2603
  • CER: 6.2263
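
For reference, the published checkpoint can be loaded for inference through the transformers pipeline API. The sketch below is a minimal example, assuming the model is available on the Hugging Face Hub under Marcusxx/CHUNGNAM_FM_AddressesM_model and that sample.wav is a hypothetical 16 kHz Korean recording; it is not taken from the original training or evaluation code.

    # Minimal inference sketch.
    # Assumptions: the checkpoint is downloadable from the Hub, and
    # "sample.wav" is a hypothetical local 16 kHz Korean address recording.
    from transformers import pipeline

    asr = pipeline(
        "automatic-speech-recognition",
        model="Marcusxx/CHUNGNAM_FM_AddressesM_model",
    )

    result = asr("sample.wav")
    print(result["text"])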

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 20000
  • mixed_precision_training: Native AMP
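
The values above map directly onto transformers' Seq2SeqTrainingArguments. The sketch below is an approximation under that assumption; the output directory and anything not listed above (logging, evaluation cadence, data collation) are placeholders rather than details of the original run.

    # Sketch of Seq2SeqTrainingArguments reproducing the listed hyperparameters.
    # output_dir and any unlisted options are assumptions, not taken from the
    # original training configuration.
    from transformers import Seq2SeqTrainingArguments

    training_args = Seq2SeqTrainingArguments(
        output_dir="./whisper-medium-chungnam-addresses",  # hypothetical path
        learning_rate=1e-5,
        per_device_train_batch_size=16,
        per_device_eval_batch_size=8,
        seed=42,
        adam_beta1=0.9,
        adam_beta2=0.999,
        adam_epsilon=1e-8,
        lr_scheduler_type="linear",
        warmup_steps=100,
        max_steps=20000,
        fp16=True,  # "Native AMP" mixed-precision training
    )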

Training results

Training Loss   Epoch     Step    Validation Loss   CER
0.1938          0.6906    1000    0.2020            5.9531
0.1554          1.3812    2000    0.1852            5.9452
0.1048          2.0718    3000    0.1793            5.8234
0.1126          2.7624    4000    0.1794            7.6374
0.0695          3.4530    5000    0.1922            6.2990
0.0382          4.1436    6000    0.1999            6.2872
0.0385          4.8343    7000    0.2019            7.5529
0.0203          5.5249    8000    0.2141            7.6944
0.0142          6.2155    9000    0.2211            6.0239
0.0129          6.9061    10000   0.2190            8.6417
0.0109          7.5967    11000   0.2262            8.0187
0.0062          8.2873    12000   0.2286            10.8626
0.0074          8.9779    13000   0.2323            7.1874
0.0050          9.6685    14000   0.2370            7.7829
0.0046          10.3591   15000   0.2415            6.2243
0.0021          11.0497   16000   0.2459            6.0946
0.0020          11.7403   17000   0.2474            6.1713
0.0009          12.4309   18000   0.2572            6.0887
0.0001          13.1215   19000   0.2582            6.2715
0.0002          13.8122   20000   0.2603            6.2263
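
The CER figures above appear to be character error rates expressed as percentages. Below is a minimal sketch of computing the metric with the evaluate library; the library choice and the example strings are assumptions, since the card does not state how the metric was produced.

    # Sketch of a CER computation (assumption: the evaluate library's "cer"
    # metric, which requires jiwer). The strings are illustrative placeholders.
    import evaluate

    cer_metric = evaluate.load("cer")
    predictions = ["충청남도 천안시 동남구"]  # hypothetical model output
    references = ["충청남도 천안시 동남구"]   # hypothetical ground truth
    cer = cer_metric.compute(predictions=predictions, references=references)
    print(f"CER: {cer * 100:.4f}")  # reported values appear to be percentages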

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.2.2+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1