
hubert-esc50-finetuned-v2

This model is a fine-tuned version of facebook/hubert-base-ls960 on the ESC-50 dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9551
  • Accuracy: 0.85

Model description

This checkpoint is facebook/hubert-base-ls960 (HuBERT base, self-supervised pretrained on 960 hours of LibriSpeech audio) with a sequence-classification head, fine-tuned to predict the 50 environmental-sound classes of ESC-50. It has roughly 94.6M parameters and is published as F32 safetensors weights.

Intended uses & limitations

The model is intended for classifying short environmental audio clips into the 50 ESC-50 categories. Inputs should be mono audio at 16 kHz, the sampling rate HuBERT-base was pretrained on; behaviour on audio that differs substantially from ESC-50's 5-second clips has not been evaluated here.
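
A minimal usage sketch follows; it is not part of the original card. The checkpoint id Wellyowo/hubert-esc50-finetuned-v2 is taken from the model page, the audio file name is a placeholder, and the transformers audio-classification pipeline needs ffmpeg installed to decode file paths.

```python
# Hedged usage sketch: the audio path is a placeholder, not from the card.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Wellyowo/hubert-esc50-finetuned-v2",
)

# Any short environmental clip should work; ESC-50 clips are 5 s long.
for prediction in classifier("dog_bark.wav", top_k=5):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```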

Training and evaluation data

The model was trained and evaluated on ESC-50, a labelled collection of 2,000 five-second environmental recordings covering 50 classes (40 clips per class). The exact split is not documented, but the 200 training steps per epoch at batch size 8 and the 0.0025 accuracy granularity in the table below are consistent with a 1,600/400 train/evaluation split, i.e. one of the five ESC-50 folds held out for evaluation.
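
The sketch below illustrates such a fold-based split; it is not the author's pipeline. The Hub dataset id ashraq/esc50, the column names, and the choice of held-out fold are all assumptions.

```python
# Hedged data-loading sketch; dataset id, column names, and the held-out fold
# are assumptions rather than details documented in the card.
from datasets import Audio, load_dataset

esc50 = load_dataset("ashraq/esc50", split="train")

# HuBERT-base-ls960 expects 16 kHz input, so resample the 44.1 kHz ESC-50 clips.
esc50 = esc50.cast_column("audio", Audio(sampling_rate=16_000))

# ESC-50 ships with five predefined folds; hold one out for evaluation.
train_ds = esc50.filter(lambda example: example["fold"] != 5)
eval_ds = esc50.filter(lambda example: example["fold"] == 5)
print(len(train_ds), len(eval_ds))  # 1600 / 400 for a single held-out fold
```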

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
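
The sketch below restates these values as transformers TrainingArguments. Only the numeric values come from the card; the output directory name and the evaluation strategy are assumptions.

```python
# Hedged reconstruction of the listed hyperparameters; output_dir and
# evaluation_strategy are assumptions, the numbers come from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hubert-esc50-finetuned-v2",  # assumed name
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="epoch",  # consistent with the per-epoch rows below
)
```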

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.5337        | 1.0   | 200   | 3.4929          | 0.0775   |
| 3.1679        | 2.0   | 400   | 3.1355          | 0.1675   |
| 2.8042        | 3.0   | 600   | 2.8673          | 0.2075   |
| 2.5055        | 4.0   | 800   | 2.6202          | 0.2125   |
| 2.0268        | 5.0   | 1000  | 2.3768          | 0.3375   |
| 2.1337        | 6.0   | 1200  | 2.0171          | 0.4225   |
| 1.6061        | 7.0   | 1400  | 1.7294          | 0.5075   |
| 1.5169        | 8.0   | 1600  | 1.8017          | 0.5025   |
| 1.0634        | 9.0   | 1800  | 1.5051          | 0.5475   |
| 0.9651        | 10.0  | 2000  | 1.3431          | 0.635    |
| 0.8616        | 11.0  | 2200  | 1.3417          | 0.6375   |
| 0.6799        | 12.0  | 2400  | 1.2891          | 0.63     |
| 0.445         | 13.0  | 2600  | 1.2285          | 0.6575   |
| 0.2984        | 14.0  | 2800  | 1.2008          | 0.7125   |
| 0.5947        | 15.0  | 3000  | 1.3225          | 0.71     |
| 0.4194        | 16.0  | 3200  | 1.1032          | 0.775    |
| 0.3128        | 17.0  | 3400  | 1.8309          | 0.6625   |
| 0.237         | 18.0  | 3600  | 1.3349          | 0.7325   |
| 0.1701        | 19.0  | 3800  | 1.4491          | 0.7275   |
| 0.2618        | 20.0  | 4000  | 1.4919          | 0.7525   |
| 0.1336        | 21.0  | 4200  | 1.6088          | 0.7325   |
| 0.113         | 22.0  | 4400  | 1.3687          | 0.7725   |
| 0.0757        | 23.0  | 4600  | 1.4691          | 0.7875   |
| 0.0558        | 24.0  | 4800  | 1.8059          | 0.7525   |
| 0.1442        | 25.0  | 5000  | 1.7809          | 0.7475   |
| 0.1023        | 26.0  | 5200  | 1.8423          | 0.7875   |
| 0.0075        | 27.0  | 5400  | 1.7945          | 0.79     |
| 0.0054        | 28.0  | 5600  | 1.8221          | 0.7825   |
| 0.0584        | 29.0  | 5800  | 1.7593          | 0.785    |
| 0.07          | 30.0  | 6000  | 1.8601          | 0.7925   |
| 0.0827        | 31.0  | 6200  | 1.8467          | 0.7875   |
| 0.1128        | 32.0  | 6400  | 2.1020          | 0.765    |
| 0.2679        | 33.0  | 6600  | 2.0718          | 0.775    |
| 0.0647        | 34.0  | 6800  | 1.9542          | 0.7875   |
| 0.0376        | 35.0  | 7000  | 2.1877          | 0.7675   |
| 0.0019        | 36.0  | 7200  | 2.4088          | 0.745    |
| 0.1009        | 37.0  | 7400  | 2.2295          | 0.765    |
| 0.0039        | 38.0  | 7600  | 2.0022          | 0.7825   |
| 0.0006        | 39.0  | 7800  | 2.0640          | 0.795    |
| 0.0512        | 40.0  | 8000  | 2.3373          | 0.78     |
| 0.0282        | 41.0  | 8200  | 1.9908          | 0.795    |
| 0.0113        | 42.0  | 8400  | 2.3893          | 0.775    |
| 0.035         | 43.0  | 8600  | 2.3017          | 0.7775   |
| 0.006         | 44.0  | 8800  | 2.1261          | 0.7825   |
| 0.0556        | 45.0  | 9000  | 2.3122          | 0.775    |
| 0.0003        | 46.0  | 9200  | 2.1505          | 0.79     |
| 0.0115        | 47.0  | 9400  | 2.0387          | 0.805    |
| 0.0001        | 48.0  | 9600  | 2.1915          | 0.8      |
| 0.2299        | 49.0  | 9800  | 2.6715          | 0.76     |
| 0.0017        | 50.0  | 10000 | 2.7250          | 0.755    |
| 0.2944        | 51.0  | 10200 | 2.5766          | 0.79     |
| 0.1269        | 52.0  | 10400 | 2.3590          | 0.785    |
| 0.0941        | 53.0  | 10600 | 2.9789          | 0.755    |
| 0.0477        | 54.0  | 10800 | 2.7512          | 0.75     |
| 0.2068        | 55.0  | 11000 | 2.5162          | 0.7725   |
| 0.0004        | 56.0  | 11200 | 2.4355          | 0.7525   |
| 0.0657        | 57.0  | 11400 | 2.5043          | 0.7775   |
| 0.0002        | 58.0  | 11600 | 2.4236          | 0.785    |
| 0.0133        | 59.0  | 11800 | 2.4225          | 0.78     |
| 0.0           | 60.0  | 12000 | 2.3476          | 0.79     |
| 0.0159        | 61.0  | 12200 | 2.3234          | 0.7975   |
| 0.0002        | 62.0  | 12400 | 2.3763          | 0.78     |
| 0.0626        | 63.0  | 12600 | 2.0386          | 0.835    |
| 0.0112        | 64.0  | 12800 | 2.3345          | 0.81     |
| 0.0004        | 65.0  | 13000 | 2.3710          | 0.8075   |
| 0.0714        | 66.0  | 13200 | 2.0527          | 0.82     |
| 0.0008        | 67.0  | 13400 | 2.2063          | 0.8175   |
| 0.0001        | 68.0  | 13600 | 2.5772          | 0.795    |
| 0.0001        | 69.0  | 13800 | 2.4176          | 0.7975   |
| 0.0001        | 70.0  | 14000 | 2.1132          | 0.8125   |
| 0.0017        | 71.0  | 14200 | 2.2163          | 0.8125   |
| 0.2347        | 72.0  | 14400 | 2.0444          | 0.8275   |
| 0.0           | 73.0  | 14600 | 2.3745          | 0.8275   |
| 0.0001        | 74.0  | 14800 | 2.0128          | 0.8325   |
| 0.0037        | 75.0  | 15000 | 2.0867          | 0.8375   |
| 0.0           | 76.0  | 15200 | 2.2285          | 0.825    |
| 0.0001        | 77.0  | 15400 | 2.0214          | 0.8425   |
| 0.0001        | 78.0  | 15600 | 2.4193          | 0.82     |
| 0.0002        | 79.0  | 15800 | 2.4296          | 0.815    |
| 0.1198        | 80.0  | 16000 | 2.3698          | 0.8175   |
| 0.0001        | 81.0  | 16200 | 2.3521          | 0.82     |
| 0.0           | 82.0  | 16400 | 2.1241          | 0.8325   |
| 0.0001        | 83.0  | 16600 | 2.1642          | 0.8275   |
| 0.0005        | 84.0  | 16800 | 2.0545          | 0.835    |
| 0.0           | 85.0  | 17000 | 2.0386          | 0.8475   |
| 0.0003        | 86.0  | 17200 | 2.1348          | 0.83     |
| 0.0004        | 87.0  | 17400 | 2.2024          | 0.83     |
| 0.0           | 88.0  | 17600 | 2.1521          | 0.835    |
| 0.0001        | 89.0  | 17800 | 2.2244          | 0.83     |
| 0.0           | 90.0  | 18000 | 2.1535          | 0.8325   |
| 0.0           | 91.0  | 18200 | 2.2048          | 0.835    |
| 0.1711        | 92.0  | 18400 | 2.1023          | 0.83     |
| 0.0           | 93.0  | 18600 | 2.0534          | 0.845    |
| 0.0           | 94.0  | 18800 | 2.0220          | 0.845    |
| 0.0           | 95.0  | 19000 | 2.0061          | 0.845    |
| 0.0001        | 96.0  | 19200 | 1.9270          | 0.8475   |
| 0.0001        | 97.0  | 19400 | 1.9710          | 0.84     |
| 0.0001        | 98.0  | 19600 | 1.9561          | 0.845    |
| 0.0           | 99.0  | 19800 | 1.9567          | 0.845    |
| 0.0           | 100.0 | 20000 | 1.9551          | 0.85     |

Framework versions

  • Transformers 4.37.2
  • PyTorch 2.0.1+cu117
  • Datasets 2.16.1
  • Tokenizers 0.15.1