
urdu-emotions-hubert-large-Emotion

This model is a fine-tuned version of facebook/hubert-base-ls960 for Urdu speech emotion recognition, trained on an audio dataset loaded with the Hugging Face audiofolder loader. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 1.3374
  • Accuracy: 0.8167
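
As a usage illustration only (the card itself does not include inference code), the snippet below runs this checkpoint through the Transformers audio-classification pipeline. The repo id is the one this card describes; the file name example.wav is a placeholder.

```python
# Usage sketch, not part of the original card. "example.wav" is a placeholder
# path to an audio file.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="growingpenguin/urdu-emotions-hubert-large-Emotion",
)

# Returns a list of {"label": ..., "score": ...} dicts, one per emotion class.
print(classifier("example.wav"))
```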

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto the Trainer API):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.2
  • num_epochs: 60
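
For orientation, here is a sketch of how the hyperparameters above might map onto the Hugging Face TrainingArguments; the output directory is a placeholder and the actual training script is not part of this card. The Adam betas and epsilon listed above match the Trainer defaults.

```python
# Sketch only: the hyperparameters above expressed via TrainingArguments.
# output_dir is a placeholder; model and dataset setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="urdu-emotions-hubert",   # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,       # effective train batch size: 8 * 4 = 32
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=60,
    evaluation_strategy="epoch",         # assumption, consistent with the per-epoch rows below
)
```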

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.9333  | 7    | 0.9539          | 0.7833   |
| 0.1337        | 2.0     | 15   | 0.8845          | 0.8167   |
| 0.1048        | 2.9333  | 22   | 1.0824          | 0.75     |
| 0.0406        | 4.0     | 30   | 1.0672          | 0.8      |
| 0.0406        | 4.9333  | 37   | 1.2542          | 0.75     |
| 0.0931        | 6.0     | 45   | 0.9778          | 0.8      |
| 0.0646        | 6.9333  | 52   | 1.2557          | 0.75     |
| 0.0966        | 8.0     | 60   | 1.0541          | 0.7833   |
| 0.0966        | 8.9333  | 67   | 1.5521          | 0.75     |
| 0.1301        | 10.0    | 75   | 0.9688          | 0.8333   |
| 0.2704        | 10.9333 | 82   | 1.2517          | 0.7667   |
| 0.1625        | 12.0    | 90   | 1.3938          | 0.7667   |
| 0.1625        | 12.9333 | 97   | 1.4804          | 0.7667   |
| 0.1278        | 14.0    | 105  | 0.9219          | 0.8167   |
| 0.2052        | 14.9333 | 112  | 1.2735          | 0.75     |
| 0.2487        | 16.0    | 120  | 1.0251          | 0.7833   |
| 0.2487        | 16.9333 | 127  | 1.1808          | 0.8      |
| 0.1784        | 18.0    | 135  | 1.2522          | 0.7333   |
| 0.2182        | 18.9333 | 142  | 0.8958          | 0.8333   |
| 0.1688        | 20.0    | 150  | 1.1747          | 0.75     |
| 0.1688        | 20.9333 | 157  | 1.3938          | 0.8      |
| 0.2948        | 22.0    | 165  | 0.6410          | 0.8833   |
| 0.0945        | 22.9333 | 172  | 0.8846          | 0.8333   |
| 0.0738        | 24.0    | 180  | 0.7653          | 0.8333   |
| 0.0738        | 24.9333 | 187  | 0.7587          | 0.8333   |
| 0.0909        | 26.0    | 195  | 1.1861          | 0.8      |
| 0.0721        | 26.9333 | 202  | 0.8185          | 0.8333   |
| 0.1215        | 28.0    | 210  | 1.4169          | 0.7333   |
| 0.1215        | 28.9333 | 217  | 1.1844          | 0.8      |
| 0.0454        | 30.0    | 225  | 1.1273          | 0.7833   |
| 0.0915        | 30.9333 | 232  | 1.3536          | 0.8      |
| 0.0274        | 32.0    | 240  | 1.1561          | 0.7667   |
| 0.0274        | 32.9333 | 247  | 1.2680          | 0.7833   |
| 0.0251        | 34.0    | 255  | 1.3334          | 0.8      |
| 0.1263        | 34.9333 | 262  | 1.2555          | 0.8167   |
| 0.0389        | 36.0    | 270  | 1.0567          | 0.8      |
| 0.0389        | 36.9333 | 277  | 1.5755          | 0.7667   |
| 0.109         | 38.0    | 285  | 1.5332          | 0.7667   |
| 0.0599        | 38.9333 | 292  | 1.0758          | 0.85     |
| 0.0064        | 40.0    | 300  | 1.1251          | 0.85     |
| 0.0064        | 40.9333 | 307  | 1.3546          | 0.8      |
| 0.003         | 42.0    | 315  | 1.4129          | 0.8      |
| 0.0303        | 42.9333 | 322  | 1.3925          | 0.8      |
| 0.0016        | 44.0    | 330  | 1.3129          | 0.7833   |
| 0.0016        | 44.9333 | 337  | 1.2522          | 0.8      |
| 0.0308        | 46.0    | 345  | 1.3130          | 0.8167   |
| 0.002         | 46.9333 | 352  | 1.3005          | 0.8333   |
| 0.003         | 48.0    | 360  | 1.3434          | 0.8      |
| 0.003         | 48.9333 | 367  | 1.3762          | 0.8      |
| 0.0024        | 50.0    | 375  | 1.4090          | 0.8      |
| 0.0778        | 50.9333 | 382  | 1.3769          | 0.8167   |
| 0.0024        | 52.0    | 390  | 1.3748          | 0.8167   |
| 0.0024        | 52.9333 | 397  | 1.3649          | 0.8167   |
| 0.0132        | 54.0    | 405  | 1.3365          | 0.8167   |
| 0.0102        | 54.9333 | 412  | 1.3363          | 0.8167   |
| 0.0012        | 56.0    | 420  | 1.3374          | 0.8167   |
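
The accuracy values above were most likely produced by a compute_metrics hook passed to the Trainer; the snippet below shows a typical way such a hook is written for audio classification and is an assumption, not code taken from this card.

```python
# Assumed, typical accuracy hook for Trainer-based audio classification;
# the actual evaluation code is not included in this card.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```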

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1