aydink committed
Commit 6bfbf6d
1 Parent(s): de143a3

Model save

Files changed (1)
  1. README.md +20 -22
README.md CHANGED
@@ -20,13 +20,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0353
-- Accuracy: 1.0
-- F1: 1.0
-- Recall: 1.0
-- Precision: 1.0
-- Mcc: 1.0
-- Auc: 1.0
+- Loss: 0.2002
+- Accuracy: 0.955
+- F1: 0.9549
+- Recall: 0.9550
+- Precision: 0.9551
+- Mcc: 0.9438
+- Auc: 0.9942
 
 ## Model description
 
@@ -45,12 +45,10 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 3e-05
+- learning_rate: 1e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 16
-- total_train_batch_size: 128
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
@@ -61,21 +59,21 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | Mcc | Auc |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|:------:|:------:|
-| 0.1939 | 0.96 | 12 | 0.0971 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-| 0.2049 | 2.0 | 25 | 0.0784 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-| 0.1763 | 2.96 | 37 | 0.0645 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-| 0.1441 | 4.0 | 50 | 0.0972 | 0.985 | 0.9850 | 0.985 | 0.9860 | 0.9815 | 0.9994 |
-| 0.1264 | 4.96 | 62 | 0.0627 | 0.9925 | 0.9925 | 0.9925 | 0.9928 | 0.9907 | 1.0 |
-| 0.1148 | 6.0 | 75 | 0.0426 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-| 0.1114 | 6.96 | 87 | 0.0394 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-| 0.0911 | 8.0 | 100 | 0.0365 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-| 0.078 | 8.96 | 112 | 0.0358 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
-| 0.0797 | 9.6 | 120 | 0.0353 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+| 1.5544 | 1.0 | 200 | 1.5193 | 0.405 | 0.3628 | 0.4050 | 0.5940 | 0.2904 | 0.8407 |
+| 1.1406 | 2.0 | 400 | 0.9811 | 0.6375 | 0.5780 | 0.6375 | 0.6712 | 0.5734 | 0.9464 |
+| 0.7902 | 3.0 | 600 | 0.6775 | 0.8125 | 0.7969 | 0.8125 | 0.8181 | 0.7740 | 0.9724 |
+| 0.5346 | 4.0 | 800 | 0.5083 | 0.8725 | 0.8683 | 0.8725 | 0.8774 | 0.8438 | 0.9834 |
+| 0.5139 | 5.0 | 1000 | 0.3943 | 0.9025 | 0.8988 | 0.9025 | 0.9074 | 0.8809 | 0.9879 |
+| 0.5136 | 6.0 | 1200 | 0.3314 | 0.915 | 0.9145 | 0.915 | 0.9174 | 0.8945 | 0.9881 |
+| 0.3726 | 7.0 | 1400 | 0.2894 | 0.925 | 0.9241 | 0.925 | 0.9258 | 0.9069 | 0.9878 |
+| 0.3072 | 8.0 | 1600 | 0.2267 | 0.9325 | 0.9314 | 0.9325 | 0.9349 | 0.9167 | 0.9914 |
+| 0.1948 | 9.0 | 1800 | 0.2117 | 0.945 | 0.9445 | 0.945 | 0.9461 | 0.9317 | 0.9931 |
+| 0.2312 | 10.0 | 2000 | 0.2002 | 0.955 | 0.9549 | 0.9550 | 0.9551 | 0.9438 | 0.9942 |
 
 
 ### Framework versions
 
-- Transformers 4.40.2
-- Pytorch 2.2.1+cu121
+- Transformers 4.41.0
+- Pytorch 2.3.0+cu121
 - Datasets 2.19.1
 - Tokenizers 0.19.1
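
The new side of this diff records the run's key hyperparameters. Below is a minimal sketch of how they might map onto `transformers.TrainingArguments`, assuming a standard `Trainer` setup; the model, dataset, and metric function are not part of this card, and `num_train_epochs=10` plus per-epoch evaluation are inferred from the results table rather than stated in the hyperparameter list.

```python
# Sketch only: mirrors the hyperparameters listed in the updated card.
# output_dir is a placeholder; the dataset and compute_metrics are not shown here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hubert-base-ls960-finetuned",  # placeholder path
    learning_rate=1e-5,                        # changed from 3e-5 in this commit
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,                       # inferred from the 10-epoch results table
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,                            # Adam settings as listed in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",               # per-epoch eval, matching the results table
    logging_strategy="epoch",
)
```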
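For completeness, a hedged usage sketch: the card does not state the task, but a HuBERT fine-tune reported with accuracy, F1, MCC, and AUC is typically an audio classifier, so the example below assumes the checkpoint works with the `audio-classification` pipeline. The repo id and audio path are placeholders, not values from this commit.

```python
# Sketch only, assuming an audio-classification head on top of HuBERT.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="your-username/your-hubert-finetune",  # placeholder: replace with the actual repo id
)

# The pipeline resamples input audio to the feature extractor's rate (16 kHz for HuBERT).
predictions = classifier("sample.wav")  # placeholder audio file
for p in predictions:
    print(f"{p['label']}: {p['score']:.4f}")
```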