---

license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: hubert-classifier
  results: []
---



# hubert-classifier

This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on an unknown dataset.
It achieves the following results on the evaluation set (these figures match the step-500 / epoch-17.24 entry in the training log below):
- Loss: 2.4637
- Accuracy: 0.5230
- Precision: 0.4945
- Recall: 0.5230
- F1: 0.4700
- Binary: 0.6634

## Model description

The base model, [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960), is a HuBERT base model pretrained on 960 hours of 16 kHz LibriSpeech speech. This checkpoint fine-tunes it with a classification head for an audio classification task; the specific task and label set were not documented.

## Intended uses & limitations

More information needed
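
As a minimal inference sketch (not from the author): the checkpoint is presumably a `HubertForSequenceClassification` model expecting 16 kHz mono audio. The repo id and audio file below are hypothetical placeholders.

```python
# A minimal inference sketch, not the author's code. The repo id is a
# hypothetical placeholder, and the model is assumed to be a
# HubertForSequenceClassification checkpoint expecting 16 kHz mono audio.
import torch
import torchaudio
from transformers import AutoFeatureExtractor, HubertForSequenceClassification

model_id = "path/to/hubert-classifier"  # hypothetical placeholder
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = HubertForSequenceClassification.from_pretrained(model_id)
model.eval()

# Load a clip and resample to the 16 kHz rate HuBERT was pretrained on.
waveform, sample_rate = torchaudio.load("example.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = feature_extractor(
    waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```

Alternatively, the `audio-classification` pipeline in Transformers wraps these same steps.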

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
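
These values map onto `TrainingArguments` roughly as follows (a sketch, not the author's actual training script; the `output_dir` is an assumption, and Adam's betas/epsilon are the Transformers defaults, matching the values above):

```python
# A sketch of TrainingArguments matching the hyperparameters listed above.
# Dataset loading, preprocessing, and compute_metrics are omitted and would
# need to be supplied. Argument names follow Transformers 4.38.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hubert-classifier",   # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,    # effective train batch size of 256
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                        # Native AMP mixed-precision training
)
```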



### Training results



| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     | Binary |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| No log        | 1.72  | 50   | 4.2179          | 0.0484   | 0.0065    | 0.0484 | 0.0105 | 0.3058 |
| No log        | 3.45  | 100  | 3.8319          | 0.1017   | 0.0846    | 0.1017 | 0.0618 | 0.3634 |
| No log        | 5.17  | 150  | 3.5448          | 0.1864   | 0.1327    | 0.1864 | 0.1311 | 0.4274 |
| No log        | 6.90  | 200  | 3.3129          | 0.2470   | 0.2063    | 0.2470 | 0.1855 | 0.4671 |
| No log        | 8.62  | 250  | 3.1207          | 0.3123   | 0.3090    | 0.3123 | 0.2599 | 0.5150 |
| No log        | 10.34 | 300  | 2.9535          | 0.3826   | 0.3524    | 0.3826 | 0.3277 | 0.5644 |
| No log        | 12.07 | 350  | 2.8121          | 0.4310   | 0.3894    | 0.4310 | 0.3695 | 0.5983 |
| No log        | 13.79 | 400  | 2.6726          | 0.4431   | 0.3939    | 0.4431 | 0.3775 | 0.6075 |
| No log        | 15.52 | 450  | 2.5597          | 0.4818   | 0.4413    | 0.4818 | 0.4206 | 0.6370 |
| 3.4474        | 17.24 | 500  | 2.4637          | 0.5230   | 0.4945    | 0.5230 | 0.4700 | 0.6634 |
| 3.4474        | 18.97 | 550  | 2.3747          | 0.5400   | 0.5111    | 0.5400 | 0.4920 | 0.6760 |
| 3.4474        | 20.69 | 600  | 2.3113          | 0.5545   | 0.5212    | 0.5545 | 0.5067 | 0.6872 |
| 3.4474        | 22.41 | 650  | 2.2492          | 0.5714   | 0.5475    | 0.5714 | 0.5274 | 0.7007 |
| 3.4474        | 24.14 | 700  | 2.2053          | 0.5738   | 0.5511    | 0.5738 | 0.5336 | 0.7015 |
| 3.4474        | 25.86 | 750  | 2.1757          | 0.5714   | 0.5477    | 0.5714 | 0.5283 | 0.7015 |
| 3.4474        | 27.59 | 800  | 2.1491          | 0.5908   | 0.5574    | 0.5908 | 0.5468 | 0.7140 |
| 3.4474        | 29.31 | 850  | 2.1403          | 0.5932   | 0.5625    | 0.5932 | 0.5506 | 0.7167 |
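
A `compute_metrics` function along these lines would reproduce the Accuracy, Precision, Recall, and F1 columns (a sketch: weighted averaging is assumed, consistent with Recall equaling Accuracy in the table; the "Binary" metric's definition is not documented, so it is omitted):

```python
# A sketch of a compute_metrics function producing the metric columns above.
# Weighted averaging is an assumption; "Binary" is undocumented and omitted.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```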





### Framework versions



- Transformers 4.38.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1