mina1369 committed
Commit bc113b1
1 Parent(s): 79dd54c

End of training

Files changed (2):
  1. README.md +18 -33
  2. model.safetensors +1 -1
README.md CHANGED
@@ -3,26 +3,11 @@ license: apache-2.0
 base_model: facebook/wav2vec2-base
 tags:
 - generated_from_trainer
-datasets:
-- asvp_esd
 metrics:
 - accuracy
 model-index:
 - name: my_awesome_emotion_model
-  results:
-  - task:
-      name: Audio Classification
-      type: audio-classification
-    dataset:
-      name: asvp_esd
-      type: asvp_esd
-      config: ASVP_ESD
-      split: train
-      args: ASVP_ESD
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.46430910281597904
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # my_awesome_emotion_model
 
-This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the asvp_esd dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.7259
-- Accuracy: 0.4643
+- Loss: 1.6526
+- Accuracy: 0.4938
 
 ## Model description
 
@@ -67,21 +52,21 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.4474        | 0.98  | 47   | 2.3539          | 0.2567   |
-| 2.1378        | 1.99  | 95   | 2.1044          | 0.3530   |
-| 2.006         | 2.99  | 143  | 1.9574          | 0.3949   |
-| 1.8966        | 4.0   | 191  | 1.8966          | 0.4060   |
-| 1.851         | 4.98  | 238  | 1.8110          | 0.4348   |
-| 1.7784        | 5.99  | 286  | 1.7655          | 0.4486   |
-| 1.6856        | 6.99  | 334  | 1.7469          | 0.4650   |
-| 1.6076        | 8.0   | 382  | 1.7341          | 0.4558   |
-| 1.6216        | 8.98  | 429  | 1.7312          | 0.4617   |
-| 1.5692        | 9.84  | 470  | 1.7259          | 0.4643   |
+| 2.4362        | 0.98  | 47   | 2.3331          | 0.3124   |
+| 2.1636        | 1.99  | 95   | 2.0618          | 0.3707   |
+| 1.9869        | 2.99  | 143  | 1.9420          | 0.3870   |
+| 1.8686        | 4.0   | 191  | 1.8900          | 0.3955   |
+| 1.7885        | 4.98  | 238  | 1.7958          | 0.4414   |
+| 1.6869        | 5.99  | 286  | 1.7295          | 0.4650   |
+| 1.6069        | 6.99  | 334  | 1.7144          | 0.4748   |
+| 1.5487        | 8.0   | 382  | 1.6685          | 0.4905   |
+| 1.5576        | 8.98  | 429  | 1.6590          | 0.4964   |
+| 1.4776        | 9.84  | 470  | 1.6526          | 0.4938   |
 
 
 ### Framework versions
 
-- Transformers 4.35.2
-- Pytorch 2.1.0+cu118
-- Datasets 2.15.0
-- Tokenizers 0.15.0
+- Transformers 4.38.2
+- Pytorch 2.1.0+cu121
+- Datasets 2.18.0
+- Tokenizers 0.15.2
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bdfce904dbff8f885f406691a02eeee4cce5ad271c78a9aa2ca7f30f7a44fd8b
+oid sha256:1657c00bd0240a2cfcc5a1269ea15a756d3d01e743c5b0ce995592f6d1b07242
 size 378313676
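
For orientation, here is a minimal sketch of how the updated checkpoint could be loaded for inference with the Transformers audio-classification pipeline. The repository id `mina1369/my_awesome_emotion_model` and the input file name are assumptions for illustration, not details taken from this commit.

```python
# Minimal sketch: run emotion classification with the fine-tuned checkpoint.
# Assumptions: the checkpoint is published as "mina1369/my_awesome_emotion_model"
# (hypothetical repo id) and "example_speech.wav" is a mono 16 kHz recording,
# the sample rate wav2vec2-base expects.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="mina1369/my_awesome_emotion_model",  # hypothetical repo id
)

# The pipeline returns a list of {"label": ..., "score": ...} dicts,
# sorted by score in descending order.
predictions = classifier("example_speech.wav")
print(predictions)
```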