update model card README.md
README.md (changed)
Before:

@@ -1,6 +1,5 @@
 ---
-language:
-- en
 tags:
 - generated_from_trainer
 datasets:
@@ -14,13 +13,15 @@ model-index:
       name: Text Classification
       type: text-classification
     dataset:
-      name:
       type: glue
       args: cola
     metrics:
     - name: Matthews Correlation
       type: matthews_correlation
-      value: 0.
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -28,10 +29,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # mobilebert_sa_GLUE_Experiment_cola
 
-This model is a fine-tuned version of [](https://huggingface.co/) on the
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Matthews Correlation: 0.
 
 ## Model description
 
@@ -51,8 +52,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size:
-- eval_batch_size:
 - seed: 10
 - distributed_type: multi-GPU
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -64,23 +65,20 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
 |:-------------:|:-----:|:----:|:---------------:|:--------------------:|
-| 0.
-| 0.6078 | 2.0 |
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.4884 | 10.0 | 340 | 0.7010 | 0.0801 |
-| 0.4559 | 11.0 | 374 | 0.6731 | 0.0905 |
-| 0.4367 | 12.0 | 408 | 0.6893 | 0.0901 |
 
 
 ### Framework versions
 
-- Transformers 4.
 - Pytorch 1.14.0a0+410ce96
 - Datasets 2.8.0
 - Tokenizers 0.13.2
After:

@@ -1,6 +1,5 @@
 ---
+license: apache-2.0
 tags:
 - generated_from_trainer
 datasets:
@@ -14,13 +13,15 @@ model-index:
       name: Text Classification
       type: text-classification
     dataset:
+      name: glue
       type: glue
+      config: cola
+      split: validation
       args: cola
     metrics:
     - name: Matthews Correlation
       type: matthews_correlation
+      value: 0.08118499547243287
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
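The `model-index` block above is what drives the evaluation-results section on the Hub page, and it can also be read programmatically. A minimal sketch, assuming the updated README.md (with its full front matter, including the `model-index:` lines that fall outside this hunk) is available locally, for example after cloning the model repo, and that PyYAML is installed:

```python
# Parse the card's YAML front matter to recover the reported metric.
import yaml

with open("README.md", encoding="utf-8") as f:
    text = f.read()

front_matter = text.split("---")[1]  # content between the first pair of "---" fences
meta = yaml.safe_load(front_matter)

result = meta["model-index"][0]["results"][0]
metric = result["metrics"][0]
print(result["dataset"]["name"], result["dataset"]["config"], metric["name"], metric["value"])
# glue cola Matthews Correlation 0.08118499547243287
```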
@@ -28,10 +29,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # mobilebert_sa_GLUE_Experiment_cola
 
+This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the glue dataset.
 It achieves the following results on the evaluation set:
+- Loss: 0.6915
+- Matthews Correlation: 0.0812
 
 ## Model description
 
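For context, a minimal inference sketch for a checkpoint like the one described above. The checkpoint path below is a placeholder (the card does not state the final repo id), and the predicted labels come back as the default LABEL_0/LABEL_1 names unless `id2label` was configured during training:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Placeholder: point this at the published repo id or the local Trainer output directory.
checkpoint = "mobilebert_sa_GLUE_Experiment_cola"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

# CoLA is a binary acceptability task: one grammatical and one ungrammatical example.
print(classifier("The book was written by John."))
print(classifier("Books sent to each other the students."))
```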
@@ -51,8 +52,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
+- train_batch_size: 128
+- eval_batch_size: 128
 - seed: 10
 - distributed_type: multi-GPU
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
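A rough `TrainingArguments` sketch matching the hyperparameters listed above, not the exact launch command: `train_batch_size: 128` is treated here as the per-device size, and the scheduler, epoch count, and precision settings fall outside this hunk, so the values marked below are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mobilebert_sa_GLUE_Experiment_cola",
    learning_rate=5e-05,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch rows in the results table
    save_strategy="epoch",        # assumption
    num_train_epochs=50,          # assumption: the table stops at epoch 9, e.g. via early stopping
)
```

For reference, an effective batch size of 128 over CoLA's 8,551 training sentences gives ceil(8551 / 128) = 67 optimization steps per epoch, which matches the Step column in the results table below.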
@@ -64,23 +65,20 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
 |:-------------:|:-----:|:----:|:---------------:|:--------------------:|
+| 0.6122 | 1.0 | 67 | 0.6184 | 0.0 |
+| 0.6078 | 2.0 | 134 | 0.6180 | 0.0 |
+| 0.607 | 3.0 | 201 | 0.6185 | 0.0 |
+| 0.6052 | 4.0 | 268 | 0.6153 | 0.0 |
+| 0.5822 | 5.0 | 335 | 0.6292 | 0.0506 |
+| 0.5193 | 6.0 | 402 | 0.6422 | 0.0743 |
+| 0.4783 | 7.0 | 469 | 0.7020 | 0.0629 |
+| 0.4504 | 8.0 | 536 | 0.7422 | 0.0834 |
+| 0.4315 | 9.0 | 603 | 0.6915 | 0.0812 |
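The Matthews Correlation column can be reproduced with the `evaluate` library against the glue/cola validation split named in the YAML header. A sketch with placeholder predictions; note that a constant prediction scores 0.0, which is consistent with the 0.0 rows reported for the first four epochs.

```python
import evaluate
from datasets import load_dataset

cola_val = load_dataset("glue", "cola", split="validation")  # 1,043 labeled sentences
references = cola_val["label"]

matthews = evaluate.load("matthews_correlation")

# Placeholder predictions; in the real evaluation these are the model's argmax logits.
predictions = [1] * len(references)
print(matthews.compute(predictions=predictions, references=references))
# {'matthews_correlation': 0.0}
```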
 
 
 ### Framework versions
 
+- Transformers 4.26.0
 - Pytorch 1.14.0a0+410ce96
 - Datasets 2.8.0
 - Tokenizers 0.13.2
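The versions above pin the training environment; the PyTorch entry (1.14.0a0+410ce96) appears to be a pre-release build of the kind shipped in NVIDIA's containers, so only the release packages are easy to match exactly. A small sketch for checking a local environment against them before re-running the evaluation:

```python
import datasets
import tokenizers
import transformers

expected = {"transformers": "4.26.0", "datasets": "2.8.0", "tokenizers": "0.13.2"}
installed = {
    "transformers": transformers.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}

for name, want in expected.items():
    have = installed[name]
    print(f"{name}: expected {want}, found {have}" + ("" if have == want else " (mismatch)"))
```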