raygx committed on
Commit
0182711
1 Parent(s): 25fcc9e

Upload TFBertForSequenceClassification

Files changed (3)
  1. README.md +5 -13
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 license: mit
+base_model: Shushant/nepaliBERT
 tags:
 - generated_from_keras_callback
 model-index:
@@ -14,9 +15,7 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [Shushant/nepaliBERT](https://huggingface.co/Shushant/nepaliBERT) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 0.5775
-- Validation Loss: 0.6255
-- Epoch: 4
+
 
 ## Model description
 
@@ -35,23 +34,16 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.001}
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.0001}
 - training_precision: float32
 
 ### Training results
 
-| Train Loss | Validation Loss | Epoch |
-|:----------:|:---------------:|:-----:|
-| 0.8096 | 0.7200 | 0 |
-| 0.6789 | 0.6691 | 1 |
-| 0.6341 | 0.6525 | 2 |
-| 0.6028 | 0.6266 | 3 |
-| 0.5775 | 0.6255 | 4 |
 
 
 ### Framework versions
 
-- Transformers 4.29.2
+- Transformers 4.31.0
 - TensorFlow 2.12.0
-- Datasets 2.12.0
+- Datasets 2.13.1
 - Tokenizers 0.13.3
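The optimizer entry in the hyperparameters above records a full AdamWeightDecay configuration. A minimal sketch of reconstructing that optimizer with the `transformers` TensorFlow utilities, using only the values shown in the diff (the surrounding training loop and model are omitted, and this is an illustration, not the committer's actual training script):

```python
from transformers import AdamWeightDecay

# Values taken from the model card's recorded hyperparameters
# (the new, post-commit weight_decay_rate of 0.0001 is used).
optimizer = AdamWeightDecay(
    learning_rate=1e-06,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.0001,
)
```

An optimizer built this way can be passed directly to `model.compile(optimizer=optimizer, ...)` on a `TFBertForSequenceClassification` instance, which is how Keras-callback-generated cards like this one are typically produced.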
config.json CHANGED
@@ -28,7 +28,7 @@
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
   "torch_dtype": "float32",
-  "transformers_version": "4.29.2",
+  "transformers_version": "4.31.0",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 30522
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dec01be38fb7f7166034dbc4ec529597c582684d168ed3dbecc5936e6c304635
+oid sha256:587869e988bb65b06ae2bd664df30a331f2ef2e28af131d05b987e77fc7f9cb6
 size 438226204
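The `tf_model.h5` change only swaps the Git LFS pointer: the `oid` line is the SHA-256 of the real weights file. A short sketch of checking a downloaded copy against that pointer (the local filename is an assumption; only the `oid` value comes from the diff):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so a ~438 MB weights blob
    never has to be held in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# oid recorded in the new LFS pointer above
EXPECTED_OID = "587869e988bb65b06ae2bd664df30a331f2ef2e28af131d05b987e77fc7f9cb6"

# After downloading the weights (path is hypothetical):
# assert sha256_of_file("tf_model.h5") == EXPECTED_OID
```

This is the same check `git lfs` performs when materializing the pointer, so a mismatch indicates a corrupted or truncated download.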