tiagoblima committed 3d93a99 (1 parent: 95318b4)

Model save
Files changed (1):
  1. README.md (+10, -12)
README.md CHANGED
```diff
@@ -3,8 +3,6 @@ license: mit
 base_model: unicamp-dl/ptt5-large-t5-vocab
 tags:
 - generated_from_trainer
-datasets:
-- tiagoblima/qg_squad_v1_pt
 model-index:
 - name: t5_large-qg-aap
   results: []
@@ -15,9 +13,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # t5_large-qg-aap
 
-This model is a fine-tuned version of [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) on the tiagoblima/qg_squad_v1_pt dataset.
+This model is a fine-tuned version of [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 7.4208
+- Loss: 5.5901
 
 ## Model description
 
@@ -36,9 +34,9 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.003
-- train_batch_size: 128
-- eval_batch_size: 2
+- learning_rate: 0.005
+- train_batch_size: 64
+- eval_batch_size: 4
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -48,11 +46,11 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 7.5413        | 1.0   | 404  | 8.8502          |
-| 6.7965        | 2.0   | 808  | 8.1184          |
-| 6.3963        | 3.0   | 1212 | 7.6950          |
-| 6.1664        | 4.0   | 1616 | 7.4855          |
-| 6.1028        | 5.0   | 2020 | 7.4208          |
+| 6.15          | 1.0   | 808  | 7.3361          |
+| 5.3335        | 2.0   | 1616 | 6.4092          |
+| 4.8807        | 3.0   | 2424 | 5.9132          |
+| 4.6492        | 4.0   | 3232 | 5.6656          |
+| 4.591         | 5.0   | 4040 | 5.5901          |
 
 
 ### Framework versions
```
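
For readers who want to reproduce the updated setup, the hyperparameters in this card map directly onto `Seq2SeqTrainingArguments` in 🤗 Transformers. The sketch below is a reconstruction, not the author's actual training script: `output_dir` is a placeholder, `num_train_epochs=5` and per-epoch evaluation are inferred from the results table, and the per-device batch size assumes single-device training (so it equals the card's total `train_batch_size`).

```python
from transformers import Seq2SeqTrainingArguments

# Minimal sketch mirroring the updated card's hyperparameters.
# output_dir is a placeholder; epochs/eval cadence are inferred, not documented.
args = Seq2SeqTrainingArguments(
    output_dir="t5_large-qg-aap",    # placeholder
    learning_rate=5e-3,              # card: learning_rate: 0.005
    per_device_train_batch_size=64,  # card: train_batch_size: 64 (single-device assumption)
    per_device_eval_batch_size=4,    # card: eval_batch_size: 4
    seed=42,
    adam_beta1=0.9,                  # card: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # card: epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=5,              # inferred: five epochs appear in the results table
    evaluation_strategy="epoch",     # inferred: one validation loss per epoch
)
```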
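The card's usage sections still read "More information needed", but since the base model is a T5 checkpoint, loading the fine-tuned model for generation follows the standard Transformers pattern. In the sketch below, the repo id `tiagoblima/t5_large-qg-aap` is assumed from the author and model-index name, and the question-generation prompt format is a guess, as the card does not specify one; verify both against the published repository.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed repo id, inferred from the committer and model-index name.
model_id = "tiagoblima/t5_large-qg-aap"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input: the card does not document the expected prompt format.
# Portuguese context: "Brazil is the largest country in South America."
text = "gerar pergunta: O Brasil é o maior país da América do Sul."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```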