tiagoblima committed on
Commit ab362e5
1 Parent(s): 2ad7ddc

End of training
README.md CHANGED
@@ -3,6 +3,8 @@ license: mit
 base_model: unicamp-dl/ptt5-small-t5-vocab
 tags:
 - generated_from_trainer
+datasets:
+- tiagoblima/qg_squad_v1_pt
 model-index:
 - name: t5_small-qg-ctx-a
   results: []
@@ -13,7 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # t5_small-qg-ctx-a
 
-This model is a fine-tuned version of [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) on an unknown dataset.
+This model is a fine-tuned version of [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) on the tiagoblima/qg_squad_v1_pt dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.5839
 
 ## Model description
 
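The headline Loss in the updated model card appears to be the full-precision eval_loss from the results files below, rounded to four decimal places. A quick sanity check in plain Python:

```python
# "Loss: 1.5839" in the README vs. the full-precision value stored in
# eval_results.json / all_results.json in this same commit.
eval_loss = 1.5838885307312012

print(round(eval_loss, 4))  # 1.5839
```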
all_results.json CHANGED
@@ -1,13 +1,13 @@
 {
   "epoch": 2.0,
   "eval_loss": 1.5838885307312012,
-  "eval_runtime": 78.2351,
+  "eval_runtime": 77.8366,
   "eval_samples": 8890,
-  "eval_samples_per_second": 113.632,
-  "eval_steps_per_second": 14.214,
+  "eval_samples_per_second": 114.214,
+  "eval_steps_per_second": 14.286,
   "train_loss": 1.563031800902716,
-  "train_runtime": 2367.9689,
+  "train_runtime": 2354.5427,
   "train_samples": 51704,
-  "train_samples_per_second": 43.669,
-  "train_steps_per_second": 0.682
+  "train_samples_per_second": 43.919,
+  "train_steps_per_second": 0.686
 }
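The updated throughput figures are internally consistent: each `*_samples_per_second` value is the sample count divided by the corresponding runtime (with the training count scaled by the 2 epochs), and `train_steps_per_second` follows from the step count recorded in trainer_state.json. A small sketch that recomputes them from the raw quantities in this commit:

```python
# Recompute the post-commit throughput metrics in all_results.json from the
# raw quantities logged in the same commit. step=1616 is taken from
# trainer_state.json; all other inputs appear verbatim in all_results.json.
epochs = 2.0
train_samples = 51704
train_runtime = 2354.5427   # seconds
train_steps = 1616
eval_samples = 8890
eval_runtime = 77.8366      # seconds

reported = {
    "train_samples_per_second": 43.919,
    "train_steps_per_second": 0.686,
    "eval_samples_per_second": 114.214,
}
recomputed = {
    "train_samples_per_second": epochs * train_samples / train_runtime,
    "train_steps_per_second": train_steps / train_runtime,
    "eval_samples_per_second": eval_samples / eval_runtime,
}

# Each recomputed value matches the reported one to the logged precision.
for key, value in reported.items():
    assert abs(recomputed[key] - value) < 1e-3, key
```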
eval_results.json CHANGED
@@ -1,8 +1,8 @@
 {
   "epoch": 2.0,
   "eval_loss": 1.5838885307312012,
-  "eval_runtime": 78.2351,
+  "eval_runtime": 77.8366,
   "eval_samples": 8890,
-  "eval_samples_per_second": 113.632,
-  "eval_steps_per_second": 14.214
+  "eval_samples_per_second": 114.214,
+  "eval_steps_per_second": 14.286
 }
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
   "epoch": 2.0,
   "train_loss": 1.563031800902716,
-  "train_runtime": 2367.9689,
+  "train_runtime": 2354.5427,
   "train_samples": 51704,
-  "train_samples_per_second": 43.669,
-  "train_steps_per_second": 0.682
+  "train_samples_per_second": 43.919,
+  "train_steps_per_second": 0.686
 }
trainer_state.json CHANGED
@@ -31,9 +31,9 @@
       "step": 1616,
       "total_flos": 1.0496568754962432e+16,
       "train_loss": 1.563031800902716,
-      "train_runtime": 2367.9689,
-      "train_samples_per_second": 43.669,
-      "train_steps_per_second": 0.682
+      "train_runtime": 2354.5427,
+      "train_samples_per_second": 43.919,
+      "train_steps_per_second": 0.686
     }
   ],
   "logging_steps": 500,