silmi224 committed
Commit b487aa7
1 Parent(s): a5b5433

Training complete

README.md ADDED
@@ -0,0 +1,76 @@
+ ---
+ base_model: silmi224/finetune-led-35000
+ tags:
+ - summarization
+ - generated_from_trainer
+ metrics:
+ - rouge
+ model-index:
+ - name: led-risalah_data_v17_3
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # led-risalah_data_v17_3
+
+ This model is a fine-tuned version of [silmi224/finetune-led-35000](https://huggingface.co/silmi224/finetune-led-35000) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.6551
+ - Rouge1: 25.33
+ - Rouge2: 12.4758
+ - Rougel: 18.3801
+ - Rougelsum: 24.0275
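As a usage note, here is a minimal inference sketch with the `transformers` summarization pipeline. The repo id `silmi224/led-risalah_data_v17_3` is an assumption inferred from the model-index name above, and the decoding bounds match the `generation_config.json` added later in this commit.

```python
# Minimal sketch, not from the commit: the repo id is assumed from the
# model-index name above (silmi224/led-risalah_data_v17_3).
from transformers import pipeline

summarizer = pipeline("summarization", model="silmi224/led-risalah_data_v17_3")

text = "..."  # a long input document; LED is designed for long-sequence inputs
print(summarizer(text, max_length=128, min_length=40)[0]["summary_text"])
```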
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 200
+ - num_epochs: 10
+ - mixed_precision_training: Native AMP
+
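A hedged sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments` in Transformers 4.41.x. The `output_dir`, `predict_with_generate`, and per-epoch evaluation cadence are assumptions (the last suggested by the results table below), not values recorded in this commit.

```python
# Reconstruction of the training setup from the hyperparameter list above;
# lines marked "assumed" are not taken from the commit.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="led-risalah_data_v17_3",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 1 x 4 = 4
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed precision
    predict_with_generate=True,  # assumed; needed to compute ROUGE during eval
    evaluation_strategy="epoch",  # assumed from the per-epoch results table
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the optimizer defaults,
# so they need not be set explicitly.
```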
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
+ |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
+ | 2.9826        | 1.0   | 20   | 2.5250          | 11.8736 | 3.4553  | 8.0701  | 10.4233   |
+ | 2.5516        | 2.0   | 40   | 2.2001          | 15.7664 | 5.0213  | 10.8555 | 14.1975   |
+ | 2.2334        | 3.0   | 60   | 2.0424          | 17.0425 | 6.006   | 10.956  | 15.2795   |
+ | 1.9577        | 4.0   | 80   | 1.9305          | 19.1792 | 7.6754  | 12.651  | 17.7519   |
+ | 1.8602        | 5.0   | 100  | 1.8351          | 22.4846 | 8.3095  | 14.0022 | 20.587    |
+ | 1.702         | 6.0   | 120  | 1.7809          | 21.9395 | 8.5042  | 14.9427 | 20.3436   |
+ | 1.6525        | 7.0   | 140  | 1.7286          | 23.7825 | 10.9231 | 15.9319 | 22.0902   |
+ | 1.5285        | 8.0   | 160  | 1.6839          | 24.1286 | 11.2382 | 16.7057 | 22.3731   |
+ | 1.4623        | 9.0   | 180  | 1.6644          | 23.8767 | 12.3834 | 17.5761 | 22.6869   |
+ | 1.4175        | 10.0  | 200  | 1.6551          | 25.33   | 12.4758 | 18.3801 | 24.0275   |
+
+
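The ROUGE columns appear to be F-measure values scaled to 0-100. A sketch of how such numbers are commonly computed with the `evaluate` library; the exact metric wiring used for this run is not part of the commit.

```python
# Illustrative only: ROUGE scoring in the style of the table above.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a generated summary"],
    references=["the reference summary"],
)
# evaluate returns fractions in [0, 1]; the table reports them scaled by 100
print({k: round(v * 100, 4) for k, v in scores.items()})
```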
+
+ ### Framework versions
+
+ - Transformers 4.41.2
+ - Pytorch 2.1.2
+ - Datasets 2.19.2
+ - Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "length_penalty": 2.0,
+   "max_length": 128,
+   "min_length": 40,
+   "no_repeat_ngram_size": 3,
+   "num_beams": 2,
+   "pad_token_id": 1,
+   "transformers_version": "4.41.2",
+   "use_cache": false
+ }
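These decoding settings (2 beams, length penalty 2.0, 40-128 tokens, no repeated trigrams) are applied automatically by `generate()` when the model is loaded from the Hub. A sketch of loading them explicitly, again under the assumed repo id:

```python
# Sketch under the same assumed repo id; GenerationConfig.from_pretrained
# reads the generation_config.json shown above.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

repo = "silmi224/led-risalah_data_v17_3"  # assumed
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)
gen_config = GenerationConfig.from_pretrained(repo)

inputs = tokenizer("a long document ...", return_tensors="pt")
summary_ids = model.generate(**inputs, generation_config=gen_config)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```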
runs/Jul14_13-01-21_a43527d48c8d/events.out.tfevents.1720962109.a43527d48c8d.34.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3aa7145b781a7620352ad3ed7a52872ccc1a2bdcf7f5be64acae284c8e5a601a
- size 14044
+ oid sha256:e25d83d96554610fc29ab18c36e52815e18e99d254603825b618e4898f04e693
+ size 14872
runs/Jul14_13-01-21_a43527d48c8d/events.out.tfevents.1720965574.a43527d48c8d.34.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55164c4240120ee350b30a3ffd10352abf74b5ba9eead5bff6c646bfcc5e4da8
+ size 562