---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-samsum-en
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: samsum
      type: samsum
      args: samsum
    metrics:
    - name: Rouge1
      type: rouge
      value: 42.3215
---

# t5-small-finetuned-samsum-en

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7863
- Rouge1: 42.3215
- Rouge2: 19.4644
- Rougel: 35.3715
- Rougelsum: 39.1274

## Model description

The model is [t5-small](https://huggingface.co/t5-small) fine-tuned for abstractive dialogue summarization on SamSum, a corpus of short, messenger-style English conversations paired with human-written summaries. A minimal usage sketch appears at the end of this card.

## Intended uses & limitations

The model is intended for summarizing short, informal English chat dialogues similar to those in SamSum. As with any abstractive summarizer, it can drop or invent details, and its quality on longer conversations or other domains has not been evaluated here.

## Training and evaluation data

The model was fine-tuned and evaluated on the samsum dataset. The step counts below (300 steps per epoch at batch size 10) imply roughly 3,000 training examples per epoch, which suggests that a subset of the full SamSum training split was used.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` appears at the end of this card):
- learning_rate: 5.6e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.2448        | 1.0   | 300  | 1.8993          | 39.5059 | 17.0654 | 32.9974 | 36.6153   |
| 2.0428        | 2.0   | 600  | 1.8499          | 40.0529 | 17.4367 | 33.4804 | 37.057    |
| 1.9626        | 3.0   | 900  | 1.8278          | 40.7994 | 17.918  | 34.0773 | 37.6219   |
| 1.8992        | 4.0   | 1200 | 1.8118          | 41.3782 | 18.5579 | 34.7794 | 38.4994   |
| 1.8429        | 5.0   | 1500 | 1.8006          | 41.8624 | 18.7592 | 34.9262 | 38.7019   |
| 1.8057        | 6.0   | 1800 | 1.7988          | 41.1316 | 18.5242 | 34.7271 | 38.2821   |
| 1.775         | 7.0   | 2100 | 1.7856          | 42.2036 | 19.3343 | 35.4442 | 39.2114   |
| 1.7376        | 8.0   | 2400 | 1.7797          | 41.9569 | 18.9482 | 35.1953 | 38.7609   |
| 1.7096        | 9.0   | 2700 | 1.7780          | 42.6065 | 19.2152 | 35.4563 | 39.2736   |
| 1.6885        | 10.0  | 3000 | 1.7826          | 42.1595 | 18.8477 | 34.8679 | 38.9388   |
| 1.6581        | 11.0  | 3300 | 1.7809          | 42.291  | 19.0846 | 35.1938 | 38.894    |
| 1.6392        | 12.0  | 3600 | 1.7824          | 42.3588 | 19.4507 | 35.4588 | 39.2067   |
| 1.6258        | 13.0  | 3900 | 1.7806          | 42.0932 | 19.002  | 35.0112 | 38.8053   |
| 1.6042        | 14.0  | 4200 | 1.7828          | 42.0564 | 19.3141 | 35.2479 | 38.8301   |
| 1.5993        | 15.0  | 4500 | 1.7824          | 42.6056 | 19.5164 | 35.4112 | 39.2322   |
| 1.5869        | 16.0  | 4800 | 1.7839          | 42.1505 | 19.1529 | 35.0853 | 38.8788   |
| 1.5778        | 17.0  | 5100 | 1.7827          | 42.5416 | 19.5103 | 35.5507 | 39.293    |
| 1.5716        | 18.0  | 5400 | 1.7865          | 42.3028 | 19.3783 | 35.3466 | 39.0594   |
| 1.5615        | 19.0  | 5700 | 1.7857          | 42.4001 | 19.5111 | 35.4686 | 39.1614   |
| 1.5606        | 20.0  | 6000 | 1.7863          | 42.3215 | 19.4644 | 35.3715 | 39.1274   |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
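For a quick check of what the model produces, it can be loaded through the standard `transformers` summarization pipeline. This is a minimal sketch rather than the author's code: the hub id is assumed from this card's name and may need the owning namespace prepended (e.g. `your-user/t5-small-finetuned-samsum-en`), and the example dialogue is invented.

```python
from transformers import pipeline

# Hub id assumed from the card name; prepend the owning user/org if needed.
summarizer = pipeline("summarization", model="t5-small-finetuned-samsum-en")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! 12:30 at the usual place?\n"
    "Anna: Perfect, see you there."
)

# max_length/min_length bound the generated summary length in tokens.
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```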
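The hyperparameters listed above map directly onto the `Seq2SeqTrainingArguments` used by the `Seq2SeqTrainer` API in Transformers 4.19. The sketch below mirrors them under stated assumptions; data preprocessing and the trainer setup itself are elided.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters in this card. Adam betas=(0.9, 0.999),
# epsilon=1e-08 and the linear schedule are the Trainer defaults, so they
# need no explicit arguments here.
args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-samsum-en",
    learning_rate=5.6e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumption: inferred from the per-epoch rows in the results table
    predict_with_generate=True,   # assumption: ROUGE needs generated summaries, not logits
)
```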
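The ROUGE values in the tables above are F-measure scores scaled to 0-100. Assuming they came from the `rouge` metric shipped with Datasets 2.2.2 (the version pinned above), they could be recomputed along these lines; the validation slice size and generation settings here are illustrative only.

```python
from datasets import load_dataset, load_metric
from transformers import pipeline

# Loading samsum requires the py7zr package.
samsum = load_dataset("samsum", split="validation[:8]")  # small slice for illustration
rouge = load_metric("rouge")  # assumption: the same metric the card's scores came from

summarizer = pipeline("summarization", model="t5-small-finetuned-samsum-en")  # hub id assumed
preds = [out["summary_text"]
         for out in summarizer(samsum["dialogue"], max_length=60, min_length=5)]

scores = rouge.compute(predictions=preds, references=samsum["summary"])
# Each entry is an AggregateScore; mid.fmeasure is a fraction, and the
# tables above report it scaled by 100.
print({k: round(v.mid.fmeasure * 100, 4) for k, v in scores.items()})
```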