
flan-t5-base-samsum

This model is a fine-tuned version of google/flan-t5-base on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3718
  • Rouge1: 47.4238
  • Rouge2: 23.5718
  • RougeL: 39.9102
  • RougeLsum: 43.5465
  • Gen Len: 17.2540

Model description

More information needed

Intended uses & limitations

More information needed
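The author has not documented intended usage. As a starting point, a minimal inference sketch is shown below; the sample dialogue is invented for illustration, and the snippet assumes the checkpoint is loaded from the Hub id `palak2111/flan-t5-base-samsum`:

```python
# Invented SAMSum-style dialogue, for illustration only.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

def summarize(text: str, max_new_tokens: int = 60) -> str:
    """Summarize one dialogue with the fine-tuned checkpoint.

    The pipeline is built inside the function so that importing this
    snippet does not trigger the (large) model download.
    """
    from transformers import pipeline  # requires `transformers` and torch
    summarizer = pipeline("summarization", model="palak2111/flan-t5-base-samsum")
    return summarizer(text, max_new_tokens=max_new_tokens)[0]["summary_text"]
```

Calling `summarize(dialogue)` downloads the ~248M-parameter model on first use and returns a short summary string.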

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
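The hyperparameters above map onto the keyword arguments of `Seq2SeqTrainingArguments` in `transformers` roughly as follows (a sketch, not the author's actual training script):

```python
# Reported hyperparameters, in the keyword shape that transformers'
# Seq2SeqTrainingArguments expects. The Adam betas/epsilon listed on
# the card are the library defaults, so no explicit flags are needed.
training_kwargs = {
    "learning_rate": 5e-5,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 5,
}
# Usage (sketch): Seq2SeqTrainingArguments(output_dir="...", **training_kwargs)
```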

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4553        | 1.0   | 1842 | 1.3885          | 46.3203 | 22.8542 | 38.7501 | 42.6178   | 17.5177 |
| 1.3375        | 2.0   | 3684 | 1.3746          | 47.1463 | 23.6451 | 39.7442 | 43.5545   | 17.2418 |
| 1.2785        | 3.0   | 5526 | 1.3718          | 47.4238 | 23.5718 | 39.9102 | 43.5465   | 17.2540 |
| 1.228         | 4.0   | 7368 | 1.3762          | 47.5418 | 24.051  | 40.0543 | 43.7907   | 17.3468 |
| 1.2045        | 5.0   | 9210 | 1.3767          | 47.3198 | 23.7145 | 39.7772 | 43.5547   | 17.2894 |
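The ROUGE scores in the table can be recomputed with the `evaluate` library; a minimal sketch, where `predictions` and `references` are placeholders for the model's generated summaries and the SAMSum reference summaries:

```python
def compute_rouge(predictions, references):
    """Score generated summaries against references with ROUGE.

    Requires the `evaluate` and `rouge_score` packages; the metric is
    loaded inside the function so importing this module stays cheap.
    """
    import evaluate
    rouge = evaluate.load("rouge")
    scores = rouge.compute(predictions=predictions, references=references)
    # evaluate returns fractions in [0, 1]; the card reports percentages.
    return {k: round(v * 100, 4) for k, v in scores.items()}
```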

Framework versions

  • Transformers 4.43.3
  • Pytorch 2.0.1
  • Datasets 2.20.0
  • Tokenizers 0.19.1
Model size: 248M params (F32, Safetensors)
