
distilbart-cnn-12-6-finetuned-samsum

This model is a fine-tuned version of sshleifer/distilbart-cnn-12-6 on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5040
  • Rouge1: 41.0557
  • Rouge2: 20.8627
  • Rougel: 31.6375
  • Rougelsum: 38.3023
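
Below is a minimal usage sketch for dialogue summarization with the transformers pipeline. The checkpoint id anitageorge/distilbart-cnn-12-6-finetuned-samsum matches this repository; the example dialogue and generation settings are illustrative only.

```python
# Illustrative inference sketch: load the checkpoint with the transformers
# summarization pipeline. Generation settings (max/min length) are examples,
# not values taken from this model card.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="anitageorge/distilbart-cnn-12-6-finetuned-samsum",
)

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you tomorrow :-)"
)

result = summarizer(dialogue, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```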

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
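
As a rough sketch, the hyperparameters above map onto transformers' Seq2SeqTrainingArguments as follows; the output directory is a placeholder, and the Adam betas/epsilon listed are the library defaults.

```python
# Sketch of how the listed hyperparameters correspond to
# Seq2SeqTrainingArguments. "samsum-finetune" is a hypothetical output_dir.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="samsum-finetune",    # placeholder, not from the model card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```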

Training results

Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum
0.5843        | 1.0   | 921  | 0.5095          | 40.4545 | 21.2232 | 31.2992 | 37.9698
0.4562        | 2.0   | 1842 | 0.5010          | 40.9057 | 21.0576 | 31.4701 | 38.2105
0.3938        | 3.0   | 2763 | 0.5040          | 41.0557 | 20.8627 | 31.6375 | 38.3023
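
The ROUGE values above were logged during training. As a hedged sketch, scores of this kind can be computed with the evaluate library; the predictions and references below are illustrative placeholders, not samples from the samsum validation split.

```python
# Illustrative ROUGE computation with the `evaluate` library.
# `predictions` and `references` stand in for decoded model summaries and
# gold summaries; they are examples, not real evaluation data.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["Amanda baked cookies and will bring Jerry some tomorrow."]
references = ["Amanda baked cookies and will bring some to Jerry tomorrow."]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Returns rouge1, rouge2, rougeL, rougeLsum as fractions; scale by 100 to
# compare with the percentages reported in the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```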

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1