
pubhealth-expanded-2

This model is a fine-tuned version of facebook/bart-large on the clupubhealth dataset. It achieves the following results on the evaluation set:

  • Loss: 2.0350
  • Rouge1: 30.8894
  • Rouge2: 11.1867
  • Rougel: 23.9147
  • Rougelsum: 24.1629
  • Gen Len: 19.92
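
For readers who want to try the model, here is a minimal inference sketch. The model id is taken from this card; the generation settings (`max_length`, `num_beams`) and the input text are illustrative assumptions, not values recorded during training.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Model id assumed from this card; generation settings are illustrative.
model_id = "zwellington/pubhealth-expanded-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Your public-health article or claim text goes here."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)

# Beam-search summary; max_length/num_beams chosen for illustration only.
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```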

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
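
As a rough guide, the hyperparameters above correspond to `Seq2SeqTrainingArguments` along the lines of the sketch below. The `output_dir` is a placeholder, and `predict_with_generate=True` is an assumption implied by the ROUGE and Gen Len metrics; this is not the card author's actual training script.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the hyperparameters listed above.
# output_dir is a placeholder; predict_with_generate is assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="pubhealth-expanded-2",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 32
    lr_scheduler_type="linear",
    num_train_epochs=1,
    predict_with_generate=True,      # needed to compute ROUGE / Gen Len during eval
)
```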

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.8605        | 0.15  | 300  | 2.1208          | 28.8603 | 10.8357 | 23.0659 | 23.2449   | 19.91   |
| 2.7424        | 0.31  | 600  | 2.0667          | 31.4167 | 11.8643 | 24.7631 | 25.1062   | 19.83   |
| 2.6133        | 0.46  | 900  | 2.0508          | 30.8362 | 11.7188 | 23.8637 | 24.0363   | 19.92   |
| 2.5378        | 0.62  | 1200 | 2.0295          | 32.2237 | 12.4404 | 25.5336 | 25.847    | 19.875  |
| 2.5218        | 0.77  | 1500 | 2.0379          | 32.0398 | 11.9383 | 25.0801 | 25.2798   | 19.9    |
| 2.4902        | 0.93  | 1800 | 2.0350          | 30.8894 | 11.1867 | 23.9147 | 24.1629   | 19.92   |
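
ROUGE scores of the kind reported above can be computed with the `evaluate` library. The snippet below is a generic sketch with placeholder predictions and references, not the evaluation script used for this card.

```python
import evaluate

# Load the ROUGE metric; predictions/references below are placeholders.
rouge = evaluate.load("rouge")

predictions = ["model-generated summary of a public-health claim"]
references = ["human-written reference summary"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```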

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu117
  • Datasets 2.7.1
  • Tokenizers 0.13.2