---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: synthea_t5_summarization_model
  results: []
---

# synthea_t5_summarization_model

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 1.2906
- ROUGE-1: 0.4543
- ROUGE-2: 0.137
- ROUGE-L: 0.4022
- ROUGE-Lsum: 0.4025
- Gen Len: 11.1279
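
The card does not document usage, so here is a minimal inference sketch using the transformers pipeline API. The repo id is an assumption inferred from the model name above, and the input text is a placeholder:

```python
from transformers import pipeline

# Repo id assumed from the model name in this card; adjust if the
# checkpoint lives elsewhere (e.g. a local output directory).
summarizer = pipeline(
    "summarization",
    model="abymmathew/synthea_t5_summarization_model",
)

text = "Replace this with the kind of record the model was fine-tuned on."
# max_length of 32 is generous given the ~11-token average generation
# length reported on the evaluation set.
print(summarizer(text, max_length=32, min_length=4)[0]["summary_text"])
```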

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
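
For reproducibility, below is a minimal sketch of equivalent Seq2SeqTrainingArguments. The output_dir, evaluation strategy, and generation flag are assumptions (the per-epoch validation table below implies epoch-level evaluation with generation); the Adam betas and epsilon listed above are the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="synthea_t5_summarization_model",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumed: results below are per epoch
    predict_with_generate=True,   # assumed: needed for ROUGE / Gen Len
)
```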

### Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| No log | 1.0 | 22 | 1.9114 | 0.289 | 0.0651 | 0.2709 | 0.2713 | 9.6279 |
| No log | 2.0 | 44 | 1.8289 | 0.3265 | 0.0822 | 0.2925 | 0.2935 | 10.2093 |
| No log | 3.0 | 66 | 1.7603 | 0.3617 | 0.107 | 0.3224 | 0.323 | 10.9651 |
| No log | 4.0 | 88 | 1.7081 | 0.3505 | 0.1017 | 0.3142 | 0.3146 | 11.7674 |
| No log | 5.0 | 110 | 1.6651 | 0.3497 | 0.0935 | 0.3055 | 0.3066 | 11.9419 |
| No log | 6.0 | 132 | 1.6200 | 0.3701 | 0.1022 | 0.3339 | 0.3348 | 11.7209 |
| No log | 7.0 | 154 | 1.5865 | 0.3726 | 0.1045 | 0.3328 | 0.3336 | 11.6279 |
| No log | 8.0 | 176 | 1.5552 | 0.3802 | 0.1049 | 0.3412 | 0.3417 | 11.4419 |
| No log | 9.0 | 198 | 1.5237 | 0.3982 | 0.115 | 0.3519 | 0.3533 | 11.3721 |
| No log | 10.0 | 220 | 1.4836 | 0.41 | 0.1188 | 0.3643 | 0.3645 | 11.4767 |
| No log | 11.0 | 242 | 1.4708 | 0.391 | 0.1142 | 0.3492 | 0.3491 | 11.6977 |
| No log | 12.0 | 264 | 1.4429 | 0.4157 | 0.1184 | 0.3689 | 0.3687 | 11.1977 |
| No log | 13.0 | 286 | 1.4312 | 0.4229 | 0.1204 | 0.3738 | 0.3741 | 11.0698 |
| No log | 14.0 | 308 | 1.4162 | 0.4231 | 0.1361 | 0.3806 | 0.3805 | 11.0465 |
| No log | 15.0 | 330 | 1.4011 | 0.4341 | 0.1406 | 0.3856 | 0.386 | 10.8953 |
| No log | 16.0 | 352 | 1.3877 | 0.439 | 0.1373 | 0.3942 | 0.3952 | 11.407 |
| No log | 17.0 | 374 | 1.3794 | 0.4488 | 0.1442 | 0.3987 | 0.3997 | 11.0581 |
| No log | 18.0 | 396 | 1.3673 | 0.4445 | 0.1418 | 0.3972 | 0.3979 | 11.186 |
| No log | 19.0 | 418 | 1.3581 | 0.4529 | 0.1375 | 0.4037 | 0.4047 | 11.1279 |
| No log | 20.0 | 440 | 1.3515 | 0.4378 | 0.1216 | 0.3921 | 0.3921 | 11.0 |
| No log | 21.0 | 462 | 1.3430 | 0.4533 | 0.1344 | 0.3996 | 0.4012 | 10.6512 |
| No log | 22.0 | 484 | 1.3390 | 0.4489 | 0.1426 | 0.4041 | 0.4042 | 10.8023 |
| 1.8003 | 23.0 | 506 | 1.3341 | 0.4444 | 0.1359 | 0.3986 | 0.3992 | 10.7674 |
| 1.8003 | 24.0 | 528 | 1.3266 | 0.4525 | 0.1357 | 0.4058 | 0.4059 | 10.9186 |
| 1.8003 | 25.0 | 550 | 1.3290 | 0.4517 | 0.1304 | 0.4024 | 0.4027 | 10.7209 |
| 1.8003 | 26.0 | 572 | 1.3217 | 0.4486 | 0.1405 | 0.402 | 0.402 | 11.4186 |
| 1.8003 | 27.0 | 594 | 1.3194 | 0.4484 | 0.1383 | 0.4004 | 0.401 | 11.1279 |
| 1.8003 | 28.0 | 616 | 1.3158 | 0.4407 | 0.1284 | 0.3946 | 0.395 | 11.4302 |
| 1.8003 | 29.0 | 638 | 1.3111 | 0.4457 | 0.1294 | 0.3974 | 0.397 | 11.2558 |
| 1.8003 | 30.0 | 660 | 1.3075 | 0.4502 | 0.132 | 0.3988 | 0.398 | 11.0581 |
| 1.8003 | 31.0 | 682 | 1.3045 | 0.4482 | 0.1328 | 0.3965 | 0.3963 | 11.0698 |
| 1.8003 | 32.0 | 704 | 1.3012 | 0.4492 | 0.1315 | 0.3978 | 0.3971 | 11.093 |
| 1.8003 | 33.0 | 726 | 1.2988 | 0.4426 | 0.1294 | 0.3922 | 0.3923 | 11.2326 |
| 1.8003 | 34.0 | 748 | 1.2978 | 0.451 | 0.1342 | 0.3992 | 0.3998 | 11.1512 |
| 1.8003 | 35.0 | 770 | 1.2980 | 0.4556 | 0.1386 | 0.4062 | 0.4069 | 11.0698 |
| 1.8003 | 36.0 | 792 | 1.2946 | 0.4578 | 0.1387 | 0.4063 | 0.4062 | 11.0581 |
| 1.8003 | 37.0 | 814 | 1.2921 | 0.4549 | 0.138 | 0.4031 | 0.4031 | 11.1047 |
| 1.8003 | 38.0 | 836 | 1.2910 | 0.4531 | 0.1362 | 0.4014 | 0.4017 | 11.1512 |
| 1.8003 | 39.0 | 858 | 1.2907 | 0.4531 | 0.1362 | 0.4014 | 0.4017 | 11.0814 |
| 1.8003 | 40.0 | 880 | 1.2906 | 0.4543 | 0.137 | 0.4022 | 0.4025 | 11.1279 |
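
The ROUGE columns match the keys produced by the evaluate library's rouge metric, which is presumably how these scores were computed; a minimal sketch with placeholder strings:

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder predictions/references; during training these would be the
# decoded model generations and the reference summaries.
predictions = ["the patient was diagnosed with hypertension"]
references = ["patient diagnosed with essential hypertension"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```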

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2