
text_shortening_model_v5

This model is a fine-tuned version of t5-small on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • Loss: 1.3950
  • Rouge1: 0.6032
  • Rouge2: 0.3745
  • Rougel: 0.5559
  • Rougelsum: 0.556
  • Bert precision: 0.8961
  • Bert recall: 0.9059
  • Average word count: 11.4071
  • Max word count: 16
  • Min word count: 6
  • Average token count: 16.7643
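
This card does not yet document usage, so below is a minimal inference sketch, assuming the standard Transformers seq2seq API. The model ID comes from the model tree at the bottom of the card; the generation settings, the example input, and any task prefix the fine-tune may expect are illustrative assumptions.

```python
# Minimal inference sketch for ldos/text_shortening_model_v5.
# Generation settings are assumptions, not values documented in this card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ldos/text_shortening_model_v5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "This is a rather long sentence that we would like the model to shorten."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
# max_length is a guess informed by the evaluation set's max word count (16),
# leaving headroom for subword tokens.
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```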

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
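
As a reference point, here is a minimal sketch of how these values map onto Seq2SeqTrainingArguments, assuming the standard Trainer setup. Only the listed values come from this card; output_dir and the per-epoch evaluation strategy are assumptions, and the Adam betas and epsilon above are the Transformers defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch mirroring the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="text_shortening_model_v5",  # assumed name, not documented
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the library default.
    evaluation_strategy="epoch",  # assumed; consistent with the per-epoch results below
)
```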

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.311 | 1.0 | 8 | 1.8181 | 0.5439 | 0.3249 | 0.4963 | 0.4961 | 0.879 | 0.8847 | 11.65 | 18 | 1 | 16.8857 |
| 1.174 | 2.0 | 16 | 1.6800 | 0.55 | 0.3147 | 0.4935 | 0.4931 | 0.8779 | 0.8891 | 12.1214 | 18 | 5 | 17.2857 |
| 1.1265 | 3.0 | 24 | 1.6149 | 0.5642 | 0.3349 | 0.5109 | 0.5105 | 0.8833 | 0.8935 | 11.8643 | 18 | 5 | 16.9571 |
| 1.1075 | 4.0 | 32 | 1.5730 | 0.5657 | 0.3383 | 0.5163 | 0.5161 | 0.8836 | 0.8961 | 11.9643 | 18 | 4 | 17.0929 |
| 1.062 | 5.0 | 40 | 1.5421 | 0.5819 | 0.3544 | 0.53 | 0.5292 | 0.8858 | 0.9007 | 12.1286 | 18 | 5 | 17.2571 |
| 1.021 | 6.0 | 48 | 1.5085 | 0.5792 | 0.3514 | 0.5262 | 0.5255 | 0.8848 | 0.8986 | 11.9929 | 18 | 5 | 17.1 |
| 0.998 | 7.0 | 56 | 1.4826 | 0.5825 | 0.3548 | 0.5335 | 0.5317 | 0.887 | 0.9 | 11.8357 | 18 | 6 | 17.0857 |
| 0.9794 | 8.0 | 64 | 1.4659 | 0.5814 | 0.3508 | 0.5306 | 0.5297 | 0.8877 | 0.8993 | 11.6714 | 18 | 4 | 16.9286 |
| 0.9553 | 9.0 | 72 | 1.4533 | 0.5871 | 0.3545 | 0.533 | 0.5318 | 0.8874 | 0.9018 | 11.8857 | 18 | 6 | 17.2071 |
| 0.9451 | 10.0 | 80 | 1.4402 | 0.5871 | 0.3604 | 0.5368 | 0.5361 | 0.8889 | 0.9013 | 11.6571 | 18 | 6 | 16.9929 |
| 0.9223 | 11.0 | 88 | 1.4334 | 0.5888 | 0.3602 | 0.5378 | 0.5369 | 0.8883 | 0.9017 | 11.8071 | 18 | 6 | 17.1643 |
| 0.893 | 12.0 | 96 | 1.4295 | 0.587 | 0.3589 | 0.5367 | 0.5356 | 0.8878 | 0.9008 | 11.8 | 18 | 6 | 17.1214 |
| 0.8768 | 13.0 | 104 | 1.4182 | 0.5887 | 0.3598 | 0.5395 | 0.5388 | 0.8887 | 0.9021 | 11.8571 | 17 | 6 | 17.2429 |
| 0.8598 | 14.0 | 112 | 1.4076 | 0.5937 | 0.3647 | 0.5476 | 0.5466 | 0.8909 | 0.9021 | 11.6214 | 16 | 6 | 16.9429 |
| 0.8555 | 15.0 | 120 | 1.4080 | 0.5948 | 0.3668 | 0.5481 | 0.5473 | 0.89 | 0.9018 | 11.6786 | 16 | 6 | 17.0429 |
| 0.8505 | 16.0 | 128 | 1.4067 | 0.5984 | 0.3705 | 0.5517 | 0.5507 | 0.8908 | 0.9031 | 11.7214 | 17 | 6 | 17.0714 |
| 0.8545 | 17.0 | 136 | 1.3995 | 0.5946 | 0.3669 | 0.5479 | 0.547 | 0.8924 | 0.9028 | 11.55 | 15 | 6 | 16.9071 |
| 0.8025 | 18.0 | 144 | 1.3953 | 0.5935 | 0.3637 | 0.547 | 0.5461 | 0.8924 | 0.9022 | 11.5571 | 15 | 6 | 16.8929 |
| 0.7915 | 19.0 | 152 | 1.3975 | 0.5963 | 0.3702 | 0.5485 | 0.5476 | 0.8899 | 0.9025 | 11.7714 | 17 | 6 | 17.1929 |
| 0.8017 | 20.0 | 160 | 1.3957 | 0.5915 | 0.3633 | 0.5439 | 0.542 | 0.8897 | 0.902 | 11.7143 | 17 | 6 | 17.1643 |
| 0.8133 | 21.0 | 168 | 1.3926 | 0.5932 | 0.3632 | 0.5438 | 0.5425 | 0.8916 | 0.9022 | 11.5714 | 16 | 6 | 16.9786 |
| 0.7858 | 22.0 | 176 | 1.3942 | 0.5941 | 0.3658 | 0.5453 | 0.544 | 0.8915 | 0.9022 | 11.5714 | 16 | 6 | 16.9857 |
| 0.7712 | 23.0 | 184 | 1.3929 | 0.6015 | 0.3698 | 0.5506 | 0.5498 | 0.8916 | 0.9044 | 11.7714 | 16 | 6 | 17.1786 |
| 0.7786 | 24.0 | 192 | 1.3900 | 0.5985 | 0.3662 | 0.549 | 0.5482 | 0.8926 | 0.903 | 11.5286 | 16 | 6 | 16.8857 |
| 0.7707 | 25.0 | 200 | 1.3888 | 0.6011 | 0.3708 | 0.5508 | 0.5495 | 0.8947 | 0.9037 | 11.3786 | 15 | 6 | 16.7286 |
| 0.7661 | 26.0 | 208 | 1.3888 | 0.6001 | 0.3704 | 0.5512 | 0.55 | 0.8943 | 0.9033 | 11.4429 | 15 | 6 | 16.8 |
| 0.7489 | 27.0 | 216 | 1.3892 | 0.5953 | 0.3673 | 0.5467 | 0.5462 | 0.8927 | 0.9017 | 11.4429 | 15 | 6 | 16.7929 |
| 0.7433 | 28.0 | 224 | 1.3910 | 0.5925 | 0.3661 | 0.5449 | 0.5449 | 0.8927 | 0.9023 | 11.4714 | 15 | 6 | 16.9 |
| 0.7295 | 29.0 | 232 | 1.3886 | 0.5934 | 0.3656 | 0.5458 | 0.5451 | 0.893 | 0.9019 | 11.4929 | 15 | 6 | 16.8429 |
| 0.7446 | 30.0 | 240 | 1.3874 | 0.5947 | 0.3643 | 0.5474 | 0.5471 | 0.893 | 0.9017 | 11.4929 | 15 | 6 | 16.7786 |
| 0.7318 | 31.0 | 248 | 1.3848 | 0.5998 | 0.3708 | 0.5518 | 0.5517 | 0.8946 | 0.9029 | 11.5 | 15 | 6 | 16.7714 |
| 0.7279 | 32.0 | 256 | 1.3851 | 0.6003 | 0.3703 | 0.5522 | 0.5522 | 0.8948 | 0.9035 | 11.5214 | 15 | 6 | 16.7929 |
| 0.725 | 33.0 | 264 | 1.3879 | 0.5979 | 0.3677 | 0.5487 | 0.5476 | 0.8956 | 0.9046 | 11.4643 | 15 | 6 | 16.7214 |
| 0.7229 | 34.0 | 272 | 1.3907 | 0.5959 | 0.3677 | 0.5463 | 0.5457 | 0.8948 | 0.904 | 11.5286 | 15 | 6 | 16.8143 |
| 0.7228 | 35.0 | 280 | 1.3916 | 0.5983 | 0.3696 | 0.5499 | 0.5491 | 0.8947 | 0.9047 | 11.5857 | 15 | 6 | 16.8714 |
| 0.7006 | 36.0 | 288 | 1.3913 | 0.5962 | 0.3681 | 0.5461 | 0.5454 | 0.8938 | 0.9036 | 11.5571 | 15 | 6 | 16.8286 |
| 0.6935 | 37.0 | 296 | 1.3891 | 0.5976 | 0.3707 | 0.55 | 0.5496 | 0.895 | 0.9042 | 11.3786 | 15 | 6 | 16.6857 |
| 0.7011 | 38.0 | 304 | 1.3894 | 0.602 | 0.3727 | 0.5546 | 0.554 | 0.8965 | 0.9059 | 11.4429 | 16 | 6 | 16.6929 |
| 0.7188 | 39.0 | 312 | 1.3903 | 0.6031 | 0.373 | 0.5556 | 0.5548 | 0.896 | 0.9061 | 11.5357 | 16 | 6 | 16.7929 |
| 0.7013 | 40.0 | 320 | 1.3927 | 0.6055 | 0.3763 | 0.5573 | 0.5564 | 0.8952 | 0.906 | 11.5929 | 16 | 6 | 16.8929 |
| 0.6857 | 41.0 | 328 | 1.3932 | 0.5991 | 0.3729 | 0.5509 | 0.5514 | 0.894 | 0.9054 | 11.5357 | 16 | 6 | 16.8857 |
| 0.7063 | 42.0 | 336 | 1.3933 | 0.5995 | 0.3739 | 0.5514 | 0.5513 | 0.8943 | 0.9056 | 11.5571 | 16 | 6 | 16.8571 |
| 0.7022 | 43.0 | 344 | 1.3935 | 0.5974 | 0.3714 | 0.55 | 0.5503 | 0.894 | 0.9052 | 11.55 | 16 | 6 | 16.8714 |
| 0.6975 | 44.0 | 352 | 1.3937 | 0.6008 | 0.369 | 0.5519 | 0.5516 | 0.8949 | 0.905 | 11.5286 | 16 | 6 | 16.8071 |
| 0.687 | 45.0 | 360 | 1.3937 | 0.6024 | 0.3705 | 0.5536 | 0.5534 | 0.8955 | 0.9053 | 11.4929 | 16 | 6 | 16.7786 |
| 0.7044 | 46.0 | 368 | 1.3944 | 0.6024 | 0.3718 | 0.5545 | 0.5543 | 0.8957 | 0.9054 | 11.4643 | 16 | 6 | 16.7714 |
| 0.695 | 47.0 | 376 | 1.3947 | 0.6037 | 0.3746 | 0.5558 | 0.5556 | 0.896 | 0.9059 | 11.45 | 16 | 6 | 16.7857 |
| 0.7019 | 48.0 | 384 | 1.3949 | 0.6047 | 0.3756 | 0.5575 | 0.5572 | 0.896 | 0.9058 | 11.4357 | 16 | 6 | 16.7643 |
| 0.6895 | 49.0 | 392 | 1.3950 | 0.6032 | 0.3745 | 0.5559 | 0.556 | 0.8961 | 0.9059 | 11.4071 | 16 | 6 | 16.7643 |
| 0.6914 | 50.0 | 400 | 1.3950 | 0.6032 | 0.3745 | 0.5559 | 0.556 | 0.8961 | 0.9059 | 11.4071 | 16 | 6 | 16.7643 |
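
The card does not document how the table's metric columns were produced. A plausible sketch using the `evaluate` library for the ROUGE and BERT precision/recall columns follows; treating `evaluate` as the metric backend is an assumption, not something this card confirms.

```python
import evaluate

# Hedged sketch: how the ROUGE and BERT precision/recall columns could be
# computed. The actual evaluation code is not documented in this card.
rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

predictions = ["shortened text produced by the model"]   # placeholder examples
references = ["reference shortened text"]

rouge_scores = rouge.compute(predictions=predictions, references=references)
bert = bertscore.compute(predictions=predictions, references=references, lang="en")

print(rouge_scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
print(sum(bert["precision"]) / len(bert["precision"]),
      sum(bert["recall"]) / len(bert["recall"]))
```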

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.4
  • Tokenizers 0.13.3
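
A quick way to check a local environment against these pins; the expected values in the comments are taken from the list above.

```python
# Print installed versions; expected values come from the list above.
import transformers, torch, datasets, tokenizers

print("Transformers", transformers.__version__)  # expected 4.32.1
print("PyTorch", torch.__version__)              # expected 2.0.1+cu118
print("Datasets", datasets.__version__)          # expected 2.14.4
print("Tokenizers", tokenizers.__version__)      # expected 0.13.3
```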

Model tree for ldos/text_shortening_model_v5

  • Base model: google-t5/t5-small