# t5-small-ft-tr
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 3.3199
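Since the task and dataset are not documented, the snippet below is only a minimal sketch of loading this checkpoint for text2text generation with `transformers`; the input text and generation settings are illustrative assumptions, not part of this card.

```python
# Minimal sketch: load the fine-tuned checkpoint for seq2seq generation.
# The input string is a placeholder; the model's actual task/prompt format
# is undocumented.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aparajitha/t5-small-ft-tr")
model = AutoModelForSeq2SeqLM.from_pretrained("aparajitha/t5-small-ft-tr")

inputs = tokenizer("your input text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```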
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
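As a rough guide, these settings map onto Hugging Face `TrainingArguments` as sketched below. The output directory is an assumed name, and the surrounding model/dataset wiring is omitted because the actual training script is not provided.

```python
# Hypothetical reconstruction of the listed hyperparameters; only the
# values shown above are taken from the card, everything else is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="t5-small-ft-tr",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```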
### Training results
| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.2085        | 1.0   | 1393  | 3.8282          |
| 3.9976        | 2.0   | 2786  | 3.7173          |
| 3.892         | 3.0   | 4179  | 3.6482          |
| 3.8259        | 4.0   | 5572  | 3.5999          |
| 3.7951        | 5.0   | 6965  | 3.5604          |
| 3.7541        | 6.0   | 8358  | 3.5299          |
| 3.7037        | 7.0   | 9751  | 3.5021          |
| 3.6603        | 8.0   | 11144 | 3.4786          |
| 3.6617        | 9.0   | 12537 | 3.4590          |
| 3.6191        | 10.0  | 13930 | 3.4417          |
| 3.6184        | 11.0  | 15323 | 3.4273          |
| 3.592         | 12.0  | 16716 | 3.4128          |
| 3.5782        | 13.0  | 18109 | 3.4017          |
| 3.5506        | 14.0  | 19502 | 3.3899          |
| 3.5519        | 15.0  | 20895 | 3.3798          |
| 3.5262        | 16.0  | 22288 | 3.3715          |
| 3.5087        | 17.0  | 23681 | 3.3651          |
| 3.509         | 18.0  | 25074 | 3.3558          |
| 3.5004        | 19.0  | 26467 | 3.3503          |
| 3.4775        | 20.0  | 27860 | 3.3449          |
| 3.4817        | 21.0  | 29253 | 3.3405          |
| 3.484         | 22.0  | 30646 | 3.3352          |
| 3.473         | 23.0  | 32039 | 3.3313          |
| 3.4625        | 24.0  | 33432 | 3.3292          |
| 3.4513        | 25.0  | 34825 | 3.3267          |
| 3.4564        | 26.0  | 36218 | 3.3239          |
| 3.4507        | 27.0  | 37611 | 3.3221          |
| 3.4385        | 28.0  | 39004 | 3.3211          |
| 3.452         | 29.0  | 40397 | 3.3202          |
| 3.4528        | 30.0  | 41790 | 3.3199          |
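If the reported loss is the usual mean per-token cross-entropy in nats (an assumption; the card does not say), the final validation loss corresponds to a perplexity of roughly exp(3.3199) ≈ 27.7:

```python
# Perplexity from cross-entropy loss, assuming the reported value is
# mean per-token cross-entropy in nats (not confirmed by this card).
import math

final_val_loss = 3.3199
print(math.exp(final_val_loss))  # ≈ 27.66
```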
### Framework versions
- Transformers 4.43.4
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.19.1