---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- kde4
model-index:
- name: t5-small-finetuned-en-to-de-accelerate
  results: []
metrics:
- sacrebleu
pipeline_tag: translation
language:
- en
- de
---

# t5-small-finetuned-en-to-de-accelerate translator

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the [kde4](https://huggingface.co/datasets/kde4) dataset. It achieves the following results on the evaluation set:

- SacreBLEU: 41.46
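
For quick inference, the snippet below is a minimal sketch. The repository id `pritam3355/t5-small-finetuned-en-to-de-accelerate` and the example sentence are assumptions based on this card's metadata, not verified details.

```python
from transformers import pipeline

# Assumed Hub id for this checkpoint; adjust if the repository lives elsewhere.
model_id = "pritam3355/t5-small-finetuned-en-to-de-accelerate"

translator = pipeline("translation_en_to_de", model=model_id)
print(translator("Default to expanded threads"))
# -> [{'translation_text': '...'}]
```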

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

Training used a custom loop built with 🤗 Accelerate; a sketch is given after the hyperparameters below.

### Training hyperparameters

The following hyperparameters were used during training:

- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW with lr=5e-5
- lr_scheduler_type: linear
- num_epochs: 3.0
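
The following is a minimal sketch of a 🤗 Accelerate training loop matching the hyperparameters above. The preprocessing details (max_length, train/validation split ratio) are assumptions; the actual training script is not included in this repository.

```python
from accelerate import Accelerator
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, get_scheduler)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Load the kde4 en-de pairs and carve out a validation split
# (the 90/10 ratio is an assumption).
raw = load_dataset("kde4", lang1="en", lang2="de")
splits = raw["train"].train_test_split(train_size=0.9, seed=42)

def preprocess(examples):
    inputs = [pair["en"] for pair in examples["translation"]]
    targets = [pair["de"] for pair in examples["translation"]]
    return tokenizer(inputs, text_target=targets, max_length=128, truncation=True)

tokenized = splits.map(preprocess, batched=True, remove_columns=splits["train"].column_names)
collator = DataCollatorForSeq2Seq(tokenizer, model=model)
train_dataloader = DataLoader(tokenized["train"], shuffle=True, batch_size=64, collate_fn=collator)
eval_dataloader = DataLoader(tokenized["test"], batch_size=64, collate_fn=collator)

optimizer = AdamW(model.parameters(), lr=5e-5)

accelerator = Accelerator()
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)

for epoch in range(num_epochs):
    model.train()
    for batch in train_dataloader:
        loss = model(**batch).loss
        accelerator.backward(loss)
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```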

### Training results

| Training Loss | Epoch | Validation Loss | BLEU score |
|:-------------:|:-----:|:---------------:|:----------:|
| 1.5909        | 1.0   | 1.2351          | 39.82      |
| 1.3603        | 2.0   | 1.1677          | 41.05      |
| 1.3098        | 3.0   | 1.1546          | 41.46      |
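
The BLEU column is computed with SacreBLEU. Below is a sketch of the evaluation step, reusing `model`, `tokenizer`, `eval_dataloader`, and `accelerator` from the training sketch above; the generation `max_length` is an assumption.

```python
import torch
import evaluate

metric = evaluate.load("sacrebleu")

model.eval()
for batch in eval_dataloader:
    with torch.no_grad():
        generated = accelerator.unwrap_model(model).generate(
            batch["input_ids"], attention_mask=batch["attention_mask"], max_length=128
        )
    # Replace the -100 padding used for the loss before decoding the references.
    labels = torch.where(batch["labels"] != -100, batch["labels"], tokenizer.pad_token_id)
    preds = tokenizer.batch_decode(generated, skip_special_tokens=True)
    refs = [[ref] for ref in tokenizer.batch_decode(labels, skip_special_tokens=True)]
    metric.add_batch(predictions=preds, references=refs)

print(round(metric.compute()["score"], 2))  # 41.46 after epoch 3 in this run
```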

Training curves: [Weights & Biases run](https://wandb.ai/tchoud8/t5-finetuned-en-to-fr-accelerate/runs/bnzjma7v/workspace?workspace=user-tchoud8)

### Framework versions

- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3