---
license: apache-2.0
base_model: t5-small
tags:
  - generated_from_trainer
datasets:
  - tatoeba
metrics:
  - bleu
model-index:
  - name: MLEAFIT_tralate_spanish_portuguese
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: tatoeba
          type: tatoeba
          config: es-pt
          split: train
          args: es-pt
        metrics:
          - name: Bleu
            type: bleu
            value: 11.2994
---

# MLEAFIT_tralate_spanish_portuguese

This model is a fine-tuned version of t5-small on the tatoeba dataset. It achieves the following results on the evaluation set:

- Loss: 1.7472
- Bleu: 11.2994
- Gen Len: 15.8838
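
The snippet below is a minimal inference sketch, assuming the checkpoint is published on the Hub under `SebastianAmayaCeballos/MLEAFIT_tralate_spanish_portuguese` (the repository id is an assumption) and that it follows the usual T5 convention of a natural-language task prefix:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical Hub repository id; replace with the actual path of this model.
model_id = "SebastianAmayaCeballos/MLEAFIT_tralate_spanish_portuguese"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 checkpoints are typically prompted with a task prefix; the exact prefix
# used during fine-tuning is not stated in this card, so this one is assumed.
text = "translate Spanish to Portuguese: ¿Dónde está la estación de tren?"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```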

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
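
As a starting point, here is a sketch of loading the tatoeba es-pt pairs referenced in the metadata above; the `lang1`/`lang2` arguments and the `translation` field layout follow the Hub dataset script and are assumptions about how the data was actually prepared:

```python
from datasets import load_dataset

# Spanish-Portuguese sentence pairs from Tatoeba; only a "train" split is
# provided, so any validation split would have to be carved out manually.
raw = load_dataset("tatoeba", lang1="es", lang2="pt")
print(raw["train"][0])
# e.g. {'id': 0, 'translation': {'es': '...', 'pt': '...'}}
```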

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
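
For reference, here is a sketch of how these values might map onto `Seq2SeqTrainingArguments`; the `output_dir`, evaluation strategy, and generation settings are assumptions, and the Adam betas/epsilon listed above are the optimizer defaults:

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the hyperparameters above onto the Trainer API.
training_args = Seq2SeqTrainingArguments(
    output_dir="MLEAFIT_tralate_spanish_portuguese",  # assumed output directory
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",   # assumed; matches the per-epoch results below
    predict_with_generate=True,    # needed so BLEU can be computed at eval time
)
```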

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 2.6856        | 1.0   | 858  | 1.9674          | 8.9672  | 15.7279 |
| 2.1422        | 2.0   | 1716 | 1.7900          | 10.7687 | 15.8897 |
| 2.0298        | 3.0   | 2574 | 1.7472          | 11.2994 | 15.8838 |

### Framework versions

- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3