---
language:
  - ko
  - en
base_model: ./reduced_model
tags:
  - generated_from_trainer
metrics:
  - bleu
model-index:
  - name: mbart_cycle0_ko-en
    results: []
---

# mbart_cycle0_ko-en

This model is a fine-tuned version of a reduced [mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on a custom Korean-English dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the list):

- Loss: 8.0362
- Bleu: 3.9193
- Gen Len: 19.5758
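
A minimal usage sketch, assuming the checkpoint is published under the Hub id `yesj1234/mbart_cycle0_ko-en` (hypothetical; substitute the actual checkpoint path) and that the reduced model keeps the standard mBART language codes `ko_KR` and `en_XX`:

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_dir = "yesj1234/mbart_cycle0_ko-en"  # hypothetical repo id; use the real checkpoint path

tokenizer = MBartTokenizer.from_pretrained(model_dir, src_lang="ko_KR")
model = MBartForConditionalGeneration.from_pretrained(model_dir)

inputs = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")

# mbart-large-cc25-style models select the output language by forcing the
# target language code as the first decoder token.
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"],
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```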

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 50
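
A sketch of `Seq2SeqTrainingArguments` mirroring the list above. The Adam betas and epsilon shown are the library defaults, so they are not set explicitly; the `output_dir` is hypothetical, and the 500-step eval cadence is inferred from the results table below:

```python
from transformers import Seq2SeqTrainingArguments

# Per-device batch size 4 on 4 GPUs yields the total train/eval batch size of 16.
training_args = Seq2SeqTrainingArguments(
    output_dir="./mbart_cycle0_ko-en",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=300,
    num_train_epochs=50,
    evaluation_strategy="steps",  # assumption: matches the 500-step eval cadence below
    eval_steps=500,
    predict_with_generate=True,   # required so Bleu / Gen Len can be computed at eval time
)
```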

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 8.5105        | 10.0  | 500  | 5.7366          | 1.0483 | 32.2222 |
| 1.3079        | 20.0  | 1000 | 7.1497          | 3.8281 | 17.3838 |
| 0.179         | 30.0  | 1500 | 7.7171          | 4.1437 | 18.6869 |
| 0.0535        | 40.0  | 2000 | 7.9881          | 4.1251 | 18.5455 |
| 0.0203        | 50.0  | 2500 | 8.0362          | 3.9193 | 19.5758 |
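
The Bleu and Gen Len columns are typically produced by a `compute_metrics` callback like the sketch below. This mirrors the standard Transformers translation example, not a confirmed training script, and it reuses the `tokenizer` loaded in the usage sketch above:

```python
import numpy as np
import evaluate

bleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Labels use -100 for padding; restore the pad id before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = bleu.compute(
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],
    )
    # Gen Len: mean number of non-pad tokens in the generated sequences.
    gen_len = np.mean([np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds])
    return {"bleu": result["score"], "gen_len": round(float(gen_len), 4)}
```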

### Framework versions

- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3