---
license: cc-by-nc-4.0
tags:
  - generated_from_trainer
datasets:
  - squad_modified_for_t5_qg_2
model-index:
  - name: greek-nllb-4ep-512
    results: []
---

# greek-nllb-4ep-512

This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the squad_modified_for_t5_qg_2 dataset. It achieves the following results on the evaluation set:

- Loss: 1.2852
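
Since the base model is NLLB-200, loading this checkpoint for generation with 🤗 Transformers might look like the sketch below. The repository id is a placeholder, and using the Greek language code `ell_Grek` as both source language and forced BOS token is an assumption about how the model was fine-tuned, not documented behaviour.

```python
# Minimal inference sketch. The repo id is hypothetical, and generating from a
# Greek passage with forced_bos_token_id="ell_Grek" is an assumption.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "your-username/greek-nllb-4ep-512"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="ell_Grek")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

context = "Η Αθήνα είναι η πρωτεύουσα της Ελλάδας."  # example Greek passage
inputs = tokenizer(context, return_tensors="pt", truncation=True, max_length=512)

# NLLB-style models pick the output language via forced_bos_token_id.
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("ell_Grek"),
    max_length=64,
    num_beams=4,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```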

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
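
A rough `Seq2SeqTrainingArguments` configuration mirroring the values above could look like the following sketch. Only the listed hyperparameters come from this card; the output directory and the evaluation/logging cadence (every 100 steps, as suggested by the results table below) are illustrative assumptions.

```python
# Sketch of training arguments matching the hyperparameters above.
# output_dir, evaluation_strategy, eval_steps and logging_steps are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="greek-nllb-4ep-512",   # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,    # effective train batch size: 2 * 16 = 32
    num_train_epochs=4,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="steps",       # assumed from the 100-step eval cadence
    eval_steps=100,
    logging_steps=100,
)
```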

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2208        | 0.17  | 100  | 1.5566          |
| 1.7198        | 0.34  | 200  | 1.4767          |
| 1.5435        | 0.51  | 300  | 1.4467          |
| 1.5017        | 0.67  | 400  | 1.3955          |
| 1.4569        | 0.84  | 500  | 1.3711          |
| 1.4496        | 1.01  | 600  | 1.3556          |
| 1.3174        | 1.18  | 700  | 1.3488          |
| 1.3025        | 1.35  | 800  | 1.3398          |
| 1.3013        | 1.52  | 900  | 1.3286          |
| 1.2938        | 1.69  | 1000 | 1.3161          |
| 1.2984        | 1.86  | 1100 | 1.3057          |
| 1.2494        | 2.03  | 1200 | 1.3097          |
| 1.1843        | 2.2   | 1300 | 1.2985          |
| 1.1868        | 2.36  | 1400 | 1.3021          |
| 1.1875        | 2.53  | 1500 | 1.2958          |
| 1.1854        | 2.7   | 1600 | 1.2987          |
| 1.1818        | 2.87  | 1700 | 1.2885          |
| 1.1696        | 3.04  | 1800 | 1.2912          |
| 1.1108        | 3.21  | 1900 | 1.2880          |
| 1.1301        | 3.38  | 2000 | 1.2889          |
| 1.1139        | 3.55  | 2100 | 1.2869          |
| 1.1098        | 3.72  | 2200 | 1.2879          |
| 1.1073        | 3.88  | 2300 | 1.2852          |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2