
results_arat5-2_wiki

This model is a fine-tuned version of UBC-NLP/AraT5v2-base-1024 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 5.6421
  • Rouge1: 0.0905
  • Rouge2: 0.0
  • RougeL: 0.0915
  • RougeLsum: 0.0912
  • Gen Len: 19.0
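
For context on the metric names above, here is a minimal sketch of how ROUGE scores like these can be computed with the `evaluate` library. The prediction and reference strings are placeholders, not data from this model's evaluation set:

```python
# Hedged sketch: computing ROUGE with Hugging Face's `evaluate` library.
# The strings below are illustrative placeholders, not the actual eval data.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["generated summary text"],
    references=["reference summary text"],
)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```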

Model description

More information needed

Intended uses & limitations

More information needed
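
In the absence of documented usage, a minimal generation sketch for loading this checkpoint as a standard T5-style seq2seq model is shown below; the input text, truncation length, and generation settings are illustrative assumptions, not documented behavior:

```python
# Hedged sketch: loading the checkpoint for text generation.
# Assumes the standard seq2seq (T5-style) interface; prompt and settings are illustrative.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "hiba2/results_arat5-2_wiki"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # Arabic input text goes here
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Given the evaluation scores above (Rouge2 of 0.0), outputs from this checkpoint should be treated as experimental.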

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
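
As a sketch, the hyperparameters above map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows; `output_dir`, the evaluation cadence, and `predict_with_generate` are assumptions inferred from the results table below, not stated in the card:

```python
# Hedged sketch: the hyperparameters above expressed as Seq2SeqTrainingArguments.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer,
# so it needs no extra flag here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="results_arat5-2_wiki",  # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="steps",       # assumed: the results table logs eval every 500 steps
    eval_steps=500,              # assumed from the Step column below
    predict_with_generate=True,  # assumed: ROUGE and Gen Len imply generation during eval
)
```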

Training results

Training Loss   Epoch    Step   Validation Loss   Rouge1   Rouge2   RougeL   RougeLsum   Gen Len
8.3962          0.9506    500   7.0927            0.0      0.0      0.0      0.0          0.0
7.072           1.9011   1000   7.0704            0.0      0.0      0.0      0.0          0.0
7.0441          2.8517   1500   7.0627            0.0      0.0      0.0      0.0          0.0
7.0044          3.8023   2000   7.0205            0.0      0.0      0.0      0.0         16.9719
6.9461          4.7529   2500   6.8398            0.0896   0.0      0.0908   0.0904      17.7903
6.727           5.7034   3000   6.5676            0.0905   0.0      0.0915   0.0912      18.8221
6.446           6.6540   3500   6.3711            0.0905   0.0      0.0915   0.0912      18.8221
6.3054          7.6046   4000   5.9586            0.0905   0.0      0.0915   0.0912      18.8933
5.8985          8.5551   4500   5.7386            0.0905   0.0      0.0915   0.0912      19.0
5.8333          9.5057   5000   5.6421            0.0905   0.0      0.0915   0.0912      19.0
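
To make the trajectory easier to read, here is a short sketch that re-plots the Validation Loss column from the table above:

```python
# Sketch: re-plotting the validation-loss column from the results table above.
import matplotlib.pyplot as plt

epochs = [0.9506, 1.9011, 2.8517, 3.8023, 4.7529, 5.7034, 6.6540, 7.6046, 8.5551, 9.5057]
val_loss = [7.0927, 7.0704, 7.0627, 7.0205, 6.8398, 6.5676, 6.3711, 5.9586, 5.7386, 5.6421]

plt.plot(epochs, val_loss, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Validation loss")
plt.title("results_arat5-2_wiki: validation loss per epoch")
plt.show()
```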

Framework versions

  • Transformers 4.42.0.dev0
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1