Mukayese: Turkish NLP Strikes Back

Summarization: mukayese/mt5-base-turkish-summarization

This model is a fine-tuned version of google/mt5-base on the Turkish split of the MLSUM dataset (mlsum/tu).

It achieves the following results on the evaluation set:

  • ROUGE-1: 47.4222
  • ROUGE-2: 34.8624
  • ROUGE-L: 42.2487
  • ROUGE-Lsum: 43.9494

See the Mukayese paper (arXiv:2203.01215) for more details on the model and the dataset.
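
As a quick usage reference, here is a minimal inference sketch built on the standard transformers summarization pipeline; the input string below is a placeholder rather than real MLSUM data.

from transformers import pipeline

# Load the fine-tuned checkpoint through the summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="mukayese/mt5-base-turkish-summarization",
)

# Placeholder Turkish article; replace with a real news text to summarize.
article = "Buraya özetlenecek Türkçe haber metni gelir."

result = summarizer(article, max_length=128, truncation=True)
print(result[0]["summary_text"])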

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 2
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • total_eval_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10.0
  • label_smoothing_factor: 0.1
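
These values can be expressed with the standard Seq2SeqTrainingArguments from transformers; the sketch below mirrors the list above but is only an approximation of the authors' setup (the output_dir name is assumed, and the multi-GPU launch itself would be handled by the launcher, not shown here).

from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the listed hyperparameters; with 8 GPUs,
# 2 samples per device and 4 accumulation steps, the effective train batch
# size is 2 * 4 * 8 = 64, matching total_train_batch_size above.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-base-turkish-summarization",  # assumed output path
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=10.0,
    lr_scheduler_type="linear",
    label_smoothing_factor=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)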

Framework versions

  • Transformers 4.11.3
  • Pytorch 1.8.2+cu111
  • Datasets 1.14.0
  • Tokenizers 0.10.3

Citation

@misc{safaya-etal-2022-mukayese,
    title={Mukayese: Turkish NLP Strikes Back},
    author={Ali Safaya and Emirhan Kurtuluş and Arda Göktoğan and Deniz Yuret},
    year={2022},
    eprint={2203.01215},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
