# ArabicT5: Efficient Adaptation of T5 on Arabic Language


# Model Description

This model adapts T5 to Arabic by pre-training T5 on:
- Arabic Wikipedia
- the Marefa encyclopedia
- Hindawi Books
- a collection of Arabic news articles

The total corpus size is 17GB. The model uses an efficient T5 implementation that reduces fine-tuning time and memory usage ([Link](https://arxiv.org/abs/2109.10686)), and it was pre-trained with T5x ([Link](https://github.com/google-research/t5x)).
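
The released checkpoints follow the standard T5 encoder-decoder design, so they should load with the usual `transformers` seq2seq classes. The snippet below is a minimal, unofficial sketch: the repository ID is one of the released variants and the input string is a placeholder, so swap in the checkpoint and task you actually need.

```python
# Minimal sketch (not an official example): load an ArabicT5 checkpoint with
# Hugging Face transformers. Replace the repo ID with the variant you want.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "sultan/ArabicT5-49GB-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# ArabicT5 is a Seq2Seq (text-to-text) model: encode an input string and
# generate an output string, whatever the downstream task is.
inputs = tokenizer("نص عربي للتجربة", return_tensors="pt")  # placeholder Arabic input
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```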


## Pre-training Settings and Results on TyDi QA Development Dataset (the model in this card is highlighted in bold)

|     Model        | Hidden Size | Atten. Heads | Atten. Layers | Vocab | Hardware  |Training Steps | Batch Size |  Train x Batch Factor |Corpora                 |
|------------------|--------------|-------------|---------------|-------|-----------|---------------|--------|-----------------------|------------------------|
| AraT5-base       |     768      |      12     |      12       |  110K |TPUv3-8    |        1M     |  128   | 1.0x                  |248GB 29B tokens (MSA + Tweets)    |
| AraT5-msa-base   |     768      |      12     |      12       |  110K |TPUv3-8    |        1M     |  128   | 1.0x                  |70GB (MSA)              |
| AraT5-tweets-base|     768      |      12     |      12       |  110K |TPUv3-8    |        1M     |  128   | 1.0x                  |178GB (Tweets)          |
| AraBART-base     |     768      |      12     |      12       |  50K | 128 V100 GPUs (60h)    |25 epochs|  -     | -                     |73GB (MSA)          |
| mT5-base         |     768      |      12     |      12       |  250K |TPUv3-32   |        1M     |  1024  | 8.0x                  |6.3T tokens (mC4)|
| ArabicT5-17GB-small   |     512      |      8     |      20      |  32K  |TPUv3-32   |       256K    |  256   | 0.5x                 |17GB (MSA)          |
| ArabicT5-49GB-small   |     512      |      8     |      16      |  32K  |TPUv3-64   |       500K    |  256   | 1.0x                 |49GB (MSA + OSCAR)          |
| ArabicT5-17GB-base    |     768      |      12     |      16       |  32K  |TPUv3-128  |       500K    |  512   | 2.0x                  |17GB (MSA)          |
| ArabicT5-49GB-base    |     768      |      12     |      16       |  32K  |TPUv3-64  |       500K    |  256   | 1.0x                  |49GB (MSA + OSCAR)          |
| ArabicT5-17GB-large  |     768      |      12     |      36       |  32K  |TPUv3-128  |       500K    |  512   | 2.0x                  |17GB (MSA)          |


## Results on TyDi QA, HARD, Sentiment Analysis, Sarcasm Detection (best score is highlighted in bold)

|  Model Type  |         Model          | <center>TyDi QA| <center>HARD| <center>ArSarcasm-v2-Sentiment| <center>ArSarcasm-v2-Sarcasm| XL-SUM |
|--------------|------------------------|---------------------|----------------|-----------------|------------|------------|
|  Generative  |       AraT5-base       |  <center>70.4/84.2  |<center>96.5|<center>69.7/72.6|<center>60.4|<center>30.3|
|  Generative  |     AraT5-msa-base     |  <center>70.9/84.0  |<center>96.5|<center>70.0/72.7|<center>60.7|<center>27.4|
|  Generative  |   AraT5-tweets-base    |  <center>65.1/79.0  |<center>96.3|<center>70.7/73.5|<center>61.1|<center>25.1|
|  Generative  |       mT5-base         |  <center>72.2/84.1  |<center>96.2|<center>67.3/68.8|<center>52.2|<center>25.7|
|  Generative  |    AraBART-base        |  <center>48.8/71.2  |<center>96.1|<center>66.2/68.2|<center>56.3|<center>31.2|
|  Generative  |   ArabicT5-17GB-small  |  <center>70.8/84.8  |<center>96.4|<center>68.9/71.2|<center>58.9|<center>29.2|
|  Generative  |   ArabicT5-49GB-small  |  <center>72.4/85.1  |<center>96.4|<center>70.2/73.4|<center>61.0|<center>30.2|
|  Generative  |   ArabicT5-17GB-base   |  <center>73.3/86.1  |<center>96.4|<center>70.4/73.0|<center>59.8|<center>30.3|
|  Generative  |   ArabicT5-49GB-base   |  <center>72.1/85.1  |<center>96.5|<center>71.3/74.1|<center>60.4|<center>30.9|
|  Generative  |   ArabicT5-17GB-large  |  <center>75.5/87.1  |<center>96.5| <center>72.2/75.2|<center>61.7|<center>31.7|
|  Extractive |    AraBERTv02-Large    |  <center>73.7/86.0  |<center>96.4|<center>69.5/71.8|<center>-|<center> N/A|
|  Extractive |    AraBERTv2-Large    |  <center>64.5/82.2 |<center>96.5|<center>70.0/72.4|<center>-|<center> N/A|
|  Extractive |    AraELECTRA-base     |  <center>74.9/86.7  |<center>96.4|<center>69.6/72.3|<center>-|<center>N/A|
|  Extractive | ArabicTransformer-base |  <center>75.4/87.2  |<center>96.6|<center>70.8/74.0|<center>-|<center> N/A|

Evaluation Metrics: TyDi QA (EM/F1), HARD (Accuracy), Sentiment Analysis (Accuracy / F1-PN positive-negative), Sarcasm Detection (F1-sarcastic), XL-SUM (Rouge-L with Stemmer).

You can download the full details of our grid search for all models in all tasks above from this link: https://github.com/salrowili/ArabicT5/raw/main/ArabicT5_Grid_Search.zip

For the XL-Sum task, we chose the best run for each model based on the eval set. We use the official XL-Sum evaluation script, which enables the stemmer; this may yield higher scores than papers that evaluate without stemming. The official XL-Sum paper also evaluates with the stemmer.
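
As a concrete illustration of that setting (not the official script itself), the snippet below shows the `use_stemmer` flag being enabled when computing ROUGE-L with the `rouge_score` package; to reproduce the exact numbers above, use the official multilingual ROUGE script from the XL-Sum repository.

```python
# Illustration only: ROUGE-L computed with stemming enabled, mirroring the
# setting of the official XL-Sum evaluation script.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
score = scorer.score(
    target="reference summary text",       # gold summary
    prediction="model generated summary",   # system output
)
print(score["rougeL"].fmeasure)
```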

Reported numbers for the extractive models are taken from the ArabicTransformer paper: https://aclanthology.org/2021.findings-emnlp.108/

# Fine-tuning our efficient ArabicT5-49GB-small model with PyTorch on a 3070 laptop GPU

[![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/ArabicT5/blob/main/ArabicT5_49GB_Small_on_3070_Laptop_GPU.ipynb)

If you are running your code on a laptop GPU (e.g., a gaming laptop) or with limited GPU memory, we recommend our ArabicT5-49GB-small model, which was the only model from the list above that we were able to run on a 3070 laptop card with a batch size of 8. We managed to achieve an F1 score of 85.391 on the TyDi QA task (slightly better than our FLAX code).
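
The notebook linked above contains the exact setup. The sketch below only illustrates, with an assumed repository name, assumed hyperparameters, and a toy dataset, what a memory-friendly `transformers` fine-tuning run (batch size 8, mixed precision, gradient accumulation) can look like on such a card.

```python
# Hedged sketch of fine-tuning ArabicT5-49GB-small on a memory-limited GPU.
# This is NOT the linked notebook's exact code: the repo ID, hyperparameters,
# and the toy dataset below are illustrative assumptions.
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
)

model_id = "sultan/ArabicT5-49GB-small"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Toy in-memory example; in practice use the tokenized TyDi QA (GoldP) splits.
raw = Dataset.from_dict({
    "source": ["question: من كتب الرواية؟ context: كتب الرواية نجيب محفوظ."],
    "target": ["نجيب محفوظ"],
})

def tokenize(batch):
    enc = tokenizer(batch["source"], truncation=True, max_length=384)
    enc["labels"] = tokenizer(batch["target"], truncation=True, max_length=32)["input_ids"]
    return enc

train_ds = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="arabict5-small-tydiqa",
    per_device_train_batch_size=8,   # the batch size reported above for the 3070
    gradient_accumulation_steps=4,   # emulate a larger effective batch
    learning_rate=3e-4,
    num_train_epochs=3,
    fp16=True,                       # mixed precision to fit in laptop GPU memory
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```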


# Fine-tuning our ArabicT5 model on generative and abstractive tasks with FLAX

[![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/ArabicT5/blob/main/FineTuning_ArabicT5_with_FLAX_and_TPU.ipynb)

[COLAB]: https://colab.research.google.com/assets/colab-badge.svg


# Fine-tuning ArabicT5 on TPUv3-8 with a free Kaggle TPU


https://www.kaggle.com/code/sultanalrowili/arabict5-on-tydi-with-free-tpuv3-8-with-kaggle



# Continual Pre-Training of ArabicT5 with T5x
If you want to continue pre-training ArabicT5 on your own data, we have uploaded the raw T5x checkpoint here: https://huggingface.co/sultan/ArabicT5-49GB-base/blob/main/arabict5_49GB_base_t5x.tar.gz

We will soon share a tutorial on how you can do that for free with a Kaggle TPU.
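
Until then, a small unofficial sketch of fetching and unpacking that archive with `huggingface_hub` (the extraction directory name is just an example):

```python
# Unofficial helper sketch: download and unpack the raw T5x checkpoint so it
# can serve as the starting point of a continued t5x pre-training run.
import tarfile
from huggingface_hub import hf_hub_download

archive = hf_hub_download(
    repo_id="sultan/ArabicT5-49GB-base",
    filename="arabict5_49GB_base_t5x.tar.gz",
)
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall("arabict5_49GB_base_t5x")
# Point your t5x config's initial checkpoint path at the extracted directory.
```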


## GitHub Page

https://github.com/salrowili/ArabicT5


# Acknowledgment

We would like to thank the TPU Research Cloud (TRC) team for granting us access to TPUv3 units.


# Paper

[Generative Approach for Gender-Rewriting Task with ArabicT5](https://aclanthology.org/2022.wanlp-1.55/)

# Citation

```bibtex
@inproceedings{alrowili-shanker-2022-generative,
    title = "Generative Approach for Gender-Rewriting Task with {A}rabic{T}5",
    author = "Alrowili, Sultan  and
      Shanker, Vijay",
    booktitle = "Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.55",
    pages = "491--495",
    abstract = "Addressing the correct gender in generative tasks (e.g., Machine Translation) has been an overlooked issue in the Arabic NLP. However, the recent introduction of the Arabic Parallel Gender Corpus (APGC) dataset has established new baselines for the Arabic Gender Rewriting task. To address the Gender Rewriting task, we first pre-train our new Seq2Seq ArabicT5 model on a 17GB of Arabic Corpora. Then, we continue pre-training our ArabicT5 model on the APGC dataset using a newly proposed method. Our evaluation shows that our ArabicT5 model, when trained on the APGC dataset, achieved competitive results against existing state-of-the-art methods. In addition, our ArabicT5 model shows better results on the APGC dataset compared to other Arabic and multilingual T5 models.",
}
```