---
license: apache-2.0
language:
- ar
tags:
- aranizer
- arabic tokenizer
- SP
---

# Aranizer | Arabic Tokenizer

**Aranizer** is an Arabic SentencePiece-based tokenizer designed for efficient and versatile tokenization. 

## Features

- **Tokenizer Name**: Aranizer
- **Type**: SentencePiece tokenizer
- **Vocabulary Size**: 86,000
- **Total Number of Tokens**: 1,233,628
- **Fertility Score**: 1.589 (average tokens per word; see the sketch below)
- **Diacritization**: supports Arabic diacritics
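
The fertility score is the average number of tokens the tokenizer emits per word, so values closer to 1 mean fewer splits per word. The snippet below is a minimal sketch of how such a score can be estimated; the sample sentences and the whitespace-based word splitting are illustrative assumptions, not the leaderboard's exact evaluation protocol or corpus.

```python
from transformers import AutoTokenizer

# Sketch: estimate fertility as total tokens divided by total words
# over a small sample corpus (placeholder sentences, not the benchmark data).
tokenizer = AutoTokenizer.from_pretrained("riotu-lab/Aranizer-SP-86k")

sample_texts = [
    "اللغة العربية لغة غنية بالمفردات",
    "الذكاء الاصطناعي يتطور بسرعة كبيرة",
]

total_tokens = sum(len(tokenizer.tokenize(text)) for text in sample_texts)
total_words = sum(len(text.split()) for text in sample_texts)

print("Estimated fertility:", total_tokens / total_words)
```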

## The Aranizer Collection: State-of-the-Art Arabic Tokenizers

The Aranizer tokenizer has achieved state-of-the-art results on the [Arabic Tokenizers Leaderboard](https://huggingface.co/spaces/MohamedRashad/arabic-tokenizers-leaderboard) on Hugging Face. Below is a screenshot highlighting this achievement:

<img src="./lb.png" alt="Screenshot showing the Aranizer Tokenizer achieving state of the art" width="800">



## How to Use the Aranizer Tokenizer

The Aranizer tokenizer can be loaded with the Hugging Face `transformers` library. Below is an example of loading and using it in a Python project:

```python
from transformers import AutoTokenizer

# Load the Aranizer tokenizer
tokenizer = AutoTokenizer.from_pretrained("riotu-lab/Aranizer-SP-86k")

# Example usage
text = "اكتب النص العربي"  # example Arabic input ("write the Arabic text")
tokens = tokenizer.tokenize(text)
token_ids = tokenizer.convert_tokens_to_ids(tokens)

print("Tokens:", tokens)
print("Token IDs:", token_ids)
```
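
If you need model-ready input IDs rather than raw tokens, the standard `transformers` encode/decode calls work as usual. This is a generic usage sketch; only the repository name above comes from this card.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("riotu-lab/Aranizer-SP-86k")

# Calling the tokenizer directly returns input IDs ready for a model
encoded = tokenizer("اكتب النص العربي")
print("Input IDs:", encoded["input_ids"])

# decode() reverses the mapping, dropping any special tokens
decoded = tokenizer.decode(encoded["input_ids"], skip_special_tokens=True)
print("Decoded:", decoded)
```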

## Citation

```bibtex
@article{koubaa2024arabiangpt,
  title={ArabianGPT: Native Arabic GPT-based Large Language Model},
  author={Koubaa, Anis and Ammar, Adel and Ghouti, Lahouari and Necar, Omer and Sibaee, Serry},
  year={2024},
  publisher={Preprints}
}
```