DictaBERT
Collection of state-of-the-art language models for Hebrew, fine-tuned for various tasks, as detailed in the article: https://arxiv.org/abs/2308.16687
State-of-the-art language model for Hebrew, released here.
This is the fine-tuned model for the prefix segmentation task.
For the bert-base models for other tasks, see here.
Sample usage:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictabert-seg')
model = AutoModel.from_pretrained('dicta-il/dictabert-seg', trust_remote_code=True)

model.eval()

sentence = 'בשנת 1948 השלים אפרים קישון את לימודיו בפיסול מתכת ובתולדות האמנות והחל לפרסם מאמרים הומוריסטיים'
print(model.predict([sentence], tokenizer))
```
Output:

```json
[
  [
    ["[CLS]"],
    ["ב", "שנת"],
    ["1948"],
    ["השלים"],
    ["אפרים"],
    ["קישון"],
    ["את"],
    ["לימודיו"],
    ["ב", "פיסול"],
    ["מתכת"],
    ["וב", "תולדות"],
    ["ה", "אמנות"],
    ["ו", "החל"],
    ["לפרסם"],
    ["מאמרים"],
    ["הומוריסטיים"],
    ["[SEP]"]
  ]
]
```
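As the output above shows, `predict` returns one entry per input sentence; each entry is a list of word groups, and a word carrying a prefix appears as multiple segments within its group. The following sketch shows one way to post-process that structure into a flat list of segmented words, dropping the `[CLS]`/`[SEP]` special tokens; the `+` separator and the `flatten_segments` helper name are illustrative choices, not part of the model's API, and the example data is a hard-coded copy of the output above.

```python
def flatten_segments(sentence_groups):
    """Drop special tokens and join each word's segments with '+'."""
    words = []
    for group in sentence_groups:
        # Special tokens come back as single-segment groups.
        if group in (["[CLS]"], ["[SEP]"]):
            continue
        words.append("+".join(group))
    return words

# Hard-coded copy of the prediction for the sample sentence above,
# i.e. model.predict([sentence], tokenizer)[0].
example = [
    ["[CLS]"], ["ב", "שנת"], ["1948"], ["השלים"], ["אפרים"],
    ["קישון"], ["את"], ["לימודיו"], ["ב", "פיסול"], ["מתכת"],
    ["וב", "תולדות"], ["ה", "אמנות"], ["ו", "החל"], ["לפרסם"],
    ["מאמרים"], ["הומוריסטיים"], ["[SEP]"],
]

print(flatten_segments(example))
```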
If you use DictaBERT in your research, please cite "DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew".
BibTeX:

```bibtex
@misc{shmidman2023dictabert,
      title={DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew},
      author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel},
      year={2023},
      eprint={2308.16687},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
This work is licensed under a Creative Commons Attribution 4.0 International License.