CANINE-s (CANINE pre-trained with subword loss)
CANINE model pretrained on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation and first released in this repository.
What's special about CANINE is that it doesn't require an explicit tokenizer (such as WordPiece or SentencePiece), unlike other models such as BERT and RoBERTa. Instead, it operates directly at the character level: each character is turned into its Unicode code point.
This means that input processing is trivial and can typically be accomplished as:
input_ids = [ord(char) for char in text]
The ord() function is built into Python and turns each character into its Unicode code point.
Disclaimer: The team releasing CANINE did not write a model card for this model, so this model card has been written by the Hugging Face team.
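For illustration, here is what that conversion produces for a short, arbitrary example string:
text = "héllo"
input_ids = [ord(char) for char in text]
print(input_ids)  # [104, 233, 108, 108, 111]
In practice, the CanineTokenizer shown below performs this conversion for you and additionally handles special tokens and padding.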
Model description
CANINE is a Transformer model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-s) is trained with a subword loss, meaning that the model needs to predict the identities of subword tokens, while taking characters as input. By reading characters yet predicting subword tokens, the hard token boundary constraint found in other models such as BERT is turned into a soft inductive bias in CANINE.
- Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.
This way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.
Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT-2.
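For example, the transformers library provides CanineForSequenceClassification, which puts a classification head on top of the pretrained encoder. A minimal sketch (the number of labels and the example sentence are placeholders; the head is randomly initialized and still needs to be fine-tuned on labeled data):
from transformers import CanineTokenizer, CanineForSequenceClassification

tokenizer = CanineTokenizer.from_pretrained('google/canine-s')
model = CanineForSequenceClassification.from_pretrained('google/canine-s', num_labels=2)  # num_labels is a placeholder

encoding = tokenizer(["an example sentence"], return_tensors="pt")
outputs = model(**encoding)
logits = outputs.logits  # shape: (batch_size, num_labels)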
How to use
Here is how to use this model:
from transformers import CanineTokenizer, CanineModel

model = CanineModel.from_pretrained('google/canine-s')
tokenizer = CanineTokenizer.from_pretrained('google/canine-s')

inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")

outputs = model(**encoding)  # forward pass
pooled_output = outputs.pooler_output        # sentence-level representation, shape (batch_size, hidden_size)
sequence_output = outputs.last_hidden_state  # per-character representations, shape (batch_size, seq_len, hidden_size)
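As mentioned in the model description, the pooled output can serve as sentence-level features for a standard classifier. A minimal, illustrative sketch continuing the snippet above (the linear head and the labels are hypothetical, not part of the released model):
import torch

labels = torch.tensor([0, 1])  # hypothetical labels for the two example sentences above
classifier = torch.nn.Linear(model.config.hidden_size, 2)  # simple linear head on the pooled features

logits = classifier(pooled_output)
loss = torch.nn.functional.cross_entropy(logits, labels)
loss.backward()  # gradients flow into both the head and the encoder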
Training data
The CANINE model was pretrained on the multilingual Wikipedia data of mBERT, which includes 104 languages.
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-2103-06874,
author = {Jonathan H. Clark and
Dan Garrette and
Iulia Turc and
John Wieting},
title = {{CANINE:} Pre-training an Efficient Tokenization-Free Encoder for
Language Representation},
journal = {CoRR},
volume = {abs/2103.06874},
year = {2021},
url = {https://arxiv.org/abs/2103.06874},
archivePrefix = {arXiv},
eprint = {2103.06874},
timestamp = {Tue, 16 Mar 2021 11:26:59 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-06874.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}