lipreading-sentencepiece / tokenizer_config.json
{
  "model_max_length": 512,
  "special_tokens": [
    "<s>",
    "<pad>",
    "</s>",
    "<unk>",
    "<cls>",
    "<sep>",
    "<mask>"
  ],
  "tokenizer_class": "PreTrainedTokenizerFast"
}
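
This config caps sequences at 512 tokens, lists the special tokens, and tells transformers to instantiate a PreTrainedTokenizerFast. A minimal sketch of loading and using it follows; the repository ID "snoop2head/lipreading-sentencepiece" is inferred from the page header and may need to be replaced with the actual repo ID or a local path.

# Minimal sketch: load the tokenizer described by this tokenizer_config.json.
# Assumes the repo ID "snoop2head/lipreading-sentencepiece"; swap in the real
# repo ID or a local directory containing tokenizer_config.json/tokenizer.json.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("snoop2head/lipreading-sentencepiece")

# model_max_length comes from this config file.
print(tokenizer.model_max_length)  # 512

# Encode a sample string; special tokens such as <s> and </s> are applied
# according to the post-processing defined in the accompanying tokenizer files.
encoded = tokenizer("hello world", truncation=True, max_length=512)
print(encoded.input_ids)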