Add fast tokenizer
Generated with:
```python
from transformers import T5Tokenizer
from transformers.convert_slow_tokenizer import convert_slow_tokenizer

# Load the slow SentencePiece-based tokenizer (legacy=False opts into the fixed tokenization behavior)
tokenizer = T5Tokenizer('vocabulary/spm.model', legacy=False)
# Convert it to a Rust-backed fast tokenizer and serialize it as tokenizer.json
fast_tokenizer = convert_slow_tokenizer(tokenizer)
fast_tokenizer.save("madlad400-3b-mt/tokenizer.json")
```
- tokenizer.json +3 -0

tokenizer.json ADDED
```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15aa9b3cbcf2ec220c6044f898e27fd302067a336df5369ec55a8f385f849d3b
+size 8236606
```
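Note that the three added lines are a Git LFS pointer, not the tokenizer itself: a clone without LFS enabled yields only this pointer, while the real 8 MB `tokenizer.json` is fetched by content hash. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is illustrative, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file: one 'key value' pair per line."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer content exactly as added in this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:15aa9b3cbcf2ec220c6044f898e27fd302067a336df5369ec55a8f385f849d3b
size 8236606"""

fields = parse_lfs_pointer(pointer)
print(fields["oid"])          # SHA-256 identifying the real tokenizer.json
print(int(fields["size"]))    # size of the actual file in bytes: 8236606
```

The `oid` can be checked against the downloaded file's SHA-256 to verify that LFS resolved the pointer correctly.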