These are the items in the vocab.json, sorted into separate categories:
1) Emojis - As you can see in the vocab.json, emojis are not represented in a human-readable form.
The text in the vocab.json is not reversible:
if the emoji "😅" tokenizes to the vocab entry "ðŁĺħ",
then prompting the literal string "ðŁĺħ" will not give the same result as prompting the original emoji "😅".
As such, a list of emojis must be fetched from elsewhere and placed alongside the vocab.json (see the sketch below).
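A minimal sketch of this non-reversibility, assuming the Hugging Face transformers CLIPTokenizer (the exact token split may vary by tokenizer version):

    from transformers import CLIPTokenizer  # assumed tokenizer; any CLIP BPE tokenizer behaves similarly

    tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

    # The emoji maps to byte-level vocab entries such as 'ðŁĺħ</w>'
    emoji_ids = tok("😅", add_special_tokens=False)["input_ids"]
    print(tok.convert_ids_to_tokens(emoji_ids))

    # Tokenizing the literal string from vocab.json gives DIFFERENT ids,
    # so prompting "ðŁĺħ" is not equivalent to prompting the emoji itself.
    literal_ids = tok("ðŁĺħ", add_special_tokens=False)["input_ids"]
    print(emoji_ids == literal_ids)  # expected: False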
2) Token type
A 'suffix' token is any token that ends with the trailing-whitespace marker </w>; these are the majority.
Tokens which lack the trailing </w> marker are sorted as 'prefix' tokens.
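A minimal sketch of how this split could be made, assuming vocab.json has been downloaded locally (the path is illustrative):

    import json

    # 'vocab.json' path is an assumption; adjust to wherever the raw file lives.
    with open("vocab.json", encoding="utf-8") as f:
        vocab = json.load(f)  # maps token text -> token ID

    suffix = {t: i for t, i in vocab.items() if t.endswith("</w>")}      # trailing-whitespace tokens
    prefix = {t: i for t, i in vocab.items() if not t.endswith("</w>")}  # everything else
    print(len(suffix), len(prefix))  # suffix tokens are the majority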
3) ID rarity
The higher the ID in the vocab.json, the less frequently that word appeared in the CLIP training data.
The tokens are sorted into five groups: common (lowest IDs) => average => rare => weird => exotic (highest IDs).
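One possible way to bucket IDs into the five groups; the exact cutoffs used for these lists are not stated here, so an even split of the ID range is assumed:

    import json

    with open("vocab.json", encoding="utf-8") as f:  # path is an assumption
        vocab = json.load(f)

    groups = ["common", "average", "rare", "weird", "exotic"]
    max_id = max(vocab.values())

    def rarity(token_id):
        # Even split of the ID range is illustrative only; the real cutoffs may differ.
        bucket = min(token_id * len(groups) // (max_id + 1), len(groups) - 1)
        return groups[bucket]

    print(rarity(0), rarity(max_id))  # 'common' ... 'exotic'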
4) Filtering (certain items are removed)
Tokens which contain special symbols that break the prompt syntax have been removed.
For the full list of tokens, refer to the vocab.json located in the 'raw' folder inside the 'vocab' folder.
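A sketch of a filter in this spirit; the exact set of symbols that were removed is not listed here, so the allowed character set below is only an assumption:

    import json, re

    with open("vocab.json", encoding="utf-8") as f:  # path is an assumption
        vocab = json.load(f)

    # Keep only tokens made of lowercase letters/digits, optionally ending in </w>.
    # This is an illustrative rule, not the exact filter used for these lists.
    allowed = re.compile(r"^[a-z0-9]+(</w>)?$")
    kept = {t: i for t, i in vocab.items() if allowed.match(t)}
    print(f"{len(vocab)} tokens -> {len(kept)} after filtering")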