Create note_these_are_text_encodings.txt
vocab/text_encodings/note_these_are_text_encodings.txt
These are the items in vocab.json, sorted into separate categories:

1) Emojis - As you can see in vocab.json, emojis are not represented in a comprehensible manner.

The text in vocab.json is not reversible:

if tokenization maps "😅" => "ðŁĺħ",

then you can't prompt "ðŁĺħ" and expect to get the same result as the original emoji, "😅".

As such, a list of emojis must be fetched from elsewhere and placed alongside the vocab.json.
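
For illustration, here is a minimal Python sketch of the byte-to-unicode table used by GPT-2-style BPE tokenizers (CLIP's tokenizer uses the same scheme), which shows why "😅" is stored as "ðŁĺħ":

def bytes_to_unicode():
    # printable bytes map to themselves; all other byte values are
    # shifted into unused code points starting at 256
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

byte_encoder = bytes_to_unicode()
emoji = "😅"                                  # UTF-8 bytes: f0 9f 98 85
print("".join(byte_encoder[b] for b in emoji.encode("utf-8")))   # -> ðŁĺħ
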
2) Token type

A 'suffix' token is any token with a trailing whitespace marker </w>; these are the majority.

The tokens which lack the trailing </w> marker are sorted as 'prefix' tokens.
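
A small sketch of this split, assuming vocab.json maps token strings to integer IDs as in CLIP's tokenizer:

import json

with open("vocab.json", encoding="utf-8") as f:
    vocab = json.load(f)   # token string -> integer ID

# tokens ending in </w> close a word ('suffix'); the rest are 'prefix'
suffix_tokens = {t: i for t, i in vocab.items() if t.endswith("</w>")}
prefix_tokens = {t: i for t, i in vocab.items() if not t.endswith("</w>")}
print(len(suffix_tokens), len(prefix_tokens))   # suffix tokens are the majority
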
3) ID rarity

The higher the ID in vocab.json, the less often that word appeared in the CLIP training data.

The tokens are sorted into five groups: common (lowest IDs) => average => rare => weird => exotic (highest IDs).
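
A sketch of the bucketing; the note doesn't state the group boundaries, so splitting the ID range into five equal slices is an assumption:

GROUPS = ["common", "average", "rare", "weird", "exotic"]
VOCAB_SIZE = 49408   # size of CLIP's BPE vocabulary

def rarity(token_id):
    # lower IDs were merged earlier, i.e. were more frequent in training
    return GROUPS[min(token_id * len(GROUPS) // VOCAB_SIZE, len(GROUPS) - 1)]

print(rarity(1000))    # -> common
print(rarity(48000))   # -> exotic
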
4) Filtering (certain items are removed)

Tokens which contain special symbols that mess up the syntax have been removed.
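
A hedged sketch of such a filter; the exact symbol list isn't given here, so the character class below is only an assumption about what "messes up the syntax":

import re

SPECIAL = re.compile(r"[\[\]{}()<>:|\\]")   # assumed problem symbols

def keep(token):
    # check the token text itself, not the </w> end-of-word marker
    return not SPECIAL.search(token.replace("</w>", ""))

print(keep("hello</w>"))   # -> True
print(keep("[cls]</w>"))   # -> False
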
For the full list of tokens, refer to the vocab.json located in the 'raw' folder.