{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"source": [
"This Notebook is a Stable-diffusion tool which allows you to find similiar tokens from the SD 1.5 vocab.json that you can use for text-to-image generation. Try this Free online SD 1.5 generator with the results: https://perchance.org/fusion-ai-image-generator\n",
"\n",
"Scroll to the bottom of the notebook to see the guide for how this works."
],
"metadata": {
"id": "L7JTcbOdBPfh"
}
},
{
"cell_type": "code",
"source": [
"# @title ✳️ Load/initialize values\n",
"# Load the tokens into the colab\n",
"!git clone https://huggingface.co/datasets/codeShare/sd_tokens\n",
"import torch\n",
"from torch import linalg as LA\n",
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
"%cd /content/sd_tokens\n",
"token = torch.load('sd15_tensors.pt', map_location=device, weights_only=True)\n",
"#-----#\n",
"VOCAB_FILENAME = 'tokens_most_similiar_to_girl'\n",
"ACTIVE_IMG = ''\n",
"#-----#\n",
"\n",
"#Import the vocab.json\n",
"import json\n",
"import pandas as pd\n",
"with open('vocab.json', 'r') as f:\n",
" data = json.load(f)\n",
"\n",
"_df = pd.DataFrame({'count': data})['count']\n",
"\n",
"vocab = {\n",
" value: key for key, value in _df.items()\n",
"}\n",
"#-----#\n",
"\n",
"# Define functions/constants\n",
"NUM_TOKENS = 49407\n",
"\n",
"def absolute_value(x):\n",
" return max(x, -x)\n",
"\n",
"\n",
"def token_similarity(A, B):\n",
"\n",
" #Vector length#\n",
" _A = LA.vector_norm(A, ord=2)\n",
" _B = LA.vector_norm(B, ord=2)\n",
"\n",
" #----#\n",
" result = torch.dot(A,B)/(_A*_B)\n",
" #similarity_pcnt = absolute_value(result.item()*100)\n",
" similarity_pcnt = result.item()*100\n",
" similarity_pcnt_aprox = round(similarity_pcnt, 3)\n",
" result = f'{similarity_pcnt_aprox} %'\n",
" return result\n",
"\n",
"\n",
"def similarity(id_A , id_B):\n",
" #Tensors\n",
" A = token[id_A]\n",
" B = token[id_B]\n",
" return token_similarity(A, B)\n",
"#----#\n",
"\n",
"#print(vocab[8922]) #the vocab item for ID 8922\n",
"#print(token[8922].shape) #dimension of the token\n",
"\n",
"mix_with = \"\"\n",
"mix_method = \"None\"\n",
"\n",
"#-------------#\n",
"# UNUSED\n",
"\n",
"# Get the 10 lowest values from a tensor as a string\n",
"def get_valleys (A):\n",
" sorted, indices = torch.sort(A,dim=0 , descending=False)\n",
" result = \"{\"\n",
" for index in range(10):\n",
" id = indices[index].item()\n",
" result = result + f\"{id}\"\n",
" if(index<9):\n",
" result = result + \",\"\n",
" result = result + \"}\"\n",
" return result\n",
"\n",
"# Get the 10 highest values from a tensor as a string\n",
"def get_peaks (A):\n",
" sorted, indices = torch.sort(A,dim=0 , descending=True)\n",
" result = \"{\"\n",
" for index in range(10):\n",
" id = indices[index].item()\n",
" result = result + f\"{id}\"\n",
" if(index<9):\n",
" result = result + \",\"\n",
" result = result + \"}\"\n",
" return result"
],
"metadata": {
"id": "Ch9puvwKH1s3",
"collapsed": true,
"outputId": "033c251a-2043-40e7-9500-4da870ffa7fd",
"colab": {
"base_uri": "https://localhost:8080/"
},
"cellView": "form"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Cloning into 'sd_tokens'...\n",
"remote: Enumerating objects: 20, done.\u001b[K\n",
"remote: Counting objects: 100% (17/17), done.\u001b[K\n",
"remote: Compressing objects: 100% (17/17), done.\u001b[K\n",
"remote: Total 20 (delta 4), reused 0 (delta 0), pack-reused 3 (from 1)\u001b[K\n",
"Unpacking objects: 100% (20/20), 310.37 KiB | 2.10 MiB/s, done.\n",
"Filtering content: 100% (3/3), 160.82 MiB | 26.64 MiB/s, done.\n",
"/content/sd_tokens\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"# @title ⚡ Get similiar tokens\n",
"import torch\n",
"from transformers import AutoTokenizer\n",
"tokenizer = AutoTokenizer.from_pretrained(\"openai/clip-vit-large-patch14\", clean_up_tokenization_spaces = False)\n",
"\n",
"# @markdown Write name of token to match against\n",
"token_name = \" banana\" # @param {type:'string',\"placeholder\":\"leave empty for random value token\"}\n",
"\n",
"prompt = token_name\n",
"# @markdown (optional) Mix the token with something else\n",
"mix_with = \"\" # @param {\"type\":\"string\",\"placeholder\":\"leave empty for random value token\"}\n",
"mix_method = \"None\" # @param [\"None\" , \"Average\", \"Subtract\"] {allow-input: true}\n",
"w = 0.5 # @param {type:\"slider\", min:0, max:1, step:0.01}\n",
"# @markdown Limit char size of included token\n",
"\n",
"min_char_size = 0 # param {type:\"slider\", min:0, max: 50, step:1}\n",
"char_range = 50 # param {type:\"slider\", min:0, max: 50, step:1}\n",
"\n",
"tokenizer_output = tokenizer(text = prompt)\n",
"input_ids = tokenizer_output['input_ids']\n",
"id_A = input_ids[1]\n",
"A = torch.tensor(token[id_A])\n",
"A = A/A.norm(p=2, dim=-1, keepdim=True)\n",
"#-----#\n",
"tokenizer_output = tokenizer(text = mix_with)\n",
"input_ids = tokenizer_output['input_ids']\n",
"id_C = input_ids[1]\n",
"C = torch.tensor(token[id_C])\n",
"C = C/C.norm(p=2, dim=-1, keepdim=True)\n",
"#-----#\n",
"sim_AC = torch.dot(A,C)\n",
"#-----#\n",
"print(input_ids)\n",
"#-----#\n",
"\n",
"#if no imput exists we just randomize the entire thing\n",
"if (prompt == \"\"):\n",
" id_A = -1\n",
" print(\"Tokenized prompt tensor A is a random valued tensor with no ID\")\n",
" R = torch.rand(A.shape)\n",
" R = R/R.norm(p=2, dim=-1, keepdim=True)\n",
" A = R\n",
" name_A = 'random_A'\n",
"\n",
"#if no imput exists we just randomize the entire thing\n",
"if (mix_with == \"\"):\n",
" id_C = -1\n",
" print(\"Tokenized prompt 'mix_with' tensor C is a random valued tensor with no ID\")\n",
" R = torch.rand(A.shape)\n",
" R = R/R.norm(p=2, dim=-1, keepdim=True)\n",
" C = R\n",
" name_C = 'random_C'\n",
"\n",
"name_A = \"A of random type\"\n",
"if (id_A>-1):\n",
" name_A = vocab[id_A]\n",
"\n",
"name_C = \"token C of random type\"\n",
"if (id_C>-1):\n",
" name_C = vocab[id_C]\n",
"\n",
"print(f\"The similarity between A '{name_A}' and C '{name_C}' is {round(sim_AC.item()*100,2)} %\")\n",
"\n",
"if (mix_method == \"None\"):\n",
" print(\"No operation\")\n",
"\n",
"if (mix_method == \"Average\"):\n",
" A = w*A + (1-w)*C\n",
" _A = LA.vector_norm(A, ord=2)\n",
" print(f\"Tokenized prompt tensor A '{name_A}' token has been recalculated as A = w*A + (1-w)*C , where C is '{name_C}' token , for w = {w} \")\n",
"\n",
"if (mix_method == \"Subtract\"):\n",
" tmp = w*A - (1-w)*C\n",
" tmp = tmp/tmp.norm(p=2, dim=-1, keepdim=True)\n",
" A = tmp\n",
" #//---//\n",
" print(f\"Tokenized prompt tensor A '{name_A}' token has been recalculated as A = _A*norm(w*A - (1-w)*C) , where C is '{name_C}' token , for w = {w} \")\n",
"\n",
"#OPTIONAL : Add/subtract + normalize above result with another token. Leave field empty to get a random value tensor\n",
"\n",
"dots = torch.zeros(NUM_TOKENS)\n",
"for index in range(NUM_TOKENS):\n",
" id_B = index\n",
" B = torch.tensor(token[id_B])\n",
" B = B/B.norm(p=2, dim=-1, keepdim=True)\n",
" sim_AB = torch.dot(A,B)\n",
" dots[index] = sim_AB\n",
"\n",
"\n",
"sorted, indices = torch.sort(dots,dim=0 , descending=True)\n",
"#----#\n",
"if (mix_method == \"Average\"):\n",
" print(f'Calculated all cosine-similarities between the average of token {name_A} and {name_C} with Id_A = {id_A} and mixed Id_C = {id_C} as a 1x{sorted.shape[0]} tensor')\n",
"if (mix_method == \"Subtract\"):\n",
" print(f'Calculated all cosine-similarities between the subtract of token {name_A} and {name_C} with Id_A = {id_A} and mixed Id_C = {id_C} as a 1x{sorted.shape[0]} tensor')\n",
"if (mix_method == \"None\"):\n",
" print(f'Calculated all cosine-similarities between the token {name_A} with Id_A = {id_A} with the the rest of the {NUM_TOKENS} tokens as a 1x{sorted.shape[0]} tensor')\n",
"\n",
"#Produce a list id IDs that are most similiar to the prompt ID at positiion 1 based on above result\n",
"\n",
"# @markdown Set print options\n",
"list_size = 100 # @param {type:'number'}\n",
"print_ID = False # @param {type:\"boolean\"}\n",
"print_Similarity = True # @param {type:\"boolean\"}\n",
"print_Name = True # @param {type:\"boolean\"}\n",
"print_Divider = True # @param {type:\"boolean\"}\n",
"\n",
"\n",
"if (print_Divider):\n",
" print('//---//')\n",
"\n",
"print('')\n",
"print('Here is the result : ')\n",
"print('')\n",
"\n",
"for index in range(list_size):\n",
" id = indices[index].item()\n",
" if (print_Name):\n",
" print(f'{vocab[id]}') # vocab item\n",
" if (print_ID):\n",
" print(f'ID = {id}') # IDs\n",
" if (print_Similarity):\n",
" print(f'similiarity = {round(sorted[index].item()*100,2)} %')\n",
" if (print_Divider):\n",
" print('--------')\n",
"\n",
"#Print the sorted list from above result\n",
"\n",
"#The prompt will be enclosed with the <|start-of-text|> and <|end-of-text|> tokens, which is why output will be [49406, ... , 49407].\n",
"\n",
"#You can leave the 'prompt' field empty to get a random value tensor. Since the tensor is random value, it will not correspond to any tensor in the vocab.json list , and this it will have no ID.\n",
"\n",
"# Save results as .db file\n",
"import shelve\n",
"VOCAB_FILENAME = 'tokens_most_similiar_to_' + name_A.replace('</w>','').strip()\n",
"d = shelve.open(VOCAB_FILENAME)\n",
"#NUM TOKENS == 49407\n",
"for index in range(NUM_TOKENS):\n",
" #print(d[f'{index}']) #<-----Use this to read values from the .db file\n",
" d[f'{index}']= vocab[indices[index].item()] #<---- write values to .db file\n",
"#----#\n",
"d.close() #close the file\n",
"# See this link for additional stuff to do with shelve: https://docs.python.org/3/library/shelve.html"
],
"metadata": {
"id": "iWeFnT1gAx6A",
"cellView": "form"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Below image interrogator appends CLIP tokens to either end of the 'must_contain' text , and seeks to maximize similarity with the image encoding.\n",
"\n",
"It takes a long while to check all the tokens (too long!) so this cell only samples a range of the 49K available tokens.\n",
"\n",
"You can run this cell, then paste the result into the 'must_contain' box , and then run the cell again.\n",
"\n",
"Check the sd_tokens folder for stored .db files from running the '⚡ Get similiar tokens' cell. These can be used in the ⚡+🖼️ -> 📝 Token-Sampling Image interrogator cell\n"
],
"metadata": {
"id": "IUCuV9RtQpBn"
}
},
{
"cell_type": "code",
"source": [
"# @title ⚡+🖼️ -> 📝 Token-Sampling Image interrogator\n",
"#-----#\n",
"NUM_TOKENS = 49407\n",
"import shelve\n",
"db_vocab = shelve.open(VOCAB_FILENAME)\n",
"print(f'using the tokens found in {VOCAB_FILENAME}.db as the vocab')\n",
"# @markdown # What do you want to to mimic?\n",
"use = '🖼️image_encoding from image' # @param ['📝text_encoding from prompt', '🖼️image_encoding from image']\n",
"# @markdown --------------------------\n",
"use_token_padding = True # param {type:\"boolean\"} <---- Enabled by default\n",
"prompt = \"photo of a banana\" # @param {\"type\":\"string\",\"placeholder\":\"Write a prompt\"}\n",
"#-----#\n",
"prompt_A = prompt\n",
"#-----#\n",
"from google.colab import files\n",
"def upload_files():\n",
" from google.colab import files\n",
" uploaded = files.upload()\n",
" for k, v in uploaded.items():\n",
" open(k, 'wb').write(v)\n",
" return list(uploaded.keys())\n",
"#Get image\n",
"# You can use \"http://images.cocodataset.org/val2017/000000039769.jpg\" for testing\n",
"image_url = \"http://images.cocodataset.org/val2017/000000039769.jpg\" # @param {\"type\":\"string\",\"placeholder\":\"leave empty for local upload (scroll down to see it)\"}\n",
"colab_image_path = \"\" # @param {\"type\":\"string\",\"placeholder\": \"eval. as '/content/sd_tokens/' + **your input**\"}\n",
"# @markdown --------------------------\n",
"from PIL import Image\n",
"import requests\n",
"image_A = \"\"\n",
"#----#\n",
"if(use == '🖼️image_encoding from image'):\n",
" if image_url == \"\":\n",
" import cv2\n",
" from google.colab.patches import cv2_imshow\n",
" # Open the image.\n",
" if colab_image_path == \"\":\n",
" keys = upload_files()\n",
" for key in keys:\n",
" image_A = cv2.imread(\"/content/sd_tokens/\" + key)\n",
" colab_image_path = \"/content/sd_tokens/\" + key\n",
" else:\n",
" image_A = cv2.imread(\"/content/sd_tokens/\" + colab_image_path)\n",
" else:\n",
" image_A = Image.open(requests.get(image_url, stream=True).raw)\n",
"#------#\n",
"from transformers import AutoTokenizer\n",
"tokenizer = AutoTokenizer.from_pretrained(\"openai/clip-vit-large-patch14\", clean_up_tokenization_spaces = False)\n",
"from transformers import CLIPProcessor, CLIPModel\n",
"processor = CLIPProcessor.from_pretrained(\"openai/clip-vit-large-patch14\" , clean_up_tokenization_spaces = True)\n",
"model = CLIPModel.from_pretrained(\"openai/clip-vit-large-patch14\")\n",
"#-----#\n",
"if(use == '🖼️image_encoding from image'):\n",
" # Get image features\n",
" inputs = processor(images=image_A, return_tensors=\"pt\")\n",
" image_features = model.get_image_features(**inputs)\n",
" image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)\n",
" name_A = \"the image\"\n",
"#-----#\n",
"if(use == '📝text_encoding from prompt'):\n",
" # Get text features\n",
" inputs = tokenizer(text = prompt, padding=True, return_tensors=\"pt\")\n",
" text_features_A = model.get_text_features(**inputs)\n",
" name_A = prompt\n",
"#-----#\n",
"# @markdown # The output...\n",
"must_start_with = \"\" # @param {\"type\":\"string\",\"placeholder\":\"write a text\"}\n",
"must_contain = \"\" # @param {\"type\":\"string\",\"placeholder\":\"write a text\"}\n",
"must_end_with = \"\" # @param {\"type\":\"string\",\"placeholder\":\"write a text\"}\n",
"# @markdown -----\n",
"# @markdown # Use a range of tokens from the vocab.json (slow method)\n",
"start_search_at_index = 0 # @param {type:\"slider\", min:0, max: 49407, step:100}\n",
"# @markdown The lower the start_index, the more similiar the sampled tokens will be to the target token assigned in the '⚡ Get similiar tokens' cell\". If the cell was not run, then it will use tokens ordered by similarity to the \"girl\\</w>\" token\n",
"start_search_at_ID = start_search_at_index\n",
"search_range = 1000 # @param {type:\"slider\", min:100, max:49407, step:100}\n",
"\n",
"samples_per_iter = 10 # @param {type:\"slider\", min:10, max: 100, step:10}\n",
"\n",
"iterations = 5 # @param {type:\"slider\", min:1, max: 20, step:0}\n",
"restrictions = 'None' # @param [\"None\", \"Suffix only\", \"Prefix only\"]\n",
"#markdown Limit char size of included token <----- Disabled\n",
"min_char_size = 0 #param {type:\"slider\", min:0, max: 20, step:1}\n",
"char_range = 50 #param {type:\"slider\", min:0, max: 20, step:1}\n",
"# markdown # ...or paste prompt items\n",
"# markdown Format must be {item1|item2|...}. You can aquire prompt items using the Randomizer in the fusion gen: https://perchance.org/fusion-ai-image-generator\n",
"_enable = False # param {\"type\":\"boolean\"}\n",
"prompt_items = \"\" # param {\"type\":\"string\",\"placeholder\":\"{item1|item2|...}\"}\n",
"#-----#\n",
"#-----#\n",
"START = start_search_at_ID\n",
"RANGE = min(search_range , max(1,NUM_TOKENS - start_search_at_ID))\n",
"#-----#\n",
"import math, random\n",
"NUM_PERMUTATIONS = 6\n",
"ITERS = iterations\n",
"#-----#\n",
"#LOOP START\n",
"#-----#\n",
"# Check if original solution is best\n",
"best_sim = 0\n",
"name = must_start_with + must_contain + must_end_with\n",
"ids = processor.tokenizer(text=name, padding=use_token_padding, return_tensors=\"pt\")\n",
"text_features = model.get_text_features(**ids)\n",
"text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)\n",
"#------#\n",
"sim = 0\n",
"if(use == '🖼️image_encoding from image'):\n",
" logit_scale = model.logit_scale.exp()\n",
" torch.matmul(text_features, image_features.t()) * logit_scale\n",
" sim = torch.nn.functional.cosine_similarity(text_features, image_features) * logit_scale\n",
"#-----#\n",
"if(use == '📝text_encoding from prompt'):\n",
" sim = torch.nn.functional.cosine_similarity(text_features, text_features_A)\n",
"#-----#\n",
"best_sim = sim\n",
"best_name = name\n",
"name_B = must_contain\n",
"#------#\n",
"results_sim = torch.zeros(ITERS*NUM_PERMUTATIONS)\n",
"results_name_B = {}\n",
"results_name = {}\n",
"#-----#\n",
"for iter in range(ITERS):\n",
" dots = torch.zeros(min(list_size,RANGE))\n",
" is_trail = torch.zeros(min(list_size,RANGE))\n",
"\n",
" #-----#\n",
"\n",
" for index in range(samples_per_iter):\n",
" _start = START\n",
" id_C = random.randint(_start , _start + RANGE)\n",
" name_C = db_vocab[f'{id_C}']\n",
" is_Prefix = 0\n",
" #Skip if non-AZ characters are found\n",
" #???\n",
" #-----#\n",
" # Decide if we should process prefix/suffix tokens\n",
" if name_C.find('</w>')<=-1:\n",
" is_Prefix = 1\n",
" if restrictions != \"Prefix only\":\n",
" continue\n",
" else:\n",
" if restrictions == \"Prefix only\":\n",
" continue\n",
" #-----#\n",
" # Decide if char-size is within range\n",
" if len(name_C) < min_char_size:\n",
" continue\n",
" if len(name_C) > min_char_size + char_range:\n",
" continue\n",
" #-----#\n",
" name_CB = must_start_with + name_C + name_B + must_end_with\n",
" if is_Prefix>0:\n",
" name_CB = must_start_with + ' ' + name_C + '-' + name_B + ' ' + must_end_with\n",
" #-----#\n",
" if(use == '🖼️image_encoding from image'):\n",
" ids_CB = processor.tokenizer(text=name_CB, padding=use_token_padding, return_tensors=\"pt\")\n",
" text_features = model.get_text_features(**ids_CB)\n",
" text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)\n",
" logit_scale = model.logit_scale.exp()\n",
" torch.matmul(text_features, image_features.t()) * logit_scale\n",
" sim_CB = torch.nn.functional.cosine_similarity(text_features, image_features) * logit_scale\n",
" #-----#\n",
" if(use == '📝text_encoding from prompt'):\n",
" ids_CB = processor.tokenizer(text=name_CB, padding=use_token_padding, return_tensors=\"pt\")\n",
" text_features = model.get_text_features(**ids_CB)\n",
" text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)\n",
" sim_CB = torch.nn.functional.cosine_similarity(text_features, text_features_A)\n",
" #-----#\n",
" #-----#\n",
" if restrictions == \"Prefix only\":\n",
" result = sim_CB\n",
" result = result.item()\n",
" dots[index] = result\n",
" continue\n",
" #-----#\n",
" if(use == '🖼️image_encoding from image'):\n",
" name_BC = must_start_with + name_B + name_C + must_end_with\n",
" ids_BC = processor.tokenizer(text=name_BC, padding=use_token_padding, return_tensors=\"pt\")\n",
" text_features = model.get_text_features(**ids_BC)\n",
" text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)\n",
" logit_scale = model.logit_scale.exp()\n",
" torch.matmul(text_features, image_features.t()) * logit_scale\n",
" sim_BC = torch.nn.functional.cosine_similarity(text_features, image_features) * logit_scale\n",
" #-----#\n",
" if(use == '📝text_encoding from prompt'):\n",
" name_BC = must_start_with + name_B + name_C + must_end_with\n",
" ids_BC = processor.tokenizer(text=name_BC, padding=use_token_padding, return_tensors=\"pt\")\n",
" text_features = model.get_text_features(**ids_BC)\n",
" text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)\n",
" sim_BC = torch.nn.functional.cosine_similarity(text_features, text_features_A)\n",
" #-----#\n",
" result = sim_CB\n",
" if(sim_BC > sim_CB):\n",
" is_trail[index] = 1\n",
" result = sim_BC\n",
" #-----#\n",
" #result = absolute_value(result.item())\n",
" result = result.item()\n",
" dots[index] = result\n",
" #----#\n",
" sorted, indices = torch.sort(dots,dim=0 , descending=True)\n",
" # @markdown ----------\n",
" # @markdown # Print options\n",
" list_size = 100 # param {type:'number'}\n",
" print_ID = False # @param {type:\"boolean\"}\n",
" print_Similarity = True # @param {type:\"boolean\"}\n",
" print_Name = True # @param {type:\"boolean\"}\n",
" print_Divider = True # @param {type:\"boolean\"}\n",
" print_Suggestions = False # @param {type:\"boolean\"}\n",
" #----#\n",
" if (print_Divider):\n",
" print('//---//')\n",
" #----#\n",
" print('')\n",
"\n",
" used_reference = f'the text_encoding for {prompt_A}'\n",
" if(use == '🖼️image_encoding from image'):\n",
" used_reference = 'the image input'\n",
" print(f'These token pairings within the range ID = {_start} to ID = {_start + RANGE} most closely match {used_reference}: ')\n",
" print('')\n",
" #----#\n",
" aheads = \"{\"\n",
" trails = \"{\"\n",
" tmp = \"\"\n",
" #----#\n",
" max_sim_ahead = 0\n",
" max_sim_trail = 0\n",
" sim = 0\n",
" max_name_ahead = ''\n",
" max_name_trail = ''\n",
" #----#\n",
" for index in range(min(list_size,RANGE)):\n",
" id = _start + indices[index].item()\n",
" name = db_vocab[f'{id}']\n",
" #-----#\n",
" if (name.find('</w>')<=-1):\n",
" name = name + '-'\n",
" if(is_trail[index]>0):\n",
" trails = trails + name + \"|\"\n",
" else:\n",
" aheads = aheads + name + \"|\"\n",
" #----#\n",
" sim = sorted[index].item()\n",
" #----#\n",
" if(is_trail[index]>0):\n",
" if sim>max_sim_trail:\n",
" max_sim_trail = sim\n",
" max_name_trail = name\n",
" max_name_trail = max_name_trail.strip()\n",
"\n",
" else:\n",
" if sim>max_sim_ahead:\n",
" max_sim_ahead = sim\n",
" max_name_ahead = name\n",
" #------#\n",
" trails = (trails + \"&&&&\").replace(\"|&&&&\", \"}\").replace(\"</w>\", \" \").replace(\"{&&&&\", \"\")\n",
" aheads = (aheads + \"&&&&\").replace(\"|&&&&\", \"}\").replace(\"</w>\", \" \").replace(\"{&&&&\", \"\")\n",
" #-----#\n",
"\n",
" if(print_Suggestions):\n",
" print(f\"place these items ahead of prompt : {aheads}\")\n",
" print(\"\")\n",
" print(f\"place these items behind the prompt : {trails}\")\n",
" print(\"\")\n",
"\n",
" tmp = must_start_with + ' ' + max_name_ahead + name_B + ' ' + must_end_with\n",
" tmp = tmp.strip().replace('</w>', ' ')\n",
" print(f\"max_similarity_ahead = {round(max_sim_ahead,2)} % when using '{tmp}' \")\n",
" print(\"\")\n",
" tmp = must_start_with + ' ' + name_B + max_name_trail + ' ' + must_end_with\n",
" tmp = tmp.strip().replace('</w>', ' ')\n",
" print(f\"max_similarity_trail = {round(max_sim_trail,2)} % when using '{tmp}' \")\n",
" #-----#\n",
" #STEP 2\n",
" import random\n",
" #-----#\n",
" for index in range(NUM_PERMUTATIONS):\n",
" name_inner = ''\n",
" if index == 0 : name_inner = name_B\n",
" if index == 1: name_inner = max_name_ahead\n",
" if index == 2: name_inner = max_name_trail\n",
" if index == 3: name_inner = name_B + max_name_trail\n",
" if index == 4: name_inner = max_name_ahead + name_B\n",
" if index == 5: name_inner = max_name_ahead + name_B + max_name_trail\n",
" if name_inner == '': name_inner = max_name_ahead + name_B + max_name_trail\n",
"\n",
" name = must_start_with + name_inner + must_end_with\n",
" #----#\n",
" ids = processor.tokenizer(text=name, padding=use_token_padding, return_tensors=\"pt\")\n",
" #----#\n",
" sim = 0\n",
" if(use == '🖼️image_encoding from image'):\n",
" text_features = model.get_text_features(**ids)\n",
" text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)\n",
" logit_scale = model.logit_scale.exp()\n",
" torch.matmul(text_features, image_features.t()) * logit_scale\n",
" sim = torch.nn.functional.cosine_similarity(text_features, image_features) * logit_scale\n",
" #-----#\n",
" if(use == '📝text_encoding from prompt'):\n",
" text_features = model.get_text_features(**ids)\n",
" text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)\n",
" sim = torch.nn.functional.cosine_similarity(text_features, text_features_A)\n",
" #-----#\n",
" results_name[iter*NUM_PERMUTATIONS + index] = name\n",
" results_sim[iter*NUM_PERMUTATIONS + index] = sim\n",
" results_name_B[iter*NUM_PERMUTATIONS + index] = name_inner.replace('</w>',' ')\n",
" #------#\n",
" #name_B = results_name_B[iter*NUM_PERMUTATIONS + random.randint(0,3)]\n",
" tmp = iter*NUM_PERMUTATIONS\n",
" _name_B=''\n",
" if results_sim[tmp+1]>results_sim[tmp+2]: _name_B = results_name_B[tmp + 3]\n",
" if results_sim[tmp+2]>results_sim[tmp+1]: _name_B = results_name_B[tmp + 4]\n",
"\n",
" if _name_B != name_B:\n",
" name_B=_name_B\n",
" else:\n",
" name_B = results_name_B[tmp + 5]\n",
"\n",
"#--------#\n",
"print('')\n",
"if(use == '🖼️image_encoding from image' and colab_image_path != \"\"):\n",
" from google.colab.patches import cv2_imshow\n",
" cv2_imshow(image_A)\n",
"#-----#\n",
"print('')\n",
"sorted, indices = torch.sort(results_sim,dim=0 , descending=True)\n",
"\n",
"for index in range(ITERS*NUM_PERMUTATIONS):\n",
" name_inner = results_name[indices[index].item()]\n",
" print(must_start_with + name_inner + must_end_with)\n",
" print(f'similiarity = {round(sorted[index].item(),2)} %')\n",
" print('------')\n",
"#------#\n",
"db_vocab.close() #close the file"
],
"metadata": {
"collapsed": true,
"id": "fi0jRruI0-tu",
"cellView": "form"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title (Optional) ⚡Actively set which Vocab list to use for the interrogator\n",
"token_name = \"\" # @param {\"type\":\"string\",\"placeholder\":\"Write a token_name used earlier\"}\n",
"VOCAB_FILENAME = 'tokens_most_similiar_to_' + token_name.replace('</w>','').strip()\n",
"print(f'Using a vocab ordered to most similiar to the token {token_name}')"
],
"metadata": {
"id": "FYa96UCQuE1U"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 💫 Compare Text encodings\n",
"prompt_A = \"banana\" # @param {\"type\":\"string\",\"placeholder\":\"Write a prompt\"}\n",
"prompt_B = \"bike \" # @param {\"type\":\"string\",\"placeholder\":\"Write a prompt\"}\n",
"use_token_padding = True # param {type:\"boolean\"} <----- Enabled by default\n",
"#-----#\n",
"from transformers import AutoTokenizer\n",
"tokenizer = AutoTokenizer.from_pretrained(\"openai/clip-vit-large-patch14\",\n",
"clean_up_tokenization_spaces = False)\n",
"#-----#\n",
"from transformers import CLIPProcessor, CLIPModel\n",
"processor = CLIPProcessor.from_pretrained(\"openai/clip-vit-large-patch14\" , clean_up_tokenization_spaces = True)\n",
"model = CLIPModel.from_pretrained(\"openai/clip-vit-large-patch14\")\n",
"#----#\n",
"inputs = tokenizer(text = prompt_A, padding=True, return_tensors=\"pt\")\n",
"text_features_A = model.get_text_features(**inputs)\n",
"text_features_A = text_features_A / text_features_A.norm(p=2, dim=-1, keepdim=True)\n",
"name_A = prompt_A\n",
"#----#\n",
"inputs = tokenizer(text = prompt_B, padding=True, return_tensors=\"pt\")\n",
"text_features_B = model.get_text_features(**inputs)\n",
"text_features_B = text_features_B / text_features_B.norm(p=2, dim=-1, keepdim=True)\n",
"name_B = prompt_B\n",
"#----#\n",
"import torch\n",
"sim_AB = torch.nn.functional.cosine_similarity(text_features_A, text_features_B)\n",
"#----#\n",
"print(f'The similarity between the text_encoding for A:\"{prompt_A}\" and B: \"{prompt_B}\" is {round(sim_AB.item()*100,2)} %')"
],
"metadata": {
"id": "QQOjh5BvnG8M",
"collapsed": true,
"cellView": "form"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You can write an url or upload a file locally from your device to use as reference. The image will by saved in the 'sd_tokens' folder. Note that the 'sd_tokens' folder will be deleted upon exiting this runtime."
],
"metadata": {
"id": "hyK423TQCRup"
}
},
{
"cell_type": "markdown",
"source": [
"\n",
"\n",
"# How does this notebook work?\n",
"\n",
"Similiar vectors = similiar output in the SD 1.5 / SDXL / FLUX model\n",
"\n",
"CLIP converts the prompt text to vectors (“tensors”) , with float32 values usually ranging from -1 to 1.\n",
"\n",
"Dimensions are \\[ 1x768 ] tensors for SD 1.5 , and a \\[ 1x768 , 1x1024 ] tensor for SDXL and FLUX.\n",
"\n",
"The SD models and FLUX converts these vectors to an image.\n",
"\n",
"This notebook takes an input string , tokenizes it and matches the first token against the 49407 token vectors in the vocab.json : [https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/tokenizer](https://www.google.com/url?q=https%3A%2F%2Fhuggingface.co%2Fblack-forest-labs%2FFLUX.1-dev%2Ftree%2Fmain%2Ftokenizer)\n",
"\n",
"It finds the “most similiar tokens” in the list. Similarity is the theta angle between the token vectors.\n",
"\n",
"<div>\n",
"<img src=\"https://huggingface.co/datasets/codeShare/sd_tokens/resolve/main/cosine.jpeg\" width=\"300\"/>\n",
"</div>\n",
"\n",
"The angle is calculated using cosine similarity , where 1 = 100% similarity (parallell vectors) , and 0 = 0% similarity (perpendicular vectors).\n",
"\n",
"Negative similarity is also possible.\n",
"\n",
"# How can I use it?\n",
"\n",
"If you are bored of prompting “girl” and want something similiar you can run this notebook and use the “chick” token at 21.88% similarity , for example\n",
"\n",
"You can also run a mixed search , like “cute+girl”/2 , where for example “kpop” has a 16.71% similarity\n",
"\n",
"There are some strange tokens further down the list you go. Example: tokens similiar to the token \"pewdiepie</w>\" (yes this is an actual token that exists in CLIP)\n",
"\n",
"<div>\n",
"<img src=\"https://lemmy.world/pictrs/image/a1cd284e-3341-4284-9949-5f8b58d3bd1f.jpeg\" width=\"300\"/>\n",
"</div>\n",
"\n",
"Each of these correspond to a unique 1x768 token vector.\n",
"\n",
"The higher the ID value , the less often the token appeared in the CLIP training data.\n",
"\n",
"To reiterate; this is the CLIP model training data , not the SD-model training data.\n",
"\n",
"So for certain models , tokens with high ID can give very consistent results , if the SD model is trained to handle them.\n",
"\n",
"Example of this can be anime models , where japanese artist names can affect the output greatly. \n",
"\n",
"Tokens with high ID will often give the \"fun\" output when used in very short prompts.\n",
"\n",
"# What about token vector length?\n",
"\n",
"If you are wondering about token magnitude,\n",
"Prompt weights like (banana:1.2) will scale the magnitude of the corresponding 1x768 tensor(s) by 1.2 . So thats how prompt token magnitude works.\n",
"\n",
"Source: [https://huggingface.co/docs/diffusers/main/en/using-diffusers/weighted\\_prompts](https://www.google.com/url?q=https%3A%2F%2Fhuggingface.co%2Fdocs%2Fdiffusers%2Fmain%2Fen%2Fusing-diffusers%2Fweighted_prompts)\\*\n",
"\n",
"So TLDR; vector direction = “what to generate” , vector magnitude = “prompt weights”\n",
"\n",
"# How prompting works (technical summary)\n",
"\n",
" 1. There is no correct way to prompt.\n",
"\n",
"2. Stable diffusion reads your prompt left to right, one token at a time, finding association _from_ the previous token _to_ the current token _and to_ the image generated thus far (Cross Attention Rule)\n",
"\n",
"3. Stable Diffusion is an optimization problem that seeks to maximize similarity to prompt and minimize similarity to negatives (Optimization Rule)\n",
"\n",
"Reference material (covers entire SD , so not good source material really, but the info is there) : https://youtu.be/sFztPP9qPRc?si=ge2Ty7wnpPGmB0gi\n",
"\n",
"# The SD pipeline\n",
"\n",
"For every step (20 in total by default) for SD1.5 :\n",
"\n",
"1. Prompt text => (tokenizer)\n",
"2. => Nx768 token vectors =>(CLIP model) =>\n",
"3. 1x768 encoding => ( the SD model / Unet ) =>\n",
"4. => _Desired_ image per Rule 3 => ( sampler)\n",
"5. => Paint a section of the image => (image)\n",
"\n",
"# Disclaimer /Trivia\n",
"\n",
"This notebook should be seen as a \"dictionary search tool\" for the vocab.json , which is the same for SD1.5 , SDXL and FLUX. Feel free to verify this by checking the 'tokenizer' folder under each model.\n",
"\n",
"vocab.json in the FLUX model , for example (1 of 2 copies) : https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/tokenizer\n",
"\n",
"I'm using Clip-vit-large-patch14 , which is used in SD 1.5 , and is one among the two tokenizers for SDXL and FLUX : https://huggingface.co/openai/clip-vit-large-patch14/blob/main/README.md\n",
"\n",
"This set of tokens has dimension 1x768. \n",
"\n",
"SDXL and FLUX uses an additional set of tokens of dimension 1x1024.\n",
"\n",
"These are not included in this notebook. Feel free to include them yourselves (I would appreciate that).\n",
"\n",
"To do so, you will have to download a FLUX and/or SDXL model\n",
"\n",
", and copy the 49407x1024 tensor list that is stored within the model and then save it as a .pt file.\n",
"\n",
"//---//\n",
"\n",
"I am aware it is actually the 1x768 text_encoding being processed into an image for the SD models + FLUX.\n",
"\n",
"As such , I've included text_encoding comparison at the bottom of the Notebook.\n",
"\n",
"I am also aware thar SDXL and FLUX uses additional encodings , which are not included in this notebook.\n",
"\n",
"* Clip-vit-bigG for SDXL: https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k/blob/main/README.md\n",
"\n",
"* And the T5 text encoder for FLUX. I have 0% understanding of FLUX T5 text_encoder.\n",
"\n",
"//---//\n",
"\n",
"If you want them , feel free to include them yourself and share the results (cuz I probably won't) :)!\n",
"\n",
"That being said , being an encoding , I reckon the CLIP Nx768 => 1x768 should be \"linear\" (or whatever one might call it)\n",
"\n",
"So exchange a few tokens in the Nx768 for something similiar , and the resulting 1x768 ought to be kinda similar to 1x768 we had earlier. Hopefully.\n",
"\n",
"I feel its important to mention this , in case some wonder why the token-token similarity don't match the text-encoding to text-encoding similarity.\n",
"\n",
"# Note regarding CLIP text encoding vs. token\n",
"\n",
"*To make this disclaimer clear; Token-to-token similarity is not the same as text_encoding similarity.*\n",
"\n",
"I have to say this , since it will otherwise get (even more) confusing , as both the individual tokens , and the text_encoding have dimensions 1x768.\n",
"\n",
"They are separate things. Separate results. etc.\n",
"\n",
"As such , you will not get anything useful if you start comparing similarity between a token , and a text-encoding. So don't do that :)!\n",
"\n",
"# What about the CLIP image encoding?\n",
"\n",
"The CLIP model can also do an image_encoding of an image, where the output will be a 1x768 tensor. These _can_ be compared with the text_encoding.\n",
"\n",
"Comparing CLIP image_encoding with the CLIP text_encoding for a bunch of random prompts until you find the \"highest similarity\" , is a method used in the CLIP interrogator : https://huggingface.co/spaces/pharmapsychotic/CLIP-Interrogator\n",
"\n",
"List of random prompts for CLIP interrogator can be found here, for reference : https://github.com/pharmapsychotic/clip-interrogator/tree/main/clip_interrogator/data\n",
"\n",
"The CLIP image_encoding is not included in this Notebook.\n",
"\n",
"If you spot errors / ideas for improvememts; feel free to fix the code in your own notebook and post the results.\n",
"\n",
"I'd appreciate that over people saying \"your math is wrong you n00b!\" with no constructive feedback.\n",
"\n",
"//---//\n",
"\n",
"Regarding output\n",
"\n",
"# What are the </w> symbols?\n",
"\n",
"The whitespace symbol indicate if the tokenized item ends with whitespace ( the suffix \"banana</w>\" => \"banana \" ) or not (the prefix \"post\" in \"post-apocalyptic \")\n",
"\n",
"For ease of reference , I call them prefix-tokens and suffix-tokens.\n",
"\n",
"Sidenote:\n",
"\n",
"Prefix tokens have the unique property in that they \"mutate\" suffix tokens\n",
"\n",
"Example: \"photo of a #prefix#-banana\"\n",
"\n",
"where #prefix# is a randomly selected prefix-token from the vocab.json\n",
"\n",
"The hyphen \"-\" exists to guarantee the tokenized text splits into the written #prefix# and #suffix# token respectively. The \"-\" hypen symbol can be replaced by any other special character of your choosing.\n",
"\n",
" Capital letters work too , e.g \"photo of a #prefix#Abanana\" since the capital letters A-Z are only listed once in the entire vocab.json.\n",
"\n",
"You can also choose to omit any separator and just rawdog it with the prompt \"photo of a #prefix#banana\" , however know that this may , on occasion , be tokenized as completely different tokens of lower ID:s.\n",
"\n",
"Curiously , common NSFW terms found online have in the CLIP model have been purposefully fragmented into separate #prefix# and #suffix# counterparts in the vocab.json. Likely for PR-reasons.\n",
"\n",
"You can verify the results using this online tokenizer: https://sd-tokenizer.rocker.boo/\n",
"\n",
"<div>\n",
"<img src=\"https://lemmy.world/pictrs/image/43467d75-7406-4a13-93ca-cdc469f944fc.jpeg\" width=\"300\"/>\n",
"<img src=\"https://lemmy.world/pictrs/image/c0411565-0cb3-47b1-a788-b368924d6f17.jpeg\" width=\"300\"/>\n",
"<img src=\"https://lemmy.world/pictrs/image/c27c6550-a88b-4543-9bd7-067dff016be2.jpeg\" width=\"300\"/>\n",
"</div>\n",
"\n",
"# What is that gibberish tokens that show up?\n",
"\n",
"The gibberish tokens like \"ðŁĺħ\\</w>\" are actually emojis!\n",
"\n",
"Try writing some emojis in this online tokenizer to see the results: https://sd-tokenizer.rocker.boo/\n",
"\n",
"It is a bit borked as it can't process capital letters properly.\n",
"\n",
"Also note that this is not reversible.\n",
"\n",
"If tokenization \"😅\" => ðŁĺħ</w>\n",
"\n",
"Then you can't prompt \"ðŁĺħ\" and expect to get the same result as the tokenized original emoji , \"😅\".\n",
"\n",
"SD 1.5 models actually have training for Emojis.\n",
"\n",
"But you have to set CLIP skip to 1 for this to work is intended.\n",
"\n",
"For example, this is the result from \"photo of a 🧔🏻♂️\"\n",
"\n",
"\n",
"<div>\n",
"<img src=\"https://lemmy.world/pictrs/image/e2b51aea-6960-4ad0-867e-8ce85f2bd51e.jpeg\" width=\"300\"/>\n",
"</div>\n",
"\n",
"A tutorial on stuff you can do with the vocab.list concluded.\n",
"\n",
"Anyways, have fun with the notebook.\n",
"\n",
"There might be some updates in the future with features not mentioned here.\n",
"\n",
"//---//\n",
"\n",
"https://codeandlife.com/2023/01/26/mastering-the-huggingface-clip-model-how-to-extract-embeddings-and-calculate-similarity-for-text-and-images/"
],
"metadata": {
"id": "njeJx_nSSA8H"
}
}
]
} |