---
license: cc-by-nc-nd-4.0
datasets:
- laion/conceptual-captions-12m-webdataset
- CaptionEmporium/coyo-hd-11m-llavanext
- KBlueLeaf/danbooru2023-metadata-database
- graph-based-captions/GBC10M
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/TITPOP-200M-dev-GGUF
This is a quantized version of [KBlueLeaf/TITPOP-200M-dev](https://huggingface.co/KBlueLeaf/TITPOP-200M-dev) created using llama.cpp.

# Original Model Card

# [WIP] TITPOP

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630593e2fca1d8d92b81d2a1/ahcLBZhlELKGfue1GTFs8.png)

### What is this
TITPOP is a tool that extends, generates, and refines input prompts for T2I models.
<br>It works with both Danbooru tags and natural language, which means you can use it with almost all existing T2I models.
<br>You can think of it as a "pro max" version of [DTG](https://huggingface.co/KBlueLeaf/DanTagGen-delta-rev2)

### Training Details
* Model Arch: LLaMA
* Size: 200M parameters
* Training Data:
  * Danbooru Metadata: 7.8M entries
  * CC12M/GBC10M: around 11M entries
  * Coyo11M: around 11M entries
* Training Procedure:
  * Danbooru + CC12M: 5 epochs
  * Danbooru: 1 epoch
  * Danbooru + CC12M + Coyo11M: 3 epochs (currently at 2 epochs, still training)
* Tokens Seen: currently 35B tokens
* Time Cost: around 2~3 weeks on 4x RTX 3090

### How to use this model?
Although the official inference code, with lots of formatting and automatic features, is private for now,<br>
you can still build your own inference interface based on the format below:
```
quality: masterpiece
aspect ratio: 1.0
target: <|short|> <|tag_to_long|>
tag: 1girl, solo, dragon girl, dragon horns, dragon tail
```
Then you will get output like:
```
quality: masterpiece
aspect ratio: 1.0
target: <|short|> <|tag_to_long|>
tag: 1girl, solo, dragon girl, dragon horns, dragon tail, smile, ponytail, cleavage cutout, pointy ears, large breasts, black dress, white background, thighhighs, bare shoulders, tail, breasts, clothing cutout, simple background, blonde hair, long hair, blue eyes, looking at viewer, horns,
long: A young woman with blonde hair and cat ears on her head. she is wearing a black outfit with gold accents and has a sword in her right hand. the woman is sitting on top of a large orange snake that is coiled around her body. the snake appears to be attacking her, as if it is attacking her.
```
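For quick experiments, the input format above can be assembled programmatically. The sketch below uses my own naming (`build_titpop_prompt` is not part of any official API); only the `quality:`/`aspect ratio:`/`target:`/`tag:` layout and the `<|...|>` special tokens come from the example above. The resulting string can then be fed to any runtime that loads the GGUF file.

```python
# Minimal sketch of a TITPOP prompt builder. Function and argument names are
# hypothetical; only the field layout mirrors the example in this card.
def build_titpop_prompt(tags, quality="masterpiece", aspect_ratio=1.0,
                        length="short", mode="tag_to_long"):
    """Format a TITPOP input prompt string.

    `length` and `mode` are rendered as <|...|> special tokens in the
    `target:` field, matching the example above.
    """
    target = f"<|{length}|> <|{mode}|>" if mode else f"<|{length}|>"
    return (
        f"quality: {quality}\n"
        f"aspect ratio: {aspect_ratio:.1f}\n"
        f"target: {target}\n"
        f"tag: {', '.join(tags)}"
    )

prompt = build_titpop_prompt(
    ["1girl", "solo", "dragon girl", "dragon horns", "dragon tail"]
)
print(prompt)  # reproduces the input example above
```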

All the supported modes are:
```
None  # tags only, DTG mode
tag_to_long
long_to_tag
short_to_long
short_to_tag
tag_to_short_to_long
short_to_tag_to_long
short_to_long_to_tag
```
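If you build your own interface, it may help to validate the mode up front. The sketch below assumes, as my own extrapolation from the example prompt rather than something this card states, that each mode maps to a single `<|mode|>` special token in the `target:` field, with `None` meaning tags-only/DTG mode.

```python
# Hypothetical helper: check a mode against the list above and render the
# `target:` field. The one-token-per-mode mapping is an assumption.
SUPPORTED_MODES = {
    None,  # tags only, DTG mode
    "tag_to_long", "long_to_tag",
    "short_to_long", "short_to_tag",
    "tag_to_short_to_long", "short_to_tag_to_long", "short_to_long_to_tag",
}

def mode_to_target(mode, length="short"):
    """Render the `target:` field for a given mode and length bucket."""
    if mode not in SUPPORTED_MODES:
        raise ValueError(f"unsupported mode: {mode!r}")
    if mode is None:
        return f"<|{length}|>"  # tags-only: no mode token
    return f"<|{length}|> <|{mode}|>"

print(mode_to_target("tag_to_long"))  # prints "<|short|> <|tag_to_long|>"
```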

### Brief Explanation of Possible "Weird" Output
The model is trained on what we use for training T2I models, which is basically captions from VLMs.
<br>Since these VLMs have lots of different hallucinations, this model will also generate some content that looks like hallucination.
<br>But since the T2I models we want to use were also trained on this kind of data, they can still generate decent images, or even better ones.

For example:
* Lots of animal-ear/horn features are captioned as "cat ears" by most VLMs, including GPT-4o and Claude 3.5 Sonnet.

So if you get some weird output that appears to conflict with the tags, try generating an image from it first.
<br>You should treat the natural-language part as a "different English", since that's what we currently use for T2I...

### Why is the inference code private? When will it be open sourced?
1. This model/tool is still under development; it is currently an early alpha version.
2. I'm doing some research and projects based on it.
3. The model is currently released under the CC-BY-NC-ND license. If you are interested, you can implement inference yourself.
4. Once the projects/research are done, I will open source all of these models and code under the Apache-2.0 license.

### Citation
```bibtex
@misc{TITPOP2024,
    author       = {Shih-Ying Yeh},
    title        = {TITPOP: Text to Image with Text Presampling for Optimal Prompting},
    howpublished = {\url{https://huggingface.co/KBlueLeaf/TITPOP-200M-dev}},
    year         = {2024},
    note         = {Still under development},
}
```