Gradio 4.x version available.
https://huggingface.co/spaces/John6666/t2i-multi-demo
I modified Gradio's load function when I was building this with reference to ToyWorld, so I backported it and got the following spaces working in 4.x.
It's not perfect, but it works anyway.
https://huggingface.co/spaces/John6666/ToyWorld4
https://huggingface.co/spaces/John6666/PrintingPress4
Thanks, I'll try them out and hopefully upgrade at last!
Note that the format of launch()'s concurrency limit (and related settings) has changed, and the style= parameter for Column has been eliminated.
The implementation just comments out Gradio 4.x's attempt to fetch Examples.
So model loading may even be faster than in 3.x.
I tried to create my own custom 3.x version of Gradio that just fixed the security vulnerabilities and left what already worked untouched, and to load it in Spaces, but I'm not a coder or programmer. If such a thing existed, we wouldn't need to make compromises.
I'm an amateur as well, and I'm not familiar with GUI and networking implementations, so I couldn't find a way to safely load 3.x Spaces.
The 4.x and 2.x versions seem to work by creating an endpoint, but the 3.x code for loading Spaces seemed to copy the entire contents (so I had to copy and paste PromptExtend manually).
Well, if it doesn't work, just put it back.
Thanks, works for me! PrintingPress is finally on Gradio 4! I think the most important improvement is being able to use the scrollbars now: previously, clicking on the scrollbar would kill the dropdown, and the only way to fix it was to refresh the page. Can't believe it was like that for so long.
For ToyWorld I guess it's time to kill that UI for Gradio 4; it'd be much faster to move on than to spend hours guessing at the new CSS values, and it'll probably work better anyway.
Good!
I guess bringing Gradio 4 to a space like this: https://huggingface.co/spaces/Yntec/Diffusion80XX would not be possible, because they removed max_choices from CheckboxGroup? I hope 4 is better at other things, because all I see is the removal of useful features!
The following will limit the number of selections itself, but I wonder why they bothered to break compatibility.
It's not like it's a feature that creates a vulnerability.
Instead of "then", we can use "success".
("success" should wait until the process finishes successfully, but it will not be parallelized instead.)
# Gradio 3.x version (max_choices is no longer available in 4.x):
#model_choice = gr.CheckboxGroup(models, label = f'Choose up to {num_models} different models from the 866 available!', value = default_models, multiselect = True, max_choices = num_models, interactive = True, filterable = False)
#model_choice.change(update_imgbox, model_choice, output)
#model_choice.change(extend_choices, model_choice, current_models)
# Gradio 4.x version: clamp the selection to num_models ourselves, then chain the updates:
model_choice = gr.CheckboxGroup(models, label = f'Choose up to {num_models} different models from the 866 available!', value = default_models, interactive = True)
model_choice.change(lambda x: x[0:num_models], [model_choice], [model_choice])\
    .then(update_imgbox, model_choice, output)\
    .then(extend_choices, model_choice, current_models)
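For comparison, a minimal sketch of the same chain with "success" instead of "then" (each step then waits for the previous one to finish without error, at the cost of running serially):
model_choice.change(lambda x: x[0:num_models], [model_choice], [model_choice])\
    .success(update_imgbox, model_choice, output)\
    .success(extend_choices, model_choice, current_models)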
Also, I'm not sure I have the order of update_imgbox and extend_choices right. The opposite might be correct.
Okay, it worked.
Today is one of those days when I'm getting a lot of build failure errors, but this went off without a hitch.
Gradio 4.x has been fundamentally reworked from 3.x, and quite a few features were dropped in the process; many elements are missing, and some new ones have been added.
https://huggingface.co/spaces/John6666/Diffusion80XX4
Thanks! That was quick! Any way to keep the model selection in a fixed place instead of jumping around? When one selects fewer models it jumps up because the image boxes disappear, and when one adds more back it jumps down again; that has been bugging me since the beginning. derwahnsinn's solution was to keep them side by side, but they look so tiny one has to download them to enjoy them full size. If not, maybe update the number of images only after one clicks the generate button, but leave them alone while selecting models? (Instead of having all images disappear while one is selecting models.)
Ah, I misunderstood the original operation.
It would be easier if we could just clamp the selection to 6 on conversion, instead of having the selection itself cancelled.
I wonder if it would be better to keep the latest six.
Since it's a bit difficult to pick the newest ones (there's no information on which selection is newest, so we'd have to track it ourselves), I decided to use the same behavior as the original: the first 6 in the list.
We can make improvements if necessary.
There is a way to fix the location of images using Gradio's Gallery component, but multiple images generated at the same time don't look as good there as in the Image component...
I guess the best way is to specify elem_classes and do something with CSS like this.
I can't think of the right CSS for it, though...
CSS="""
.output { width=480px; height=480px; !important; }
"""
with gr.Blocks(css=CSS) as demo:
~~~
output = [gr.Image(label = m, min_width=480, show_download_button=True, elem_classes="output", show_share_button=True) for m in default_models]
Well, the queue between images was a deal breaker for me, because a model may take 400 seconds to time out and not show an image (if you unload the 6 preselected ones and pick 6 unloaded ones from the bottom). In the old one you'd still get an image from the other 5 models, but now you go on to the second model and may wait 400 more seconds to see it time out! (Bringing the maximum wait time to 18 minutes.) So it's more efficient to open 6 tabs and load the PrintingPress in them with different models... So I've saved the old ToyWorld UI at blitz_diffusion, killed the ToyWorld UI to make a backup of Diffusion80XX's old behavior over there, and implemented your changes so this one can run on Gradio 4. I've never seen such a thing since Vega, Balrog and M.Bison switched names!
I appreciate your heroism; I didn't know where to even start, and it's great to see how it's done.
I've just suffered through Gradio 4.x one step ahead of you!
By the time I started playing with AI, it was already 4.x.
https://www.gradio.app/guides/setting-up-a-demo-for-maximum-performance
That said, it would probably be smartest if we could set a timeout on the huggingface_hub.InferenceClient that is called inside gr.load().
Gradio itself does not seem to have the ability to handle threads or processes that have started running, which means we either manage them all on our own or use another approach to stop the process.
https://huggingface.co/docs/huggingface_hub/package_reference/inference_client
externalmod.py, line 117:
# Before (no timeout, so a stuck model holds its slot indefinitely):
# client = huggingface_hub.InferenceClient(
#     model=model_name, headers=headers, token=hf_token,
# )
# After (give up on the request after 60 seconds):
client = huggingface_hub.InferenceClient(
    model=model_name, headers=headers, token=hf_token, timeout=60,
)
A timeout was usually in effect, but it wasn't as effective for this purpose as I'd hoped.
Maybe we should just go ahead and try 6 Galleries in place of the Images? Then we could allow a series of button presses:
trigger_mode="multiple", concurrency_limit=5,
I've completed the 6 Galleries version, it's efficient but looks bad.
https://huggingface.co/spaces/John6666/Diffusion80XX4g
Switching between this and the normal version is just a matter of changing these three (or six?) lines:
#output = [gr.Image(label = m, show_download_button=True, elem_classes="output", show_share_button=True) for m in default_models]
output = [gr.Gallery(label = m, show_download_button=True, elem_classes="output", interactive=False, show_share_button=True, container=True, format="png", object_fit="contain") for m in default_models]
~
#gen_event = gen_button.click(gen_fn, [m, txt_input], o)
gen_event = gen_button.click(gen_fn_gallery, [m, txt_input, o], o)
~
#model_choice.change(update_imgbox, model_choice, output)
model_choice.change(update_imgbox_gallery, model_choice, output)
I've completed the 6 Galleries version, it's efficient
That doesn't seem to be the case: despite all boxes apparently starting at the same time, they're taking as long as in the Gradio 4 version that queues them, and one may time out in 1 minute, the next in 2 minutes, etc. The queue is still there; it's just hidden. Compare it to here: https://huggingface.co/Spaces/Yntec/ToyWorld where, if the 6 models don't time out, you get 6 images in 10 seconds and all appear simultaneously; the extra 3 minutes is for unloaded models.
Also, images may not be getting shown: a model may take 300 seconds to load and another 10 to make an image. On Gradio 3 you'd eventually see it just by waiting instead of getting an error; as long as the timer kept going up, there was hope. Here that's beyond the 60-second timeout, so even if the image is made we don't get to see it, or the user assumes it's not coming and moves on.
It feels like we're downgrading!
Thanks for your help and your work, but it's time for me to admit defeat: nobody is going to use the new version of Diffusion80XX while ToyWorld has the wanted behavior without problems. Gradio 4 clearly isn't there yet; going forward, a custom Gradio 3 with only the vulnerabilities patched seems the way to go, because if it's not broken, don't fix it. I started maintaining these spaces because it was exciting and fun; the new problems introduced for no reason aren't fun. I don't think I'll be using my Gradio 4 spaces, they suffered from this: https://en.wikipedia.org/wiki/Enshittification and I can see why Omnibus and all the original maintainers of these spaces just quit or moved on to other things.
Yes, the Queue itself is not canceled.
The Gallery version is meant for firing a series of button presses.
The advantage is that the images accumulate without being overwritten by successive clicks, but it's hard to understand and the UI looks terrible.
I don't mind working to find a loophole, but it would be great if Gradio and HF upgraded in a more straightforward direction.
So I implemented my own timeout process, since there is little chance of upstream improvement anyway, however much we lament it.
I am not very familiar with asynchronous processing, so it may not work well at first.
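For what it's worth, a minimal sketch of the idea, wrapping the blocking inference call with asyncio.wait_for (infer here is a placeholder for the actual generation function, not the space's real code):

import asyncio

async def infer_with_timeout(prompt, timeout=60):
    loop = asyncio.get_running_loop()
    try:
        # Give up on the result after `timeout` seconds. Note the worker
        # thread itself keeps running; we just stop waiting for it.
        return await asyncio.wait_for(
            loop.run_in_executor(None, infer, prompt), timeout=timeout)
    except asyncio.TimeoutError:
        return None  # show an empty box instead of blocking the queue forever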
Someone said "Gradio can do the hard stuff easily, but the easy stuff is hard to do" and I agree.
If they would just let me kill the Queue by some means, that would be all.
https://huggingface.co/spaces/John6666/Diffusion80XX4
https://huggingface.co/spaces/John6666/Diffusion80XX4g
I get really upset because it's another chapter of me dealing with upgrades and losing. I'm still on Windows 7, Chrome 70, Winamp 5.51 and other old versions, and Gradio 3 is like "not again!" Open source couldn't solve it.
Anyway, I can't give up big pictures, but I've added your versions at the top of my Gradio 4 version at https://huggingface.co/spaces/Yntec/Diffusion80XX so people can try them all out and see what solution they prefer. There are 4 versions now, but I guess that's a good option Spaces provide. Huggingface remains the only thing we have and Gradio the only thing that works, so I can only be grateful for everything that exists.
Sometimes there's no substitute for legacy software.
In Microsoft's case, they just add too many extras for business every time, although the kernel of their new OS is not bad.
The initial setup is too much work, though...
https://github.com/Raphire/Win11Debloat
The best way to get large images would be to use just one Gallery (or some similar custom component) for the multiple-image display.
But so far, it is hopelessly incompatible with the parallel processing of HF's resources.
I guess we'll just have to experiment haphazardly until we come up with a better way.
Therefore, I made a composition of 6 images and 1 gallery.
If I set visible=False on the images, it would effectively be 1 gallery, but I left them visible for now because it's more fun to see the work in progress.
(If I fold the gallery into the generation process, I get a progress bar, but the images aren't visible until the whole process ends.)
The layout may need some modification.
https://huggingface.co/spaces/John6666/Diffusion80XX4sg
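Roughly, the composition looks like this (a sketch, not the exact space code; default_models stands in for the real list):

import gradio as gr

default_models = ["model-a", "model-b"]  # placeholder; the space uses 6

with gr.Blocks() as demo:
    txt_input = gr.Textbox(label="Your prompt")
    gen_button = gr.Button("Generate")
    # Six visible per-model Image boxes, so users can watch the progress...
    output = [gr.Image(label=m, interactive=False) for m in default_models]
    # ...plus one shared Gallery where every finished image accumulates.
    gallery = gr.Gallery(label="Output", interactive=False, columns=2)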
Oh yeah, that is clearly better than your other two versions! If you could somehow clear the gallery when the Generate button is pressed again, so that old generations don't bury the new ones, and keep the model selection from moving further down the more images you generate, I think this could be it! People generate hundreds of images, and after a while, scrolling down past them to get to the new ones or to select models could be problematic!
The former was easy: it was done by simply changing "append" to "insert" (see the sketch after the config below).
The latter should have been easy too, but maybe this is a Gradio or browser CSS-related bug: the scrollbars disappear when the Gallery height is specified.
The current configuration looks like this.
CSS="""
.output { width=120px; height=120px; !important; }
.gallery { width=100%; height=512px; max_height=512px !important; }
"""
gallery = gr.Gallery(label="Output", show_download_button=True, elem_classes="gallery",
interactive=False, show_share_button=True, container=True, format="png",
object_fit="cover", columns=2) # , height=768
Very nice, you did it! I implemented your changes, increased the timeout to 300 (because images taking more than 2 minutes to generate were never getting shown), linked to your more complete version at the top, and rebranded it as https://huggingface.co/spaces/Yntec/HuggingfaceDiffusion ! Cheers!
We did it!
I formatted the code a little bit; it should be easier to make changes now.
https://huggingface.co/spaces/John6666/Diffusion80XX4sg
Heh, I feel like all I did was point and critique; I need to work on my impostor syndrome! Applied your changes. It turns out I was counting digiplay/WhiteDreamyHillMix twice, and my spaces have never actually been at 874 models yet! But with this I won't be off by 1 anymore (as soon as I actually add the 874th model I claim to have).
Python's list type doesn't enforce uniqueness; duplicates are perfectly normal. That being said, I think you're right that the root problem is the list being edited by hand.
There are models that seem to work but don't work with the API, popular models exist in multiple versions, and it would be a lot of work to automate it all.
(You may be able to find simple duplicates by looking at the hash of the UNet, but if they're quantized or LoRA'd that's useless, and since the seed isn't fixed in the API, it's hard to tell by generating test images.)
And in the end, there is no way to settle the issue of personal preference.
You might as well just write 800+ or 700+.
Or I can just give people more models than I announce. It's not nice when the title says "874 models" and you get only 873, but it's fine when you announce "888" and give 889. This just happened accidentally; I still don't know where the discrepancy comes from (apparently I fail at counting 1 by 1!), but this new code that counts the models has been very useful.
Indeed! I'll make some code to count the elements of the list later.
To be honest, there are a few spaces here and there where I have an unmanageable number of models myself...
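(A sketch of what that counting code might look like, assuming the list lives in an all_models.py as in these spaces:)

from collections import Counter
from all_models import models  # the list defined in all_models.py

print(f"{len(models)} entries, {len(set(models))} unique")
for name, count in Counter(models).items():
    if count > 1:
        print(f"Duplicate: {name} x{count}")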
The funny thing is 95% of my users don't ever switch models: they use the top one or the 6 preselected ones from ToyWorld/HuggingfaceDiffusion, and when a model is in the seventh spot, its usage tanks. A few months ago I could have said I could prune 850 models from my spaces and it would barely make a difference, but I guess these became the only way to use most of these models after the Inference API on their pages stopped opening at all unless a space like this uses them.
It's simply visually interesting. I guess that's partly why users are somewhat satisfied with it.
The Serverless Inference API practically doesn't work for people who just want to generate pictures, and I have a lot of trouble testing models before publishing them... I hardly even test them anymore. It makes uploading a breeze, though!
If it's a server-load issue for HF as a whole, they could easily keep it working for 30 minutes or so after the upload...
In HF's recent call for requests in the Posts feed, there were quite a few Serverless-related requests.
https://huggingface.co/posts/victor/964839563451127
That said, since it happens to be related to the 6 preselected models: I've added a random-pick feature.
Also, I forgot to tell you: when I was messing with LLM-related things, I found out that I could pass detailed parameters. (As usual, it was a change of a few lines, or rather a few characters...)
In SD, it's things like negative prompts, width, height, number of steps, etc.
So, I've added all sorts of functions.
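Concretely, it's just passing the extra arguments that huggingface_hub's InferenceClient.text_to_image already accepts; a minimal sketch (the model name is a placeholder):

from huggingface_hub import InferenceClient

client = InferenceClient(model="Yntec/AnythingV7", timeout=60)  # placeholder model
image = client.text_to_image(
    "a pizza",
    negative_prompt="bad anatomy",
    width=768, height=768,        # output resolution
    num_inference_steps=28,       # number of denoising steps
)
image.save("out.png")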
Since many people may not understand the details, why don't you put a default value in neg_prompt, like value="bad anatomy" or something? It will do no harm.
https://huggingface.co/spaces/John6666/Diffusion80XX4sg
Wow, that discussion thread is like peeking into a different world; I had no idea those discussions were taking place. I guess the thing I miss the most is digiplay uploading models; he's the best curator of models I've seen and set the example I always tried to reach. He stopped when the serverless API became closed by default. "Your model needs to be popular to become popular" seems like a conundrum, and I basically abuse my spaces to promote my models.
In SD, it's things like negative prompts, width, height, number of steps, etc.
Nice, is there any way to implement them without requiring an HF_Token? I guess that's the only reason those features wouldn't be in my spaces: most of the people duplicating them wouldn't know how to set up a token. I'd seen them duplicating Nymbo's spaces and creating non-working spaces because those require a token. It's also curious that my biggest wish has always been seed control: imagine if the 6 images you generate from different models came from the same seed, and you could really see their differences instead of seeing random seeds! Or being able to recreate a picture, but with a small change in the prompt. I always assumed that was Pro-account only; now I no longer know.
why don't you put a default value in neg_prompt, like value="bad anatomy" or something?
In my experience negative prompts are too powerful: having anything in them cuts out a big chunk of the images a model can generate. So I'm actually grateful they weren't available in these spaces originally, so I could find out about it, and I've dedicated myself to making many model versions that perform optimally without needing negative prompts (for instance, my AnythingV7 model's focus was to not require any negative prompt and perform as well as Anything 3/4/4.5/5 with "EasyNegative" in there), though I guess all of the performance looks terrible now when compared with something like Flux, ha!
I had no idea those discussions were taking place,
That's what's wrong with HF: there's no point in a survey if you don't put it in some kind of notice that everyone sees first!
If HF doesn't have a notification feature, they should build one first.
He stopped when the serverless API became closed by default,
Ack, I'm sad too, because I'm the one who started out by copying Yntec, Digiplay, Niggender and a few more.
"your model needs to be popular to become popular" seems like a conundrum and I basically abuse my spaces to promote my models.
We'll just have to wait for someone to discover it on another social networking site, forum, or Discord and get the buzz going.
Serverless was a nice feature to help with that...
Nice, is there any way to implement them without requiring an HF_Token?
Huh? You don't need tokens; I used the same technique as in the HF Diffusion space the other day.
The HF_TOKEN line in that space's code was in response to a request from someone who wanted to try out a model kept in a private repo (one strictly forbidden from being reposted); it isn't used for anything else.
Unfortunately, there is no way to specify a seed.
(If you do it in Zero GPU space, of course you can, since it's the same as on a PC, but that's not what we're talking about.)
It's not originally in the Serverless Inference API functionality. I'll just make a request later.
though I guess all of the performance looks terrible now when compared with something like Flux, ha!
The vast majority of the world's population has yet to even play a little with generative AI. Also, I live in Japan, where the penetration rate of generative AI is by far the lowest among the 30+ richest countries. (China is over 95% and Japan is a whopping 60%!)
Japan has small houses and not much of a culture of buying cheap PCs (we could get one from AliExpress or Amazon if we wanted to...). 70% of the population is either too poor or too busy to do anything but use their smartphones.
Many companies are having a hard time finding enough people who can use a PC.
What I'm trying to say, besides griping about the Japanese environment, is that the SD 1.5-era models are not as good in performance, but they have the potential to work well on smartphones with only 32GB or 64GB of storage, on non-powerful PCs, and on the Raspberry Pi, and there is a movement to re-evaluate them!
I'm not familiar with it, but was it MLX? I think they get quantized for it or something like that.
Also, I forget where, but there are several groups trying out enhanced architectures for SD and SDXL, and it looks like UNet will at least remain useful in the future.
All right! We've got a tool that won't help most people.
The input can be Python or plain text: if you copy and paste a list of model names and hit Submit, the tool returns the errors and the Python code with the offending lines commented out.
https://huggingface.co/spaces/John6666/model_list_checker
Example:
https://huggingface.co/spaces/John6666/Diffusion80XX4sg/blob/main/all_models.py
↓
Error:
Duplicate elements in ' "digiplay/CampurSari_Gen1",'
Duplicate elements in ' "digiplay/WhiteDreamyHillMix_v1", #220'
Repo doesn't exist in ' "xyn-ai/anything-v4.0", #"andite/anything-v4.0",'
Repo doesn't exist in ' "etherealxx/systemy-csrmodel-cutesexyrobutts", #"andite/cutesexyrobutts-diffusion",'
Repo doesn't exist in ' "sd-dreambooth-library/true-guweiz-style", # "andite/guweiz-diffusion",'
I have a lot of trouble testing models before publishing them...
Well, since I can't release merged models without checking what they do, I had to use a private space like this one: https://huggingface.co/spaces/Yntec/Anything7.0-Webui-CPU but with SuperMerger and other A1111 extensions that let me test models. It takes 15 minutes to generate an image (unless servers are busy), so testing can take hours, and a model like RetroFunk may never see the light of day. But back when this started, that seemed like a miracle (free unlimited image generations from any model that existed)! I was never convinced by any model after the SD1.5 architecture, and it's actually a relief that people are moving on, because they were releasing models faster than I could test them; this way there's hope that I'll be done one day.
Huh? You don't need tokens
Ah, you did find the buried treasures in the documentation! I'm going to implement those features in my spaces ASAP. (It's funny: some of the models I uploaded were only to allow 768x768 generations over the original 512x512 ones, which is moot if users can specify the dimensions.)
It's not originally in the Serverless Inference API functionality. I'll just make a request later.
I have yet to get used to this magic; that would be incredible, but I never thought to request it!
What I'm trying to say, besides griping about the Japanese environment, is that the SD 1.5-era models are not as good in performance
Here in Mexico the gates of AI image generation were opened with Meta.ai: most people who have a smartphone have WhatsApp to make free calls to each other, and it now includes an AI that lets users request pictures and get them drawn. The model is roughly at the level of SDXL, so that's something, but going to the Bing image generator and making requests over there would be too advanced for people around here. The biggest entry barrier is the language; I'm the only English-speaking guy around my area, and even the English teachers don't know what they're teaching, relying on memorization where you have to remember "the bicycle was blue" as opposed to learning about vehicles and colors.
there are several groups trying out enhanced architectures for SD and SDXL
I'm still dreaming that SD1.5 gets updated to gain the new prompt adherence and performance of Flux! I can make Flux output a nice pizza if I enter several detailed paragraphs about it and tell it exactly what ingredients I want and where they're placed, but in a model like Yntec/Stuff I can just prompt for a pizza and get all that detail! I think Flux will be obsoleted by a model that does both things well; I hope for a model that can read the weights of models of other architectures to improve its outputs, a model built to be backward compatible with all the finetunes and LoRAs from the community.
Thanks, that will be very useful, and it explains the discrepancy as I have another model twice on the list.
Well, since I can't release merged models without checking
I'd love to have the space to try it out with the safetensors file in place. I could build my own, but I ain't got room on my Zero GPU...
It would mean days of swapping totally different programs in and out of the same space.
I'm not wealthy, but I'm not hurting for $10, so I could create multiple accounts, but that's a pain in the ass, right?
Ah, you did find the buried treasures in the documentation! I'm going to implement those features in my spaces ASAP. (It's funny: some of the models I uploaded were only to allow 768x768 generations over the original 512x512 ones, which is moot if users can specify the dimensions.)
I thought SDXL was fixed at 1024x1024, but SD1.5 at 512x512 and SD2 at 768x768, on HF Serverless Inference by default.
Here in Mexico the gates of AI image generation were opened with Meta.ai: most people who have a smartphone have WhatsApp to make free calls to each other, and it now includes an AI that lets users request pictures and get them drawn.
Japan is not there yet...
You can think of this country as North Korea when it comes to copyright regulation. Anyway, the major media companies have a strong say, and generative AI apps are still treated more like underground apps.
And even where they are allowed, there are still feature phones among the elderly (Android in a skin that looks like a feature phone), and there are also young, mechanically illiterate people.
The biggest entry barrier is the language; I'm the only English-speaking guy around my area, and even the English teachers don't know what they're teaching,
Seriously?
I thought the hurdle for English was low in Romance- and Germanic-speaking countries.
In Japan, as has been known worldwide for generations, almost the entire population learns English for at least 6 years (10 if they go to college), yet more than 95% of the population has no clue about English. You could say they're allergic to it.
The language barrier is the biggest problem here as well, which is ultimately why I devote so many resources to tag- and LLM-related issues.
By the way, I also have no clue about listening and speaking. Who would I talk to, and whom would I listen to, in English?
That's the way of life in this country. I can manage reading. For writing, DeepL is my friend. It spits out some bad English if I don't rework it, though.
I'm still dreaming that SD1.5 gets updated to gain the new prompt adherence and performance of Flux!
As for Flux: dev and schnell are both detuned versions to begin with; only Pro isn't. Even SD3 was only released as Medium.
So if there were a Small version, it would probably be around SDXL size, and an even smaller one would be SD1.5-sized.
In short, it is so structurally simple that it could be done tomorrow, depending on demand.
With Diffusers and ComfyUI, those who are a little more knowledgeable could even create their own modified architectures, and I've seen things that look like that on HF.
But it would definitely be fun if it emerged as one of the new de facto standards, not just an individual attempt!
Kolors is very close to that.
Thanks, that will be very useful, and it explains the discrepancy as I have another model twice on the list.
I'm glad it served its purpose.
It was also a useful discovery for me that Gradio's JSON component is surprisingly handy.
I think I was able to create a template for myself that can be applied in various ways.
Hello.
The Serverless Inference API will (eventually) support specifying a seed.
The server side shouldn't work yet, but I've already added the option and the execution path to HF Diffusion and my space, so please update your code in preparation for the official launch.
There is not much difference in the code, so you can probably do it manually.
https://huggingface.co/posts/victor/964839563451127#66d1d7d46accd34f7500d78f
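The change itself is tiny; once the server side is live it should amount to passing a seed through, something like this (a sketch, assuming the installed huggingface_hub version exposes the parameter):

image = client.text_to_image(
    "a pizza",
    seed=42,  # same model + prompt + seed should reproduce the same image
)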
It's coming! https://github.com/huggingface/api-inference-community/pull/450
Support seed in diffusion models #450
Thanks, that's great news!
Well, the only people who would understand the value of seeds are artists and people who enjoy working with models (like us).
Now if we can get an upscaler and ADetailer working with the Inference API, then we can all use cloud services to draw images for release. (If you can draw, that is. I can't, because I'm not artistic.)
If you can draw
Oh, haha! Sorta? Back in 2012 I became addicted to Drawception and spent the next 9 years drawing things across several accounts:
https://drawception.com/player/213913/red-panda/drawings/1/
https://drawception.com/player/338581/vytron/drawings/1/
https://drawception.com/player/373957/ovyron/drawings/1/
https://drawception.com/player/373961/blue-panda/drawings/1/
https://drawception.com/player/509114/slothbert/drawings/1/
It's a game quite similar to AI image generation, where you request a drawing from a human, but then the point becomes for someone else to describe that drawing, which becomes the new prompt to draw, and then we see a fully completed game that is like broken telephone but with pictures. At the start it was the funniest thing I've ever seen, but eventually it became about the art itself.
First I quit because of the drama about "avatars", where if you had a yellow cat as an avatar some people would go crazy and make you lose your account, accusing you of drawing yourself. Then I quit again because they implemented a virtual currency, the "ducks", that made playing all about the ducks, and that wasn't fun anymore. (Originally it was about leveling up, and playing with players of higher levels produced higher-level games; when that was removed, games with experienced players disappeared and you were always forced to play with the newbies in lower-quality games.) Ironically, my accounts were so old that I had thousands of ducks to compensate, but it didn't feel right to spend more than I was earning, and there was the fear of eventually hitting 0, which ruined it all, because the way to keep your ducks is to not play (or to create yet more new accounts that get plenty of ducks for free...)
My dream was to one day gather all my drawings and make a LoRA out of them so people could draw in my style, but... I reckon someone already put Drawception's images in a dataset and finetuned some model, because https://huggingface.co/Yntec/Emoticons can do a style that is very close:
masterpiece, top quality, best quality, official art, beautiful and aesthetic,8k, best quality, masterpiece, a adorable grey raccoon hugging heart, simple_background, black_background, full_body
(Sans the hugging heart...)
The color banding style was a staple of drawception because of its limited color palette.
cute raccoon!
It's a game quite similar to AI image generation, where you request a drawing from a human, but then the point becomes for someone else to describe that drawing, which becomes the new prompt to draw,
Hmm? I thought I simply had never heard of it on Japanese forums, but there were zero hits in Japanese. It's often called Galapagosization, and that's exactly what this is!
Thanks to that, we do have our elephant tortoises (anime, hentai, etc.), though.
Generally speaking, a search in English yields 10 times more results than one in Japanese (even accounting for the number of speakers, of course).
Instead, image boards, pixiv, and Twitter were all the rage when it came to pictures, I guess. The reason these couldn't become Danbooru is that they only used pictures for interaction and didn't have the culture or functionality of captioning a picture's details. China did it for us instead. Hehehe.
where if you had a yellow cat as an avatar some people would go crazy and make you lose your account accusing you of drawing yourself,
It's scary, especially with people who are immersed in the Internet, because they may really believe that.
Over here, there is a notorious criminal who killed over 30 people in an arson attack because he believed he was being robbed.
a virtual currency, the "ducks" that made playing all about the ducks,
Oh, that's a death flag for a community.
Just to let you know about my AI and Internet environment: my favorite forum is almost dead due to the introduction of unnecessary features around that time, after it was vandalized.
Well, the feature turned out to be a dud, and the forum is barely up and running, at least. It is one of the longest-lived Internet services.
Two big BBSs, their wikis, X, a few news sites, researchers, developers, and a few useful blogs buried in the mass of crap on top of Google: that's about all the Japanese resources on AI. They too generally publish their actual models in English on Civitai or HF (sometimes on Japanese sites or mega.nz). If they do it in Japanese in Japan, copyright holders, secondary creators, and their followers will try to beat them up. (Sometimes I see people get flamed: "He used an AI picture!")
one day gather all my drawings and make a LoRA out of them so people could draw in my style, but... I reckon someone already put Drawception's images in a dataset and finetuned some model,
Me too: half of the characters I initially wanted were already in Animagine 3.1, and for almost all of the other 50% I could easily get models and LoRAs, so I'm ready to generate them just by collecting!
Well, it's been a while since I've had this much motivation to program, and I enjoy collecting them, so I don't mind.