I'm guessing you were fixing an official SD 1.5 case and got an error.
Maybe this will work.
I'm sleepwalking so I could be wrong.

Yntec changed pull request status to merged

Thanks, the space builds and runs fine, but when clicking generate it times out after 5 seconds, so people generate pictures but they're not shown, because by then it has already shown the error and stopped listening. The underlying code is functionally equivalent to what was there before, so if this fix doesn't work I'll need to investigate what's happening.

Nope, still the same problem: it times out after 5 seconds instead of waiting for the picture to be generated, so the picture is generated but the user never gets to see it. I'll investigate after properly releasing the RadiantDiversions model (before its inference API closes and I can't generate samples anymore!)

Something is fundamentally wrong. I'll try to duplicate it on my end in a bit.

I think HF's server setup or status is screwed up!

ValueError: Could not complete request to HuggingFace API, Status Code: 500, Error: unknown error, Warnings: ['CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF', 'There was an inference error: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF']

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1015, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 856, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/home/user/app/app.py", line 1938, in send_it1
    output1=proc1(inputs)
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 812, in __call__
    outputs = utils.synchronize_async(
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 375, in synchronize_async
    return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
    raise return_result
  File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
    result[0] = await coro
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1015, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 856, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 282, in query_huggingface_api
    raise ValueError(
ValueError: Could not complete request to HuggingFace API, Status Code: 500, Error: unknown error, Warnings: ['CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF', 'There was an inference error: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF']
 Your Space is using an old version of Gradio (3.15.0) that is subject to security vulnerabilities. Please update to the latest version.

This may have been the death of Gradio 3.15.0. It works on Gradio 3.23.0, but that one breaks the UI, which is the very thing I'm trying to preserve! This backup by PeepDaSlan9 works: https://huggingface.co/spaces/PeepDaSlan9/B2BMGMT_ToyWorld - I guess if he tries to update it, it will break. (If you duplicate that one, it won't build because runway/stable-diffusion-1.5 won't be found; if you then update the model list so it builds, it'll break like this one.)

I guess I'll let this one die, link to PeepDaSlan9's backup, and accept the loss of the 152 models that can't be used there. I wish I had made a more recent backup myself, heh; really, at the end I'm hanging on to the UI's looks and the 20 hearts this space had, since all the functionality can be used elsewhere. Can I build a previous version of this space instead of the most recent code? That would solve it, because reverting the changes doesn't fix it, but yesterday, before I updated, it worked fine. If I could only build that one...

Hmmm... if this is a trap to kill the vulnerable version of Gradio, it looks like a server-crash type of message...
Anyway, I already did a similar port for HFD. It's part of the testing. Still testing.

Should we rejoice or mourn...
It seems to be able to generate.
https://huggingface.co/spaces/John6666/blitz_diffusion4

So, shall we start our journey of porting Gradio 3.x to 4.x in earnest?
It's not that unrealistic, since there aren't that many basic forms (the same work we'd have to do if it were TestGen-based).

I forgot that Gradio's gr.load doesn't have good exception handling. It crashes every time.
I didn't notice it in the test environment because I had reduced the number of models after so many restarts. Fixed it.
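
Roughly, the guard looks like this, as a minimal sketch (the fallback Interface below is just an illustrative placeholder, not what my space actually returns):

import gradio as gr

# gr.load() raises on any model that fails to fetch, so one bad entry can
# kill the whole Space at startup; wrapping each load lets the rest survive.
def load_model_safely(model_name: str):
    try:
        return gr.load(f"models/{model_name}")
    except Exception as e:
        print(f"Skipping {model_name}: {e}")
        # Inert stand-in so the UI slot still exists.
        return gr.Interface(lambda x: None, "text", "image")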

to-do list

  • Re-arrange UI to 4.x. (Mostly you.)
  • Add a negative prompt or some other feature. (Mostly me.)
  • Make a complaint to victor or someone.
  • Once the UI is refined, find the remaining 3.x spaces, backport them end to end, and walk away.

The deal breaker for me with Gradio 4.x is that it puts you into a queue behind everybody else using the space. That doesn't make sense: if I select, say, Yntec/LeyLines, nobody else is using that model, so I should be able to use it without waiting for the other users to get their images! Gradio 3.x would let me use it right away; here I get "There's a long queue of users, duplicate the space to skip", but why? I could also just make the space private so nobody gets in the queue before me and I could use all the models first. I will never support functionality like that, and that's the reason I never use my PrintingPress and HuggingfaceDiffusion spaces at all. I am hostile towards Gradio 4.x on principle.

The whole idea of open source is being able to make our own fork of Gradio and use that instead. They allow it: instead of declaring a Gradio version directly in the README, you use a Dockerfile: https://discuss.huggingface.co/t/how-to-install-a-specific-version-of-gradio-in-spaces/13552 (in that thread there's a link that explains why blitz_diffusion4's UI breaks: https://github.com/gradio-app/gradio/issues/3203 - custom CSS broke in Gradio 3.16.0). I explored that possibility when ToyWorld wasn't building because fetching the models was taking more than 30 minutes; some time later GitHub locked me out because I couldn't use two-factor authentication to log into my account, so it became a dead end.

The question is, is it worth the time? I assumed people would stop using the space and it'd go to sleep after 2 days, like Noosphere WebUI CPU. I can assume that happened, as I throw in the towel in every message instead of having to work to fix it πŸ˜‚πŸ˜‚πŸ€£

That's a mystery... I knew they seem to manage requests based on IP, but I wonder if they also separate them by UA (3.x or 4.x), and if that is why 3.x is relatively quiet.
Or maybe when I ported, I removed the space startup options (since the parameters are not compatible between 3.x and 4.x), so the queue is not set up properly.

https://www.gradio.app/guides/setting-up-a-demo-for-maximum-performance

In the meantime, I've reproduced the options that I think I can reproduce in 4.x.
I don't know if it works or not!

Well, that's it.
If it's not 4.x that's acting weird but our settings, then we're the idiots.
So we'll try.

Well, basically, it's all the fault of those who cut off syntax compatibility between 3.x and 4.x.
Structural compatibility can't be helped. Sometimes software works better if it is rebuilt from the ground up.
But they could have made the syntax and options backward compatible, though!

Oh yeah, I have confirmed they have stopped supporting Gradio 3.x, or at least they stopped supporting gr.Interface.load, see this space:

https://huggingface.co/spaces/EX4L/GPU-stresser-t2i

It works perfectly: you send a prompt and get 9 images generated simultaneously with no queues (the models may still time out, but that's a different issue). If you duplicate the space, it'll break; and if EX4L makes any change to the code, or the space goes to sleep and is restarted, it'll break too.

Make a complaint to victor or someone.

Maybe that's the solution? They made a change today that broke Gradio 3.x compatibility; if it could be reverted, the problems would solve themselves.

The worst part is that the request is accepted and the images are generated, just not shown. If you try it and then go to ToyWorld, select the same models, and send the same prompts, you'll get the images instantly because they were generated and cached.

Perhaps we need an externalmod.py version that forces it to time out after 500 seconds instead of 5; that should be enough to get the images.

They made a change today that broke Gradio 3.x compatibility

I just want to be sure.
There is a version of 3.x that works fine, right?
Also, if it works as long as you don't restart it (I checked in the space above too), that means it's failing on the first load, not actually erroring during generation.

If that's the case, this isn't a complaint or anything; it's a case for an error report.

There is a version of 3.x that works fine, right?

All spaces already running Gradio 3.x versions work fine because they are using what they had before the change; any Gradio 3.x space built from now on will break and show our errors, because it will be using what exists now, after the change.

I reckon a version of externalmod.py that runs on Gradio 3.x would be enough, as this one isn't backward compatible:

image.png

(this space is meant to preserve the classic UI, the Gradio 4 UI is different so it'd defeat the point)

externalmod.py that runs on Gradio 3.x would be enough

That's exactly what the 3.x built-in does; the point of my change is just that it doesn't go and cache Examples...
In other words, the already built-in external.py should be that thing.
I can change it enough to add negative prompts and such, but we're not talking about that; we're talking about whether or not it works in the first place, right?

I feel like it's actually being generated, not just cached; and most importantly, I'm no longer getting the error I just got...
https://huggingface.co/spaces/John6666/GPU-stresser-t2i-error

So I was duplicating the space again to prepare for the error report, and I'm wondering if anything has been fixed? Is it fixed?
https://huggingface.co/spaces/John6666/blitz_diffusion_error

So I was duplicating the space again to prepare for the error report, and I'm wondering if anything has been fixed? Is it fixed?

No, the image appears instantly if it was already generated and cached; if you try a new prompt, it'll time out after 5 seconds.

The big difference between https://huggingface.co/spaces/Yntec/blitz_diffusion/ on Gradio 3.x and https://huggingface.co/spaces/John6666/blitz_diffusion4 is that the Gradio 4 version has timeout=inference_timeout set to 300; if such a thing were implemented in the Gradio 3.x version, perhaps it'd solve it.
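
(Roughly this idea, sketched with huggingface_hub's InferenceClient; the helper function below is hypothetical and just illustrates passing a longer timeout:)

from huggingface_hub import InferenceClient

inference_timeout = 300  # seconds, as in the Gradio 4 version

# Hypothetical helper: route the request through a client with a long
# timeout instead of the 5-second path, so slow generations can finish.
def generate(model_name: str, prompt: str):
    client = InferenceClient(model=model_name, timeout=inference_timeout)
    return client.text_to_image(prompt)  # PIL.Image on success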

https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/tNVPlmhcnbJaPG7Q4L4dq.png
By the way, the code in this image is a branch of the part of the code that loads another Space from within a Space, which has already completely changed its structure, and there is probably nothing that can be done about it. It is not the developers' fault. (It would be faster to port the Space.)

Other parts of external.py were not so different between 4.x and 3.x (except for the fatal change in the call method and the fatal flaw of relentlessly wanting to generate a beautiful sunset when the model is loaded!).
The problem is this one.

timeout=inference_timeout set to 300; if such a thing were implemented in the Gradio 3.x version, perhaps it'd solve it.

That is in app.py... not externalmod.py...
No, I don't mind making it, but I don't think that's going to fix it this time.

Wait, so is it possible to delete all this and just query your space to get the images? The same way I don't generate a magic prompt myself but query it from spaces/Yntec/prompt-extend? This would solve it all, and I wouldn't even need to fetch the models, because you already did that over at blitz_diffusion4!

Hmm? Aren't you a little confused?
I don't know what's going on.

It's true that 3.x and 4.x don't work as differently as everyone thinks, except that you can't import other spaces and the syntax itself is different.

I'm confused too... but by the way, I did write a draft of the report.

I'm already sleepy, so I've posted.
https://huggingface.co/posts/John6666/191046582909567

Yeah, so, I have this space: https://huggingface.co/spaces/Yntec/prompt-extend - it gets some text and outputs enhanced text that improves the prompt.

blitz_diffusion offers the capability to do that, but instead of doing it itself, it sends the text to Yntec/prompt-extend, gets its output, and shows it to the user.

If, instead of generating images itself, blitz_diffusion sent the user's text to John6666/blitz_diffusion4 to make an image and showed it to the user, it wouldn't matter whether Gradio 3.x can inference images or not, because it would still be able to use Gradio 4 spaces to generate them and show them to the user, which is functionally equivalent to what we want to achieve!

Maybe it's not possible, but if it was that'd be a way to solve it.
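
(If it worked, it'd presumably look something like this with gradio_client; the api_name below is made up and would have to be read from blitz_diffusion4's "Use via API" page:)

from gradio_client import Client

# Forward the prompt to the working Gradio 4 space and return its image,
# instead of inferencing locally in the broken 3.x space.
client = Client("John6666/blitz_diffusion4")

def proxy_generate(prompt: str):
    # "/send_it" is a hypothetical endpoint name.
    return client.predict(prompt, api_name="/send_it")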

Thanks again for all your effort and dedication; it'd be cool if one wakes up tomorrow and it works without needing more coding. I'll continue merging and uploading models, because that's the fun part for me; the road to hosting 900 models has been bumpier than expected!

Oh, that's how it is. Let's build the functionality in! That would be easy. Maybe I can make it before I go to bed. 50-50.

Not working... well, that's ok, it could be my fault.
Didn't Transformers stop with the same error earlier...?🀒
Actually, the method of calling it is almost the same.

Oh, haha! So it turned out to be unrelated to Gradio, and new spaces using image inference are suffering from the same problem? That's a relief; I mean, once it's fixed, it's fixed, and it should work everywhere.

Not working...

Maybe it'd work if the whole system worked. Meanwhile, the workaround is to request an image from a model, live with the error, and go to the model card, which will have its serverless inference API opened long enough to request it there. It may show an unknown error there, and one has to wait for the CUDA memory to recover, but that's something that was happening before anyway.

https://huggingface.co/posts/John6666/191046582909567#66da39ce947b2e8abcf76615
Haha!
That's not even close.
Take a look.
It's not just Gradio, everything is down!
If you look at the forums, you'll see a hellhole!

Well, if it's this badly broken, I'm sure it'll be fixed soon. It's got to be fixed.

That said, I made a modified version of 4.x that looks like 3.x, and I put externalmod3.py on the 3.x blitz built-in. Of course, whatever anyone does won't work at this moment, though!
About the GUI modification of 4.x: I think it looks a lot more like 3.x now, except that there is no color for the title of each component, and I have no idea where the black background is specified (there's no line that looks like it).

Well, it makes me feel better that we couldn't solve it the other day, since it was impossible! I'm also glad that it actually works (if you keep spamming the button every 5 seconds and it doesn't hit the CUDA out of memory error, you eventually get the cached image), and, most of all, that spaces that are already running were not affected. I was this close to updating ToyWorld, which would have broken it, and it'd have been a huge mistake!

Happy for now, since I'm still able to do everything I could do before these problems started; I'm just worried about the users of my spaces, which can't get new models added.

Huh? It's working if you don't reboot?
I think maybe the whole space area, or all the inference that goes through the server, has been busted since this morning.

Is there some sort of separate condition for it occurring over here...
The server-admin-ish person said it sounds like a separate issue, so maybe it is.

BTW, am I just getting a black background because I always use the dark theme?
If so, should I not specify black?
How does 3.x handle dark and light themes? 4.x seems to depend on each distributed theme...
https://huggingface.co/spaces/John6666/blitz_diffusion_builtin 3.x
https://huggingface.co/spaces/John6666/blitz_diffusion4 4.x (3.x imitated style)

Huh? It's working if you don't reboot?

Yeah, you can try: https://huggingface.co/spaces/Yntec/ToyWorld and see how it works just like before (RetroRetro is running out of memory currently, though), but duplicating or updating it would break it again. New spaces are timing out after 5 seconds; that's why I still recommend trying to force a longer timeout somewhere so the errors disappear.

If so, should I not specify black?

My question is, where does Nymbo/Nymbo_Theme come from? I would just need to create a Yntec/Yntec_Theme that looks like the blitz_diffusion UI, and with that jump to Gradio 4 directly.

you can try: https://huggingface.co/spaces/Yntec/ToyWorld

It works just now!

My question is, where does Nymbo/Nymbo_Theme come from?

It's one of the Gradio themes on HF. I just used it because it is a dark theme that is easy to see; there's no deep meaning.
https://huggingface.co/spaces/Nymbo/Nymbo_Theme

The instructions for making the theme are on the Gradio page, but the language is not Python.
It's not that I'm good at Python, but I've been using Python so much lately that it takes me a while to get used to thinking in a different language.
https://www.gradio.app/guides/theming-guide

Ah, gotcha! So I can answer this question now:

BTW, am I just getting a black background because I always use the dark theme?

It's not black, it's #0b0f19, and it's coming from this code at spaces/Nymbo/Nymbo_Theme:

Nymbo code

So you'd need to duplicate his space; in your version, you change those lines to #76635a, and we should be able to get rid of those black backgrounds! In theory...
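
(Or, without touching his space at all, the same fills can presumably be overridden on a theme object; #76635a is the brown above, and the property names are the standard theme fills:)

import gradio as gr

# Override the background fills instead of editing Nymbo's code;
# the _dark variant keeps dark mode from falling back to near-black.
theme = gr.themes.Base().set(
    body_background_fill="#76635a",
    body_background_fill_dark="#76635a",
)

with gr.Blocks(theme=theme) as demo:
    gr.Textbox(label="Your Prompt")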

with gr.Blocks(fill_width=True, css=css) as myface:

I didn't put the Nymbo theme in there.
https://huggingface.co/spaces/John6666/blitz_diffusion4

The black thing is from the original 3.x. What is it supposed to look like originally?
https://huggingface.co/spaces/John6666/blitz_diffusion_builtin

That being said, it doesn't seem like it would be hard to modify the theme to make it.
From what I've seen, it looks like it could be made quite cool just by specifying CSS.

The question is which theme to base it on.
I want one that looks close and is stable.
Incidentally, the prevalent themes are the Miku and Applio themes. They have fewer bugs. I wonder if Applio looks too different.
https://huggingface.co/spaces/NoCrypt/miku
https://huggingface.co/spaces/Hev832/Applio-Theme

Oh, I was wrong again!

Anyway, can you take a screenshot of the black thing you see? I'm seeing this:

image.png

Maybe it's picking up system colors? The main differences are that blitz_diffusion4 doesn't show the "Advanced" text of blitz_diffusion_builtin:

image.png

And the Select Model and Your Prompt colors are different, that's it!

(by the way, I'm going to sleep, in case you don't hear from me in a while!)

Good night. I'll upload the screenshots here soon.

Windows 10 Chrome

blitz_chrome.png

Windows 10 Firefox

blitz_firefox.png

The body of the theme is just a JSON file. (It's like a configuration file format that came from JavaScript and is often used in Python; Diffusers' configuration files are also JSON.)
I'll do the final touches manually, referring to the Miku theme, which is probably full of know-how on how to avoid bugs, but it'll be easier to create a draft separately.

I remembered that there is a convenient space for making drafts. You can tweak it as you see fit when you wake up.
Don't expect me to have any sense of UI or design, I'm a Windows wallpaper default or blackish or just plain black kind of guy.
https://huggingface.co/spaces/prithivMLmods/THEME-BUILDER

That's so weird! I don't see any black around here!

The black thing is from the original 3.x. What is it supposed to look like originally?

Here's a screenshot of how it looks around here, and how it's supposed to look:

Blitz Diffusion UI

(The font is what the user has specified on their browser, I use comic fonts)

I remembered that there is a convenient space for making drafts. You can tweak it as you see fit when you wake up.

Oh, right, I explored that possibility back when I made the UI, but I never found a way to make the gradients that I needed.

Don't expect me to have any sense of UI or design

Haha! Me neither! Here's someone making a thread about how ugly the UI we're trying to save looks: https://huggingface.co/spaces/Yntec/ToyWorld/discussions/2 - and it's not just him, another guy comes and thinks it's ugly too, but says what matters is the functionality, not how it looks. So you may be talking to the only person in the world who likes the design; other users tolerate it.

That's so weird! I don't see any black around here!

Oh? The screenshot above is still closer to the 4.x I recreated. I wonder if there was some kind of change in the CSS specs. In the past, HTML specs changed frequently, but I thought it had been stable for a while now...

Haha! Me neither!

Seriously... why are so many people so bad at UI...😭
I don't know why so many people I know are so bad at UI... Well, it's rare to find someone who is good at it.

and it's not just him, another guy comes and thinks it's ugly too,

A theme would be useful for that. There's also custom CSS, which not many people use, and I basically agree with you about the public trend of separating design and function. I don't want to create the design, though!

gradients

Don't tell me it's not possible in the first place with the theme...?
I'll do some manual experimentation.

Seriously... why are so many people so bad at UI...😭

Well, I'm good at making UIs that I like, here's what I made for my Winamp:

Winamp UI

Perhaps my problem is my taste in colors, most people seem to like bland colors that don't stand out, usually close to black or white, never the deep violets or teals.

Don't tell me it's not possible in the first place with the theme...?

I also couldn't replicate text shadows with them. The idea would be to use themes to change what you can't change manually, so they'd be superfluous if you found a way to change everything manually.

I'm comfortable with black backgrounds due to my early days of DOS...
I used to love the One And Only Amp skin in Winamp. Blackish, of course.

I think I've got a gradient.
There's no reason it couldn't be done, when the Miku theme manages to put Miku herself in there to begin with.
https://huggingface.co/spaces/John6666/Yntec_Theme

Oh nice, I never thought about doing it that way, specifying a gradient as a color.

Gradient code
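
(For anyone following along, the trick is roughly this; the two gradients below are illustrative colors, not the theme's real ones:)

import gradio as gr

# Theme fills are plain CSS values, so a linear-gradient() string is
# accepted anywhere a single color would be.
theme = gr.themes.Base().set(
    button_primary_background_fill="linear-gradient(90deg, #40e0d0, #9370db)",
    block_background_fill="linear-gradient(180deg, #0b3d3d, #145050)",
)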

I'm comfortable with black backgrounds due to my early days of DOS...

I spent years of my life learning programming, Clipper and Turbo C; useless knowledge now! I was never a fan of the black background of DOS and found the light backgrounds of Windows 3.11 the way to go. Ironically, the Windows theme that I use has black backgrounds and white text for my file explorer, though it turns out my whites are actually a light shade of pink when you compare them to the whites on a phone screen.

Well, that kind of knowledge is surprisingly useful, even if you don't realize it yourself.
Programming knowledge is useful for logical thinking outside of programming; whether or not you use it for programming is actually a trivial matter.
I've learned how broadly knowledge can be applied, enough to regret almost falling asleep during class. I think this is even more true when I look at LLMs these days.

I just thought that many people who are not good at UI are those who have CLI experience.
Windows 3.x was white. 95 was kind of blue.

That said, look at 0.0.3. It's based on the Miku theme, but it has a weird scrollbar bug in Gradio 4.42.0 or later.
The only changes I made were the gradients and VSCode's standard JSON formatting.

Maybe the JSON reformatting is the culprit, but compressing it again doesn't fix it...?
Is information lost in the process of reformatting, or is there some required manner of compressing...
No, it's JSON, so if it doesn't read the same whether the whitespace is there or not, it's a bug! Again!
I already know it's faster to find a way to deal with it on the user side, so I'll just look for the right workaround.

0.0.4 is the Miku theme 1.2.2 itself (matching byte for byte), but it's still buggy...
0.0.5 is a 0.0.3-like version in its compressed state.
So it must be a bug in the Miku theme too... what's going on...

A Gradio 4.42.0 space using the Miku theme is not buggy, but Nymbo's is.
The author put the Miku theme in a space that uses 3.x; is that why...? Is that possible...?
I mean, is there a solution... or can we call it a solution to go to 3.x and load from 4.x...?

I never thought about doing it that way, specifying a gradient as a color.

Me neither. So I took it from Miku.
I tried porting Miku's app.py and it is still buggy.
Currently in the process of downgrading to isolate it.

Windows 3.x was white. 95 was kind of blue.

I think Windows 7 with its transparent glossy windows beat everything else; it's not white or blue or any other color, it's the color of whatever is beneath it, down to the color of the wallpaper if it's the last window there. It sucks when I can't use things like Photoshop Generative Fill because they don't support my OS anymore, though.

it's a bug! Again!

I'm glad you don't mind dealing with bugs; whenever I find one I'd rather abandon the project or revert to a previous version that didn't have it. Gradio 3.x was state of the art and has yet to be matched; the person who decided to rewrite it should be fired, or they should at least make sure future versions remain backward compatible, so people of the future don't have to deal with our nightmares.

I mean, is there a solution... or can we call it a solution to go to 3.x and load from 4.x...?

That would be great if we can get it to work! For some reason, even after spamming every 5 seconds, blitz_diffusion_builtin never loads a picture. If we could find the file they changed back on September 3 and use that one instead, it would work. The reason spaces that aren't rebooted still work is that they have that file cached; rebuilding them fetches the new file that causes errors after 5 seconds, since that's where they introduced the bug. Or that's my theory. Can we see the files they use for serverless inference on GitHub or something? Where did you get the original externalmod.py from?

I wonder why they stopped making transparent windows.
There were so many fans of that look.

Where did you get the original externalmod.py from?

Like this.
https://github.com/gradio-app/gradio/blob/main/gradio/external.py
https://github.com/gradio-app/gradio/blob/v3.15.0/gradio/external.py

can we see the files they use for serverless inference on GitHub or something?

Partially possible. (Like the parameter-related processing part. They had already supported seed.)
But as I said, the actual server configuration is not supposed to be publicly available.
https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub/inference

https://huggingface.co/spaces/John6666/Yntec_Theme
I don't know, but it's working. Yosh!
genbaneko_se.jpg.webp

If we just keep modifying and building based on this version, it should work without any problems for now, right? I mean, if it doesn't work now, we're all screwed.
If that happens, they should fix it.

https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub/inference

Okay, so what is this?

https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_generated/_async_client.py

It says failure? Can't we run a custom _async_client.py, the version from before this change, and maybe it'll work again? The commit dates seem close.

I don't know, but it's working. Yosh!

Nice! There is hope!

https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/inference/_client.py
_async_client.py is an auto-generated file, and this one seems to be the original. I wonder if I can find the cause of the bug by following this file.
In any case, the biggest problem is that a fix is useless unless it's running on the server.
Well, if I find a bug, can someone call the HF staff?
I feel like I've been debugging all week...

I think GitHub allows you to send a pull request that would fix the bug directly; if their servers use that code, it should fix it. Also, I think I've been too harsh on Gradio 4.x. I just remembered I said this:

it works on Gradio 3.23.0

So we're not fighting the error that doesn't allow pics to generate; we're fighting the CSS changes from 3.16.0 that broke the UI. Hopefully with your theme we can say bye to Gradio 3.15 for good!

I haven't been programming properly for so long that I don't even have a GitHub or Discord account yet. It's like I'm still in rehab.
To put it in a super-cool way, I'm like the Heroic Spirits of the Fate series. In other words, I haven't adapted to the modern world.

That said, I've been going back through the commits, and there are very few that seem related to t2i or to the operation in general. Not none, but nothing seemed limited to 3.x (at least not explicitly).
If I had to guess, it's a slight change in the handling of JSON requests, but it's probably irrelevant, since the behavior was similar even from my own InferenceClient version.

your theme

No, I'm not joking: seriously, please handle the big picture. Like the general coloring, fonts, detailing, etc.
I'll replace the gradients and the 4.x/3.x stuff that is a programming pain later.
I think the fastest way is to chimera together the big picture and the details.

Or just tell me in natural language or CSS what you want to change overall.

I haven't adapted to the modern world.

I don't even have a phone. I got one and found the screen was too small, and I bought a 10-inch tablet but I just couldn't find a way to fit it into my life, so the only way to make contact with me is to knock on my door, or... make a comment on one of my models/spaces, I guess.

Or just tell me in natural language or CSS what you want to change overall.

Ah! We're four changes away from success! For https://huggingface.co/spaces/John6666/blitz_diffusion4 :

image.png

Here the 10 would be white.

image.png

This dropdown's background should be teal colored instead of white and the "Select Model" text would be orange.

image.png

Instead of white, the box would be a teal gradient and the "Your prompt text" in orange.

image.png

Here this button would say "Advanced Button" so people know they can access advanced options by clicking it.

And that would be it! It would look like blitz_diffusion_builtin.

I don't even have a phone,

Our country is full of smartphones, especially iPhones, because the major carriers kept giving away iPhones for almost free. Over 50% of all smartphones are iPhones!
(They recently stopped because of the weak yen, high global prices, and skyrocketing iPhone prices, plus they were pissed off at the government. Then the number of Android phones started to increase)
Thus, we have become a country where we can do almost anything with a smartphone app. (Though we are one of the least developed countries in the world in terms of digitization.)

So I haven't used a PC much recently, partly because I've been busy. I even forget how to type half the time.
I use my 10-inch tablet for reading. I'm not sure I'd use a smartphone for reading.

Ah! We're four changes away from success!

Perfect. This is the kind of information I wanted. I don't know what 3.x is supposed to look like.
Maybe I can make it by the end of tomorrow.

I forgot something important. Let's say this is for the dark theme. What kind of design should I use for the light theme?
All dark theme?

I'm not sure I'd use a smartphone for reading.

I just quit reading completely; with the advancements in text-to-speech, reading seems like a waste of time, because you can just send the text to a TTS and have a guy read you the book while you do other things. On the other hand, I spend a lot of time reading 4chan's archives, especially those pertaining to the releases of Flux and SDXL, so I guess that counts as reading.

Perfect. This is the kind of information I wanted. I don't know what 3.x is supposed to look like.

John6666/blitz_diffusion_builtin has it spot on, so you can copy it; if it had been able to generate the images, we'd have been done with this a day ago.

Maybe I can make it by the end of tomorrow.

That sounds awesome, it looks like blitz_diffusion will be the most advanced of my spaces with all the new features!

I forgot something important. Let's say this is for the dark theme. What kind of design should I use for the light theme?

Ah, the one from https://huggingface.co/spaces/Yntec/ToyWorldXL would be it, it switches teal for blue-magenta backgrounds and the golden buttons for orange ones.

send the text to a TTS

Good use. But I don't know why I didn't like Audible when I tried it... But nowadays TTS can do more and more, and we can customize it ourselves if we want, so it's a good thing.

4chan's archives

As someone from an anonymous message board, I can tell you that search engines only catch a fraction of the content. Not all of it is crawled properly.
By the way, I come from the kind of forum where Shinzo Abe is played with as a soccer ball. We treat petit mild Hitler as a freebie meme. It's like wearing a Trump mask on Halloween. Dead people, though.

the one from https://huggingface.co/spaces/Yntec/ToyWorldXL would be it

I think we can manage that.
Well, I'm off to bed for tomorrow.

But I don't know why I didn't like Audible when I tried it...

Yeah, the reader's voice is very important. Luckily, around here we listen to books in Spanish and use the Loquendo software. Jorge's voice is really good, and by accident I found out that if you edit the text and add things like '', . at the end of a sentence, it changes the inflection, so it can sound cheery, or if you put a ". at the end, it sounds sad, etc. So I just swapped punctuation like ! and ; in the text for those, for a very interesting reading delivery. When Loquendo reads a joke like that and it makes you laugh, you know you've solved it! Many YouTube videos, like the Grand Theft Auto parodies, used it because it managed to be hilarious! It's 11-year-old tech by now, but we never moved on to AI TTS because this was good enough.

that search engines only catch a fraction of the content

Yeah, I rely on desuarchive; this is where I'm at right now: https://desuarchive.org/g/thread/94786479/ - and I was THERE at that point, reading them in real time, but they left me behind. The same happened with Flux's release: I was reading and participating in real time, but then I fell 12 days behind in the archives. I guess the same will happen again and I'll be a year behind in the future; it's just way too much content, and I'm only following one general!

We treat petit mild Hitler as a freebie meme

Ah, that reminded me of my Hitler drawing from https://drawception.com/game/15cAKdWPqX/nyan-hitler-the-movie/

Nyan Hitler

I still remember when people didn't get as offended by stuff. I guess I never found a community to belong to, it's just huggingface and anonymous boards over there, Reddit is fun until they shadowban you.

Well, I'm off to bed for tomorrow.

Thanks and I wish you a good rest for the night! Though maybe you already woke up when reading this.

Yeah, the reader's voice is very important.

There are many kinds of software: Yukkuri, a well-known speech-reading program that has been around for 10 years; Vocaloids such as Hatsune Miku for music; and Zundamon, sort of the newer Yukkuri? However, in Japanese, kanji are not phonetic characters, so they can be mispronounced!
And I don't think it evolved specifically for reading.
We have a lot of good voices; we are a country with a lot of voice actors. But unlike many countries, we are unique in that the population of speakers of our language is almost exactly the population of the country, so if we want software related to our language, we have to make it domestically...
And we can't just repurpose English-based technology...
But software development by large companies in our country is either a parasitic business serving the government and large corporations, or only makes money by gambling on social games. (That's the general socio-economic structure in Japan.)
The exceptions are the small companies, the indies, and the game companies that were at their peak 20 years ago.

Anyway, maybe I am just unaware of it, but I am sad to say that Japanese language-related software, which requires large capital, is generally not very good. It's all well and good until the inventor gets it running, but then it either ends, or continues with little or no further investment in the inventor.

it's just huggingface and anonymous boards over there,

ACTUALLY SAME.
I happened to be too busy when Twitter was starting to take off and missed out on participating, and I've never been on the SNS again.

Well, let me preface by saying that I finally got around to starting Gradio's theme development, but when I went to read the manual, I was astonished. Take a look at this. It's buggy!πŸ™€
https://www.gradio.app/guides/theming-guide
themebug.png

The overall theme, except for the text color specification, is almost complete.
Let me know if you have color suggestions for the secondary and cancel buttons. Also, text color.
https://huggingface.co/spaces/John6666/Yntec_Theme

It wasn't quite finished, but I think we got as far as the beta version.

https://huggingface.co/spaces/John6666/Yntec_Theme RC1?
https://huggingface.co/spaces/John6666/blitz_diffusion4 Theme applied

Alright, just woke up, good morning!

This is what I'm seeing:

image.png

Looks like the Light theme colors are leaking into the Dark theme ones! I would also suggest adding a button to the UI so the user can toggle between dark and light modes, and that would be it, we've never been this close to doing it!

I would also suggest adding a button to the UI so the user can toggle between dark and light modes,

It's nice, but it's not a feature of the theme, and the only way to add it is to modify app.py.

Looks like the Light theme colors are leaking into the Dark theme ones!

I know it well because I wrote it, but if you look throughout the code, you won't find any place that specifies purple except in the light theme.
So that is the only possibility. That's the only way...
Even though there is no place where the light theme and the dark theme should intersect. (If there were, it would crash with an error!)

Well, the CSS in the blitz is still there; it's just getting mixed up with the light mode, and if I turn off the CSS in my code it will be either light or dark.
We'll have to leave the Gradio bug alone anyway, so let's work on the design in the theme repository for now.
I made the secondary button and the cancel button in a crude way, but the design still needs tweaking (text color, hover, etc.).

But I have a simple question: why is the display so different between Firefox and Chrome on Windows?
The OS version may be older, but the browser versions are probably the same.

P.S.

Good morning. It's 10:00 p.m. here.

Okay, this is close enough; I should know how to make it work from here! My idea is to have two versions of this theme: one where it leaks like this and the space uses the light theme, and another that leaks the opposite way and the space uses the dark theme. That should do it.

Thanks, I'm implementing all the changes of blitz_diffusion4 so finally we can have this space running again!

two versions of this theme

Well, that's a good way. What's not there won't work, but it won't malfunction either!
All those dark themes are actually there, and this idea is very easy to implement. All I have to do is copy and paste a few lines and I'm done.

finally we can have this space running again!

There's one more piece of good news for you.
One of the changes we saw yesterday in the Inference API was that wait_for_model is now on by default in 4.x as well.
This means that if a model takes a long time to load, it will no longer abort on its own. This means that there is no longer any point in trying hard to implement 3.x-style requests with JSON. (since everything is the same except for this option)
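
(The now-redundant 3.x-style request would have looked roughly like this; the model and token below are placeholders:)

import requests

API_URL = "https://api-inference.huggingface.co/models/Yntec/LeyLines"

# The legacy api-inference endpoint accepted an options dict;
# wait_for_model=True made it block while the model loaded instead of
# erroring out. If that's now the default, sending it adds nothing.
resp = requests.post(
    API_URL,
    headers={"Authorization": "Bearer hf_..."},  # placeholder token
    json={"inputs": "a beautiful sunset",
          "options": {"wait_for_model": True}},
)
image_bytes = resp.content  # raw image bytes on success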

By the way, I don't think this guy is the culprit behind 3.x being busted. It wasn't that big of a change; it just changed the default values. There's no "reject it because the option is missing" logic in there either.

This means that there is no longer any point in trying hard to implement 3.x-style requests with JSON.

I'm not advanced enough to understand that phrase, but it sounds like good magic!

By the way, I don't think this guy is the culprit behind 3.x being busted

Gradio 3.15 and versions like 3.23 were busted, but Gradio 3.46 still works and is still kicking ass! The old UI wouldn't work on it, so that's moot, but it's such a relief that ToyWorld can keep using the old queue system that I love.

https://huggingface.co/spaces/John6666/YntecDark
https://huggingface.co/spaces/John6666/blitz_diffusion4 Purple is vanished!

There were more lines than I expected...
Let's just finish today's work for now. If possible, please give me a detailed color specification. #ffffff style.

Gradio 3.46

That's a version I don't see very often. It probably carries the vulnerability too.
So that's not why 3.15 was killed.
I mean, that was not a ban; no system administrator would ban someone over a message like that.
No, actually, there are some, but only on anonymous forums.

We did it! It looks so cool, that was quite the journey!

That's a version I don't see very often

It was the minimum version required for spaces/derwahnsinn/TestGen to work. It was the first space around that allowed you to use many image models simultaneously by ticking them, unlike spaces/RdnUser77/SpacIO_v1, which had some confusing dropdown way of selecting them. Back then we didn't have this technology, and https://huggingface.co/allknowingroger was having to create almost 400 spaces with 9 models each so people could use them! Also, if people wanted to use one model, they were forced to get the other 8 models of its space along with it.

That code wouldn't work on Gradio 3.15, and that's the reason I had to update the PrintingPress and Huggingface (formerly Diffusion60XX) spaces to 3.46 and give up the UI, though now that the UI is a theme, I may be able to make them use it, hmmm.

Vulnerability is what this guy has, probably.

Oh yeah, a Gradio 3.46 version with the vulnerabilities fixed remains in the cards if we don't find a way to make queues in Gradio 4.x work like they do in Gradio 3.x, but all my spaces are running, so that can wait for a while.

In case you're going to sleep at night I'll wish you to have a good rest!

Good morning?
Today, for now, I've cut as much of Blitz's code as I could. If this doesn't change the look too much, the theme will be a success...
I encountered a mysterious bug where the buttons were treated as secondary for some reason. I reassigned them to primary and solved the problem.
https://huggingface.co/spaces/John6666/blitz_diffusion4

That code wouldn't work on Gradio 3.15

Oh, really? That's a very important piece of information, isn't it?
If its inference part is the source of its inference efficiency, I might learn something important by reading the code.
I was under the impression that TestGen would work with the 3.15 code.

This is awful.
The render option is an improvement, but the alias option is buggy in 3.15 only.

# 3.15
def from_model(model_name: str, api_key: str | None, alias: str, **kwargs):

        "text-to-image": {
            # example model: osanseviero/BigGAN-deep-128
            "inputs": components.Textbox(label="Input"),
            "outputs": components.Image(label="Output"),
            "preprocess": lambda x: {"inputs": x},
            "postprocess": encode_to_base64,
        },

    if alias is None:
        query_huggingface_api.__name__ = model_name # alias is str, so it can never be None and is unreachable here
    else:
        query_huggingface_api.__name__ = alias
# 3.46
def from_model(model_name: str, hf_token: str | None, alias: str | None, **kwargs): # Fixed

        "text-to-image": {
            # example model: osanseviero/BigGAN-deep-128
            "inputs": components.Textbox(label="Input", render=False),
            "outputs": components.Image(label="Output", render=False),
            "preprocess": lambda x: {"inputs": x},
            "postprocess": encode_to_base64,
        },

    if alias is None:
        query_huggingface_api.__name__ = model_name # Fixed.
    else:
        query_huggingface_api.__name__ = alias

In theory, if we make sure to pass the model name to the alias option in 3.15, the behavior will be almost the same as in 3.46, except for render.
Also, the WebSocket library is not used in 3.46. I have not investigated the relevance of this to the bug.
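
(In other words, the 3.15-side workaround would be something like this, as a sketch; the model name is just an example:)

import gradio as gr

# The buggy branch in 3.15 only runs when alias is None, so always passing
# the model name as the alias sidesteps it and roughly matches 3.46's
# fixed behavior (minus render=False).
io = gr.Interface.load("models/Yntec/LeyLines", alias="Yntec/LeyLines")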

However, as far as I can see, except for the UI, it would be better to move to 3.46.
The older version is simply buggy. The new version has another bug, though.

We'll do the light theme later.
By the way, if I were to make an externalmod for 3.x, which version do you think I should base it on?
I think the 3.46 one can be adapted to 3.15 as well; 4.x seemed to be almost the same code as 3.46 in the beginning. Not sure when they changed to the current style.
Tell me the last version you were comfortable generating.

Good morning?

Good morning! Is it good night over there? Do I wake up right as you're going to go to sleep?

https://huggingface.co/spaces/John6666/blitz_diffusion4

It seems the style broke; you can compare it to this backup: https://huggingface.co/spaces/phenixrhyder/NSFW-ToyWorld - in the backup, the button lights up when you hover over it; in your new version it gets dimmer.

I was under the impression that TestGen would work with the 3.15 code.

The main difference is that TestGen uses gr.load to load models:

gr load

Old Gradio 3.15 versions use gr_Interface_load, which doesn't work for them, so a Gradio 3.15 version that could use gr.load would fix them, and they could then apply CSS changes directly without requiring a theme.

However, as far as I can see, except for the UI, it would be better to move to 3.46.

The reason I wanted them to fix the problem with 3.15 was that people who reboot or duplicate spaces using it will find they no longer work, and if they update their spaces to the latest version of Gradio 4, they won't work either (because they're using gr_Interface_load). They would need to upgrade to Gradio 3.46 or change to gr.load in their code, but they don't know that, and there's nothing we can do about it. But I've learned to live with the idea that we can only save our own spaces.

which version do you think I should base it on?

I'm fine letting 3.15 die; I have no use for it, because with the themes you've created we can do away with it. And I'd be fine letting Gradio 3.46 die and move everything to Gradio 4.x if you could find a way to make the queues work as in 3.46; whenever I use spaces/Yntec/PrintingPress and find I'm 7th in the queue for no reason, it doesn't feel right, and I want to move it back to Gradio 3.46.

Tell me the last version you were comfortable generating.

3.46, or maybe even a later version that allowed you to gen many images simultaneously; I never tested later versions. I'll check what's the highest Gradio version that works today; I don't know when that broke. It'd be funny if there's an early Gradio 4.x version without hidden queues and I could move to that already.

Thanks. I'll check them tomorrow.😴
Since both the problems and the ideal results are clear, I can manage them somehow.
I may be able to find solutions; thanks to Gradio, I've had to do just that for months.πŸ₯Ά

Thanks, see you tomorrow!

Gradio 3.48.0 seems to be the highest Gradio version that works, so maybe you can base it on that one. Here's a space I created with it: https://huggingface.co/spaces/Yntec/MiniToyWorld

By the way, did you try this?

No queues

Maybe this would allow Gradio 4.x to work like 3.46?
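
(I mean something along these lines; generate() is a stand-in for the real inference call:)

import gradio as gr

def generate(prompt):
    return None  # stand-in for the real inference call

with gr.Blocks() as demo:
    prompt = gr.Textbox(label="Your Prompt")
    out = gr.Image(label="Output")
    # queue=False serves this event over plain HTTP instead of the shared
    # queue, so one user's generation doesn't wait behind everyone else's.
    gr.Button("Generate").click(generate, prompt, out, queue=False)

demo.launch()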

Do I wake up right as you're going to go to sleep?

Come to think of it, I've never calculated the exact time difference. That seems to be about right.
I usually go to bed early and get up early, rather early for a modern person.

Time in Mexico City vs Tokyo
19:05 Sunday, September 8, 2024 = 10:05 Monday, September 9, 2024
Tokyo, Japan is 15 hours ahead of Mexico City, Mexico
https://24timezones.com/difference/mexico_city/tokyo

I have released a 1.01 version that is much like the original. The priority was to make it look like the original.
https://huggingface.co/spaces/John6666/blitz_diffusion4
https://huggingface.co/spaces/John6666/YntecDark

By the way, did you try this?

No, I haven't tried it.
I use queue=False a lot even in 4.x, but I had a habit of passing heavy processing to the queue when using Zero GPU spaces.
So that's how it is. If I don't use the queue from the beginning, I don't have to worry about the queue!
I'll try to reflect that in HFD.

Gradio 3.48.0 seems to be the highest Gradio version that works,

A Gradio researcher would compare 3.49.0 and 3.48.0 here, but I'm not one, and it's a pain in the ass, so I won't.

And I'd be fine letting Gradio 3.46 die and move everything to Gradio 4.x if you could find a way to make the queues work as in 3.46

It seems likely that compatibility will continue to be lost and never recovered, so it would be better to move while we can.

If this were just about our T2I spaces (since we are establishing our porting methods), the story would be relatively easy, but there are compatibility issues in audio-related areas as well. Those spaces rely on libraries from 2020 to early 2023, and they use Gradio 3.x and so on.
For vulnerability avoidance and space maintenance across HF, it would be more realistic for HF itself to take the lead and maintain the libraries and Gradio.
Except for Gradio, though, all HF would need to do is mostly just go around updating each library's requirements.txt on GitHub.

When I was modding HFD, I found that the queue is required for the stop-button processing, and if I don't turn it on, it crashes with an error...
The image generation part is also chained to the stop button, so if I turn the queue off, it crashes...
I've set concurrency_limit to None.
https://huggingface.co/spaces/John6666/Diffusion80XX4sg

If we give up the stop button, maybe Queue can be turned off.
Maybe the stop button is the culprit that forced the Queue...
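
(Roughly what I ran into, sketched; generate() is a stand-in for the real call:)

import gradio as gr

def generate(prompt):
    return None  # stand-in for the real inference call

with gr.Blocks() as demo:
    prompt = gr.Textbox()
    out = gr.Image()
    go, stop = gr.Button("Generate"), gr.Button("Stop")
    # cancels= only works on queued events, so Generate must stay queued
    # for Stop to work; concurrency_limit=None at least lifts the cap.
    ev = go.click(generate, prompt, out, concurrency_limit=None)
    stop.click(None, None, None, cancels=[ev])

demo.launch()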

I usually go to bed early and get up early, rather early for a modern person.

Since I don't have to work, I'm usually like this:

image.png

But with 26-hour days, so I'll eventually catch up to you and we'll be waking up at the same time.

The priority was to make it look like the original.

Alright, thanks. I have copied it to space/Yntec/YntecDarkTheme and upgraded, and I've also resurrected a copy of the original UI: https://huggingface.co/spaces/Yntec/ToyWorldGradio3 It seems the only differences remaining are the gradient of the "Your prompt" box and the way the Generate button looks; in the original it lights up when you hover over it and reverses its gradient when you click it, from this code:

Button code
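
(Roughly, the effect comes from CSS along these lines passed to gr.Blocks; the element id and hex colors here are illustrative, not the space's exact values:)

import gradio as gr

# Hover lights the button up; clicking reverses the gradient direction.
css = """
#gen-button { background: linear-gradient(to bottom, #ffc99f, #9b6a4f); }
#gen-button:hover { filter: brightness(1.2); }
#gen-button:active { background: linear-gradient(to top, #ffc99f, #9b6a4f); }
"""

with gr.Blocks(css=css) as myface:
    gr.Button("Generate Image", elem_id="gen-button")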

A Gradio researcher would compare 3.49.0

Oh, does that exist? I was going based on this list: https://www.gradio.app/changelog which doesn't mention a version higher than 3.48.0

If we give up the stop button, maybe Queue can be turned off.

Sure! I never used that button anyway. I guess it's for people who are tired of waiting and just want to cancel and try another model. I could never live with the idea that an image was generated but never shown, so I always wait until it's there (which may take up to 600 seconds) or it times out.

But with 26-hour days,

Wow. No wonder the time difference is changing mysteriously.

By the way, I belong to a small private business, and even though I am bound for long hours, I have free time as long as others are not in trouble. People often wonder why Japan's labor productivity is so low and the economy is not growing, but I guess it is because there is no custom of paying wages according to work, only according to hours, no courage or mechanism to demand higher wages, and, for all but a very few in every company, bound hours that are a fixed 9:00-17:00 plus a little over.
Not as many people as me have free time during the work day. Maybe that's why smartphone games are so popular.
While things are improving in some of the larger mid-size and large companies, for most people in small companies, the immediate priority is to maintain their jobs or their bottom line, and the government has been doing that as well. The large companies depend on the medium-sized companies under their control for their actual operations, and the medium-sized companies depend on smaller companies for their operations. It is a multiple subcontractor structure. This in itself may not be a bad thing, but this emphasis on jobs over tasks is, in essence, socialism. As long as there is no mechanism to automatically eliminate waste, productivity will not increase and the economy will not grow.

What I'm trying to say is, don't assume that just because someone has a job, their day is actually full of work!
No, I'm grateful to have a job. Well, like you, my lifestyle would be better suited to the AI hobby. I'm bored whether I want to be or not!

Oh, does that exist?

Yes. At least there is a 3.50.0 in the GitHub version.

the gradient of the "Your prompt" box and the way the Generate button looks,

That's workably simple. I simply mistook it for a teal color.

in the original it lights up when you hover over it and reverses its gradient while you're clicking it,

I thought I fixed it in 1.01... why isn't it working?
By the way, brighten() doesn't work with the Gradio theme, so I've manually brightened it. Thank you Excel.
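
For the record, the manual version doesn't strictly need Excel; a tiny sketch of brightening a #RRGGBB color by scaling its RGB channels:

def brighten_hex(color: str, factor: float = 1.15) -> str:
    """Scale each RGB channel of a #RRGGBB color, clamping at 255."""
    r, g, b = (int(color[i:i + 2], 16) for i in (1, 3, 5))
    r, g, b = (min(round(c * factor), 255) for c in (r, g, b))
    return f"#{r:02x}{g:02x}{b:02x}"

print(brighten_hex("#3a6ea5"))  # a lighter shade of the same blue for a hover state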

Sure! I never used that button anyway,

I'll try to make a version like that, and if the behavior doesn't change, we can just put it back. If it doesn't do anything good, there is no need to remove the stop button.
Please verify the behavior.

As long as there is no mechanism to automatically eliminate waste, productivity will not increase and the economy will not grow.

Around here our president went crazy with the pensions. First he enabled pensions for everybody: are you that old? Then you have access to that money; you didn't need to save for it, and people who had saved for pensions had the new ones added to what they had. Then he decreased the age required to access them, so younger people could have access. Then he decreased the age again for women, so women could stop working earlier. Recently he has been giving similar pensions to children...

It turns out our previous governments were so corrupt that just stopping the corruption freed up a bunch of money, simply because it wasn't being stolen anymore, and it could be given to the people. It's difficult to complain when there's an abundance of it and I can hobby 24/7 (the hobby I spend the most time on, by far, is sleeping). The problems that have appeared are inflation, because all this money flowing around means money doesn't buy as much, and that the products you wanted to buy sell out and you have to wait for a restock. I used to buy a lot of different kinds of meat, store them in the freezer, and eat them bit by bit; this was possible because most people didn't have the money to buy the pricey ones, so they were available. Now they just buy it all, so when you arrive there's only the meat people didn't buy. I guess that means more people have access to meat of better quality, but the change was noticeable for me.

Yes. At least there is a 3.50.0 in the GitHub version.

Oh, they forgot to announce it, testing it now...

Please verify the behavior.

Sure, it'll be easy; without queues the 6 images would all appear after 10 seconds instead of having to wait a minute (now I sound like Scrooge McDuck, doing all this to save 50 seconds!)

No wonder the time difference is changing mysteriously.

Yeah, yesterday at this time I was sleepy, so it was game over for me working on anything; today I have 2 more hours of battery, and tomorrow it should be 4.

Anyway, I stand corrected, Gradio 3.50.0 seems to work perfectly as well: https://huggingface.co/spaces/Yntec/MiniToyWorld I guess I'm officially a Gradio researcher now!

Around here our president went crazy with the pensions,

Eh, just from what you write here, it looks like the life of the common people in Mexico is improving a lot, except for inflation. (Inflation is painful, but if it is due to increased income for the common people, it will lead to economic growth in the medium to long term and make things easier in the future.)
Isn't that the de facto basic income route?
It is difficult to say this, but the stereotype of Mexico in Japan is tacos, Mexican hats, the mafia, and Breaking Bad's Tuco, so this is surprising.
I don't think I've ever seen such improvements in the Japanese news, not even once in the international section...

Sure, it'll be easy

I commented out four lines so it's complete!
https://huggingface.co/spaces/John6666/hfd_test_nostopbutton

I don't think I've ever seen such improvements in the Japanese news, not even once in the international section...

This is what you'd find on the news: https://www.nbcnews.com/news/latino/mexico-poorest-less-funds-lopez-obrador-universal-pension-rcna153956

I don't think I've ever seen a Mexican president being praised for any decision; it has become traditional to complain about the government and to focus on the bad. I don't really know much about economics, so I can't comment on the poorest apparently getting a smaller piece of the cake.

I commented out four lines so it's complete!
https://huggingface.co/spaces/John6666/hfd_test_nostopbutton

Oh boy! It works! The hidden queues are gone! Aren't we done? With that code I can finally let Gradio 3.x rest in peace! The 6 images appear simultaneously, and it's so fast that the gallery below may only show 4 of them! Ha! πŸ˜‚

Oh boy! It works! The hidden queues are gone!

Great!😸

Let's ditch the stop button. And then we can adjust the UI and we're done. I have some business to attend to today, so it will come later, but I think we've found the way!

traditional to complain about the government and to focus on the bad,

Well, countries where journalism no longer does that are slowly becoming North Korea. Perhaps we experienced that before World War II and for a little over a decade after it. After COVID-19, when Russia and Ukraine went to war and at the same time the yen weakened and prices skyrocketed in the aftermath of a decade of massive monetary easing, the Japanese media became poorer (the previous government's clampdown on the media was perhaps not unrelated), and the situation deteriorated further.
Journalism and democracy are a kind of safeguard that somehow automatically avoids the worst errors by everyone dragging their feet together. A lot of bad language is part of the specification.

From what I've read in the news, it seems the real problem is not how the cake is divided, but that the government is eating too much cake without enough savings. Maybe sharing the cake with the poor would actually have been cheap, but they simply skipped it. The political influence of the poor is weak, so they are often ignored; even in our country the poor are starving. (We throw food away, yet 30% of our kids are calorie deprived.)

In summary, this approach should make us happy now, but it might be a worrisome approach in the future. If Mexico's national budget has the ability to make money before it runs out, happiness will continue long after this.
But what is certain is that some people will be happy this way. There are worst-case options for everyone, and this is not one of them. Even the news article above doesn't fully deny that.
The rest is a matter of individual political choice, not mine to say.

Well done! For the first time in history https://huggingface.co/spaces/Yntec/ToyWorld is on Gradio 4 and Gradio 3.x has officially become obsolete! πŸŽ‰πŸ₯³πŸŽ†πŸŽŠπŸŽˆπŸͺ©

I adopted the strategy of seeing what was the least amount of code I could change and still make it work. It was very scary to never see "Processing" appear 🀯, but the images eventually do, phew!

it was very scary to never see "Processing" appear

That's a bug; queued or not, "Processing" is supposed to show on the item specified as the output destination.
That's why we sometimes specify an output destination just for the sake of appearances, when in fact nothing is returned to it. Let's try that later.
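
A sketch of that cosmetic-output trick in Blocks (all names here are made up for illustration): the event gets an extra, hidden output purely so the "Processing" overlay has somewhere to appear.

import gradio as gr

def gen_fn(prompt):
    result = prompt.upper()      # stand-in for the real generation call
    return result, gr.update()   # second value only feeds the cosmetic output

with gr.Blocks() as demo:
    prompt = gr.Textbox(label="Your prompt")
    out = gr.Textbox(label="Result")
    dummy = gr.Textbox(visible=False)  # exists only so "Processing" has a home
    btn = gr.Button("Generate")
    btn.click(gen_fn, [prompt], [out, dummy], queue=False)

demo.launch()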

Journalism and democracy are a kind of safeguard that somehow automatically avoids the worst errors by everyone dragging their feet together. A lot of bad language is part of the specification.

Since Porfirio Diaz we never had true democracy in Mexico; everything was a sham, and who governed was always pre-arranged, because people with power had the resources to ensure they remained in power. AMLO was our first democratic president, and it turned out that to achieve that we just needed people to stop getting bought. They used to fix the elections by buying votes from people and by buying the people who counted the votes; when that diminished enough and AMLO got enough votes, we had our first president elected by the people. And once we had democracy, our very next president is going to be our first woman president. It was funny how the opposition found there just wasn't any man who could stand against her, so they sent another woman, and still did their best to fix the election, but Sheinbaum won by a landslide anyway. Let's hope she's not a traitor, she has promised to follow the current plans, but you never know what people do when they're in power...

30% of our kids are calorie deprived

Around here most of our kids are fat. To fight that, we started putting these black octagons on products that would make people fat:

Etiquetado en MΓ©xico (food labeling in Mexico)

And THEN we made it ILLEGAL to have a mascot on such products! The death of mascots on products is something I never saw coming. The idea was to make versions of the products without those problems and put the mascots on them, so kids would switch to healthier products, but the truth is the healthy versions taste awful. I'm not a fan of these at all, but apparently they are working; people just needed a reminder to buy and eat different things.

If Mexico's national budget has the ability to make money before it runs out, happiness will continue long after this.

Oh yes, they say we have more than enough for all this; it's just that it's being eaten by the politicians' absurd salaries, and out of greed there's great resistance from them to fixing it. There's something like a political civil war around here (over a plan that would make a lot of wealthy people stop winning so much money), and I guess the actions of the next president will decide whether they share more of the cake with the needy.

Let's try that later.

Alright! At least it's the least harmful bug I've seen. That bug and the Stop button feel basically like the price was free! And I'm sleepy now, so see you tomorrow!

Since Porfirio Diaz we never had true democracy in Mexico,

The Latin American political style as I learned about it seems to have stayed generally the same since that point in time. It would be strange to say I envy it, but in fact I am envious that the people have begun to fight, regardless of whether the president is populism incarnate or not.

It sounds good to say that my country has evolved, but it has created a lot of strings attached, is complicated, has a government debt that is unlikely to be repaid in 100 years, and has no easy solutions to think about, except those offered by con artists and religious groups.
While politics was still active in my parents' generation, the current generation has all but given up; whether because of this or not, most people have stopped voting, with turnout as low as 30%.
This creates a negative spiral in which the influence of the organized vote and the votes of religious groups becomes several times greater. The fact that Shinzo Abe was supported by the Soka Gakkai, the Unification Church, and the Japan Conference, and was shot dead by a victim of Unification Church fraud as a result, is symbolic. Incidentally, the shooter himself was originally a supporter of Shinzo Abe.

Let's hope she's not a traitor,

I hope so. Because one mistake can be fatal. This isn't a game, there are no continues or saves.

CSS borders are not reflected no matter what I try. Perhaps they are implicitly overwritten somewhere. Well, it's Gradio, and if I think too much about it, I'll lose. Is this the right background for the text box?
https://huggingface.co/spaces/John6666/YntecDark
And new version of pp.
https://huggingface.co/spaces/John6666/PrintingPress4
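
If they are being clobbered, one workaround (a sketch, assuming an elem_id we add ourselves) is to pass CSS into Blocks and out-rank the component styles with !important:

import gradio as gr

css = """
#prompt-box textarea {
    border: 2px solid #c0a060 !important;  /* assumed border color */
}
"""

with gr.Blocks(css=css) as demo:
    prompt = gr.Textbox(label="Your prompt", elem_id="prompt-box")

demo.launch()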

As for the Dark theme, well, there were only minor improvements left, so I started working on the Light theme. But where did the original purple space go?
The original design of ToyWorld is no longer purple; when I copied and pasted the logic part, it seems I changed the color to Blitz's as collateral damage.
Is there any purple space left somewhere?
https://huggingface.co/spaces/John6666/YntecLight

Is there any purple space left somewhere?

Oh yes, it's right here!: https://huggingface.co/spaces/Yntec/ToyWorldXL

Thanks for all your work. I've been really busy watching TV with the family and haven't been able to look at your code changes yet. My current plans are to create a MiniPrintingPress where I can test the queue changes (this will take longer as I have to make the graphic for its logo, lol), properly release the MemojiRemix model, implement the queue changes into the PrintingPress, make the changes to the DarkTheme, and reboot blitz diffusion so it gets used! Whew!

Hobbies are best done at a leisurely pace.

Oh yes, it's right here!: https://huggingface.co/spaces/Yntec/ToyWorldXL

Okay, now light theme is easy to make. Now all I have to do is to play with Excel.

But there's more left to do than I thought...
I've got a lot of work to do over here.

Haha, the first time I made the Printing Press logo I spent hours on it, this time around I just used https://huggingface.co/spaces/FilipeR/FLUX.1-dev-UI - BAM, done in less than a minute, with the prompt:

This image is a digital graphic featuring the words "Printing Press" prominently displayed in a cursive, golden text style. The text is centrally placed against a rich, dark teal background adorned with sparkly, golden glitter particles, giving the background a festive, celebratory feel. Above the text, in smaller letters it says "Mini". The glitter particles are scattered unevenly across the background, adding a sense of depth and texture. The golden text stands out vividly against the dark teal backdrop, making it the focal point of the image. The font used for "Printing Press" is elegant and flowing, with a slightly whimsical and playful touch. The text for "Mini" is similar but bolder. The overall style of the image is modern and festive, suitable for a range of themes, such as holiday decorations or celebratory events. Keeping the focus solely on the text and glittery background.

Unfortunately, I set the dimensions wrong, creating a vertical banner instead of a horizontal one and ending up with a completely useless graphic!

Printing Press

After learning to do vector graphics and spending 25 years making similar logos, it's incredible to now throw this one in the trash because it's just faster to generate another one properly. It's also the very first time I've generated an image for something other than using it as a toy or as a sample of the models I upload: one to use on a space as its title, the practical use of image generators...

Haha, the first time I made the Printing Press logo I spent hours on it, this time around I just used https://huggingface.co/spaces/FilipeR/FLUX.1-dev-UI - BAM, done in less than a minute, with the prompt:

Wow, this is AI generated? AI is already one step away from taking away human design work. Now all we need is multilingual support.

After learning to do vector graphics and spending 25 years making similar logos, it's incredible to now throw this one in the trash because it's just faster to generate another one properly.

Logos, or rather graphics, are important. If something has graphics, people are less likely to run away, because it looks easy!
It's more fun to play with the logic part, and I, and probably the majority of HF, am happy with that, so it's hard to get into the design... but it's worth a thought to let Flux take care of it.

It's also the very first time I've generated an image for something other than using it as a toy or as a sample of the models I upload: one to use on a space as its title, the practical use of image generators...

I'm so busy collecting them and making them available that I don't have time for practical use either!
Perhaps HF should make more of an effort to get people to just use it.
Development efficiency can change with or without feedback. If there's too much, people usually get tired and quit. The OSS community is usually like that.

Wow, this is AI generated?

Yeah, I can use this workflow because I already had a starting logo that I made. I put that image into JoyCaption here:

https://huggingface.co/spaces/fancyfeast/joy-caption-pre-alpha

That's what gives me a prompt I can use with Flux. I saw what it produced and then made changes to the prompt until it gave a logo I was happy with. I suppose people could just plagiarize an existing logo and use it as a starting point. I suppose the problem is all the useless graphics one produces that go to waste. Since I'm going to delete them anyway, I guess I'll post them here; I had to keep decreasing their height until I got the final version:

Mini Printing Press

Mini Printing Press

Mini Printing Press

Mini Printing Press

My museum of failed attempts, though I guess getting it in 4 tries is acceptable. Failed attempts have never looked this good, indeed. The final prompt was:

This image is a digital graphic featuring the words "Printing Press" prominently displayed in a cursive, golden text style. The text is centrally placed against a rich, galaxy adorned with sparkly, golden glitter particles, giving the background a festive, celebratory feel. Above the text, in smaller letters it says "Mini". The glitter particles are scattered unevenly across the background, with galaxies and stars adding a sense of depth and texture. The golden text stands out vividly against a very detailed teal galaxy with planets of different sizes in the middle behind the text, making it the focal point of the image. The font used for "Printing Press" is elegant and flowing, with a slightly whimsical and playful touch with sparkles on some of the letters. The text for "Mini" is similar but bolder. The overall style of the image is modern and festive, suitable for a range of themes, such as holiday decorations or celebratory events. Keeping the focus solely on the text and glittery background.

AI is already one step away from taking away human design work

I think my abilities were still ahead of Flux until I added planets to the prompt. It was at that point that you'd rather ask it for a logo than wait hours for my version, because I could never make it look that good. What would I do, Photoshop existing planets in there?

It's almost done.

You are quite fast! You can count the things I've done today with one hand and 4 fingers to spare!

Yeah, I can use this workflow because I already had a starting logo that I made. I put that image into JoyCaption here:
https://huggingface.co/spaces/fancyfeast/joy-caption-pre-alpha

I see, you processed the original logo, so there won't be any composition problems.
So right now, to make it from scratch with AI alone, I still need to write a draft, clean it up with i2i, and then turn it into text with i2t...and so on.
If Flux were a little lighter, I could just do the doodle in Flux with i2i with text and it would be done.

JoyCaption has been amazing since it first appeared, a combination of VLM and LLM. The reason why I don't touch JoyCaption is because I am waiting for it to evolve. I'm still waiting to see how it goes since it seems to be pre-alpha. And for my main animation use, WD Tagger is fine.

You are quite fast!

If I could use brighten(), I wouldn't need Excel and it would actually be faster, but I thought it would be faster to adapt to Gradio than to figure out how to trick Gradio.

By the way, it's only just noon here. Is there any remaining programming I should be doing at the moment?

So right now, to make it from scratch with AI alone, I still need to write a draft, clean it up with i2i, and then turn it into text with i2t...and so on.

The secret is plagiarism! Forget about doing it from scratch and with AI alone, just search for an image of what you want in Google, put it through JoyCaption, modify the prompt for your needs, and make Flux draw the prompt:

Big Printing Press Bottle

This is a vibrant photograph capturing a lively urban scene. Dominating the center of the image is a massive, glass-encased replica of a Printing Press bottle, approximately 10 feet tall, with the iconic red Printing Press logo prominently displayed on its side. It says "Printing" at the top of the label and "Press" below it on white cursive letters horizontally on a red background, the bottle has its base resting on the ground and its neck pointing upward, it is filled with black liquid. Surrounding the bottle are several tall, slender palm trees with lush green fronds, adding a tropical feel to the urban environment. The background features a modern shopping mall with large, reflective glass windows, reflecting the blue sky above. The mall's facade is a mix of white and blue, with a prominent sign for "HuggingFace" visible on the right side. Below the bottle and the mall, several vehicles are parked, including a white SUV, adding to the bustling city atmosphere. The sky is clear with a few clouds, suggesting a bright, sunny day. The photograph captures the juxtaposition of natural and urban elements, creating a visually striking and lively scene.

We're at the point AI can now basically draw any picture you think of.

The reason why I don't touch JoyCaption is because I am waiting for it to evolve. I'm still waiting to see how it goes since it seems to be pre-alpha.

It's good enough for these purposes. I rely on it because I just can't describe images at this level of detail to prompt Flux, but the details that matter are going to be added by modifying the prompt anyway.
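
For the record, the same loop can be driven through the API instead of the UI; a rough sketch with gradio_client (the endpoint name is an assumption; Client.view_api() lists the real ones):

from gradio_client import Client, handle_file

captioner = Client("fancyfeast/joy-caption-pre-alpha")
# api_name is assumed here; print(captioner.view_api()) to find the real endpoint
prompt = captioner.predict(handle_file("reference_logo.png"), api_name="/stream_chat")

# edit the returned prompt by hand, then paste it into the Flux space
print(prompt)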

So it's done! This was the testing space: https://huggingface.co/spaces/Yntec/MiniPrintingPress

And this is the Printing Press running without queues! Yay! https://huggingface.co/spaces/Yntec/PrintingPress

The biggest problems implementing these were runtime errors because of... indentation problems? Really now? Sometimes I was missing three spaces on a line, sometimes I had three extra spaces on a line; putting in the right amount of spaces worked.

I have yet to test all the changes with the extra features like seeds to add to my spaces, meanwhile I've added a link to your advanced version at the top so people can use them.

Is there any remaining programming I should be doing at the moment?

Not that I know of, we've checked all the boxes! I'm going to check the styles now; I no longer remember why I prioritized things like this instead of checking them first, but here I go!

Is this the right background for the text box?

Yes, it works!

So I guess what remains is that when one hovers over the Generate button it should look like this:

Hover button right

But it looks like this:

Hover button wrong

But it looks like this:

I had calculated in RGB when I should have calculated in HSV. The Cancel button cannot be brightened (its V was originally 255), so it was made darker instead. It doesn't show up anyway, so it's fine.
https://huggingface.co/spaces/John6666/YntecLight
https://huggingface.co/spaces/John6666/YntecDark
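
For reference, a sketch of the HSV version with the standard library's colorsys; when V is already at its maximum there is no headroom left, hence the darker Cancel button:

import colorsys

def brighten_hsv(color: str, dv: float = 0.15) -> str:
    """Shift the V channel of a #RRGGBB color in HSV space, clamped to [0, 1]."""
    r, g, b = (int(color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    r, g, b = colorsys.hsv_to_rgb(h, s, max(0.0, min(v + dv, 1.0)))
    return f"#{round(r * 255):02x}{round(g * 255):02x}{round(b * 255):02x}"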

just search for an image of what you want in Google, put it through JoyCaption,

Come to think of it, I did the same thing with WD Tagger when I first started playing with it.

But this is a technology that was not possible with only prompts two months ago...progress is rapid.
Even if we could have managed the Printing Press letters nearby, the Hugging Face letters on the building would have been impossible in SD1.5 or SDXL.

If you put a T5 encoder on an SD1.5 UNET, you might be able to do it, but the UNET would be 1.5GB and the TE would be 10GB, so it would be rather pointless overall.
Perhaps a TE with more focused, better-shaped functionality would be meaningful, and maybe one already exists. Now we just have to wait for a de facto candidate for the low end.

YntecDark

Yes! That was it, great job! Gotta love the Synchronize button of cloned spaces. Today's plans include testing the Light version, updating ToyWorldXL with it, and experimenting with borders. I have more ideas, but I'm not as optimistic about them and think they'll have to wait for tomorrow.

But this is a technology that was not possible with only prompts two months ago...progress is rapid.

Yeah, originally I wasn't very impressed with Flux. What is this? I can tell it where things should go? The cat on the right and the dog on the left? But I never cared about where they were. It can draw text? But I had already downloaded all the fonts for that purpose and could insert any text into any picture. My own merges were outperforming it creatively; send some random prompt to IffyMix and it'll easily outperform Flux, with a better composition and a style that Flux could never match. It was when I started [family time - this message will be continued later]

[Message continued] using JoyCaption that I realized Flux's potential. What, I can just get a comic panel from Google, put it through there, modify the prompt, and get this?

Flux comic panel

This image is a digital drawing in a comic book style, featuring two anthropomorphic characters. The background is a gradient of warm colors, transitioning from yellow at the top to orange at the bottom, giving a sense of urgency or excitement. On the left, there is a cat character with yellow fur, an orange baseball cap, and a blue vest. The cat has wide, alarmed eyes and a panicked expression, indicating distress. To the right, there is a red panda character with big eyes, a black nose, and a brown bow tie. The red panda is wearing a brown suit jacket with a white shirt and a blue tie, and has a stern, authoritative expression. The red panda is sitting in a chair, while the cat is standing behind the panda. The cat is speaking in a speech bubble, saying, "Why is this comic so boring??" The red panda replies, "That's not the point!" The word "BOO!" is prominently displayed in teal, bold letters, indicating a sound effect. The overall mood of the image is humorous and comical, with the characters' expressions.

[message to be continued again]

[message continued] that's mind-boggling! Look at the characters' expressions, they have soul! They aren't posing for the camera; it legitimately looks like a comic panel that is part of a larger story. If I wasn't maintaining models and spaces, I'd just be making these and uploading them to funny junk. I have a room full of notebooks with comics I've never published, because they look bad. What we're missing is character permanence: you won't see these characters ever again because you'll get different ones, but for one-offs it's excellent.

It's just missing more styles and artist references, and, well, characters, but maybe in six months image generation will have been solved. Models could just draw in the style of old models, which would only be used to create reference pictures for the new ones.

Now we just have to wait for a de facto candidate for the low end.

People on the low end have managed to run compressed versions of Flux, but, yeah, probably someone could beat Flux with some 8B-parameter model that doesn't have all these issues. Flux carries a lot of bloat that is really doing nothing, and someone could train a smarter model; we don't need a bigger one.

they have soul!

That's what making a text encoder more powerful means. It increases the ability to understand language!
Prompts are words, but text encoders, tokenizers, and UNETs are the linguistic and visual areas of the brain. They are so closely related to each other that you can't just duct-tape them together... (Actually, with Diffusers and ComfyUI, chimeras can be made quickly, but whether they work well enough is another matter.)
And this can probably be done with smaller UNETs, as long as the TE is powerful enough, although the quality will be a little worse.
In other words, even a slightly larger SD1.5- or SDXL-class model should be able to do it, and with that, the use cases would be wide. And maybe someday it will come out.
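
For what it's worth, the Diffusers flavor of that duct-taping looks roughly like this (the model ids are just examples, not a tested pairing; whether the chimera draws anything decent is the open question):

from diffusers import StableDiffusionPipeline

# the donor supplies the language side; the target keeps its own UNET and VAE
donor = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/IffyMix",                  # example fine-tune id
    text_encoder=donor.text_encoder,  # the swapped-in "linguistic area"
    tokenizer=donor.tokenizer,
)
image = pipe("a toy robot reading a comic").images[0]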

some 8B-parameter model that doesn't have all these issues,

I've been experimenting with LLMs in my space, and while the 4B model rarely succeeds in generating a Japanese-English-Japanese translation plus a story, a good 8B model can go far enough, even when related knowledge is required. The 4B model can be used for grammar only.
If it is for TE use only, it should be possible to cut down more functions and tune it smaller. (Not by me, though.)
More than 90% of people would be satisfied if such a thing were built in.

By the way, I got the lamentable news that JoyCaption is not fully NSFW compliant, so I decided to modify it myself.
For now, I've added a function that allows you to swap models, but if there is any function that would be useful, please send me a request.
However, I'm having trouble finding enough slots for Zero GPU spaces, so I'll probably integrate it into one of my other spaces soon; I can't add features that eat up too much space.
https://huggingface.co/spaces/John6666/joy-caption-pre-alpha-mod

https://huggingface.co/spaces/John6666/joy-caption-pre-alpha-mod

Oh, that's very nice. I have no idea how any of that captioning technology works; is it possible to use whatever this guy is using? https://huggingface.co/spaces/Quardo/gpt-4o-mini - I think it provides the most detailed descriptions I've seen, novel-sized descriptions, though maybe it's not compatible.

https://huggingface.co/spaces/Quardo/gpt-4o-mini

To abbreviate the details, JoyCaption is a combination of a VLM, an LLM, and an adapter (the adapter being virtually the only unique part, and the main body). GPT-4o would be the LLM of the three, and although it also has VLM functions, JoyCaption's own components can take care of those.

It might be possible if the author (let alone me) made an adapter for it, but right now it's virtually impossible.
What I am trying to say is that GPT-4o is too big!
JoyCaption currently uses an 8B Llama model, which means 8 billion parameters, while GPT-4o is rumored to have as many as 100 trillion, and we would have to load it into VRAM.πŸ™€
However, if embeddings become available in the Inference API in the future, it may become realistic.

Wow! 8 billion against 100 trillion? I couldn't have guessed, because they don't seem that far apart in their outputs. Actually, I was using JoyCaption instead of GPT-4o because it seemed more concise; GPT-4o is way too verbose and mentions irrelevant things, and Flux usually omits things from the image because it has already forgotten about them by the time it has processed that many words.

In other words, even a slightly larger SD1.5- or SDXL-class model should be able to do it, and with that, the use cases would be wide. And maybe someday it will come out.

Did you know about SDXL's refiner? So, what happened is that SD1.5 was trained on 512x512 images, so what they did was shrink down images of larger resolutions, which made it add detail to pictures as if you had generated a much larger one and shrunk it down. For SDXL they trained on higher resolutions, so they mostly didn't do this, as there was bucketing and newer approaches.

The result was that it could never reach SD1.5's level of detail, so instead they implemented the "refiner", a specialized model to add detail to the picture; a user was expected to generate most of the image with SDXL and then switch models so the refiner would add detail to it. Nobody supported that and the refiner was never further trained, so all the SDXL-based models and finetunes, and then the Pony-based models, never reached that level of detail.

But I think this refiner technology is genius! We should be able to use Flux to create the image composition, so it knows where things go as requested by the prompt; then the latent space is sent to an SD1.5 model to add the style you want that is missing in Flux, which keeps adding to it until the picture is almost done; then it's sent back to Flux to restore anatomical coherence, fix the text, and deliver the final picture!

With such a workflow I don't think we'd need versions of SD1.5 with a better text encoder; we'd just use the models that exist to make the characters and styles and whatever Flux can't do. And if Flux is so good at converting random noise into gorgeous pictures, imagine what it could do with an almost finished picture!

We can already do that with image-to-image, but everything gets messed up because the image is VAE decoded 2 more times than necessary; a 3-step refiner that goes Flux->SD1.5->Flux could solve it all without needing more models.
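
For reference, the SDXL handoff that did ship works on exactly that no-extra-VAE-round-trip principle; in Diffusers it looks roughly like this (the documented base-to-refiner pattern, which the proposed Flux->SD1.5->Flux chain would generalize):

import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae, torch_dtype=torch.float16
).to("cuda")

prompt = "a toy world diorama, highly detailed"
# hand over raw latents at 80% denoised, skipping the extra VAE decode/encode
latents = base(prompt, denoising_end=0.8, output_type="latent").images
image = refiner(prompt, denoising_start=0.8, image=latents).images[0]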

Anyway, I finally have seeds in these spaces!

https://huggingface.co/spaces/Yntec/MiniPrintingPress

https://huggingface.co/spaces/Yntec/PrintingPress

I was never convinced by the "Seed -1 uses a random seed" solution because you never get to see what seed was used, it's a deal-or-no-deal thing. So I have preserved the old functionality on the original page and made a new tab for seeds that uses seed 1 as the default, so you always know what seed your image is coming from. I guess that'll be the reason I stop generating images with random ones. Welcome, seeds!

By the way, when generating there's this animation appearing in the image box:

Huggingface loading

It's two rectangles bouncing around. Where does it come from, and is there a way to change it? Is there a way to make it show a GIF instead? If I could do that, I could solve a lot of problems.

Did you know about SDXL's refiner?

I had no idea about SDXL's Refiner related stuff because it was already treated as a relic of the past when I joined...
That was probably a good idea for the model structure, but I think it was fatal that it was not in line with the ecosystem of the people who actually train the models. Most of them are busy enough training a single UNET, and that is the easiest common language.

I heard that even in SDXL, ADetailer and Upsampler are usually used by people who actually generate and upload images, but I don't know because I don't do that much.

I still see people working on CFG (guidance scale) and pipeline structure improvements within HF, and it seems to be thriving at ComfyUI. But it's not supported by the Serverless Inference API...

I was never convinced by the "Seed -1 uses a random seed" solution because you never get to see what seed was used,

I can also implement returning the Seed that was actually used. The logic part is simple, but it seems like a pain to rewrite each space's app.py; are you okay with that? Also, I need to figure out how to handle the random noise that is currently added: right now, if the Seed is -1, I add random noise; otherwise I just specify the Seed without noise.
Eliminate the random noise and design the Seed to be randomized, returning the Seed that was used? But where would we return it to? The filename? Writing metadata?

Where does it come from,

Gradio. Because it's the Gradio logo! I'll do some research to see if I can change it. You can't accurately display progress, because only the server knows the progress, and the server doesn't tell you.

Oh, there was another way to do it for Seed: just add a shuffle button for Seed, which can be done in a few lines.
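
Roughly these few lines, sketched against the Blocks API:

import random
import gradio as gr

MAX_SEED = 2**32 - 1

def randomize_seed() -> int:
    return random.randint(0, MAX_SEED)

with gr.Blocks() as demo:
    seed = gr.Slider(label="Seed", minimum=0, maximum=MAX_SEED, step=1, value=1)
    seed_rand = gr.Button("Randomize Seed 🎲", size="sm", variant="secondary")
    seed_rand.click(randomize_seed, None, [seed], queue=False)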

Also, maybe there could be presets for Width and Height, like SD1.5 Square, SDXL Wide, or something like that. We may have reached the stage where usability can be considered. This, too, would only require a dictionary plus a few lines of code, as sketched below.
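
Something like this (preset names and sizes are placeholders; the wiring assumes existing width/height sliders):

SIZE_PRESETS = {
    "SD1.5 Square": (512, 512),
    "SDXL Square": (1024, 1024),
    "SDXL Wide": (1344, 768),
}

def apply_preset(name: str):
    return SIZE_PRESETS[name]  # -> (width, height)

# preset = gr.Dropdown(choices=list(SIZE_PRESETS), label="Size preset")
# preset.change(apply_preset, [preset], [width, height])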

And about the Gradio logo: it was never in the manual to begin with. I wanted to punch myself in the face for looking for it just in case.
So I'll read the GitHub source.
https://github.com/gradio-app/gradio/tree/main/gradio/themes

https://github.com/gradio-app/gradio/tree/main/js/icons/src
I tracked it down with the F12 dev tools and searched GitHub. I found an element that looks like it, but it is impossible to specify it in the theme; there is no such item even in the source code.
BTW, there were a few elements that were not in the manual or in existing themes, but not enough to make it worthwhile to specify them.

If I want to specify it with CSS, I may have to target the svelte class and override it.
It was subordinate to the progress_text class, but specifying that class probably won't work.
Mr. multimodalart put in a JSON component, wrote CSS and Python, and made his own progress bar.
I guess it's not possible to do it in an ordinary way, if an HF expert (and an HF staff member) and veteran like him had to do it that way...?
Maybe we should change our mindset: write a note, or place some other element like that.

I added metadata writing to the image, a button to randomize the Seed along with Seed randomization itself, and I eliminated the noise algorithm.
I'll write about the detailed changes later; for now, just a quick test to see that writing metadata doesn't corrupt the image. I'm sure it'll be fine.
By the way, if you open the file in a binary/text editor, you should be able to see the metadata.
https://huggingface.co/spaces/John6666/blitz_diffusion4
https://huggingface.co/spaces/John6666/Diffusion80XX4sg

ChangeLog

  • replace externalmod.py
  • modify app.py

Blitz

from externalmod import gr_Interface_load, save_image, randomize_seed
~
async def infer(model_index, prompt, nprompt="", height=0, width=0, steps=0, cfg=0, seed=-1, timeout=inference_timeout): # whole
~
def gen_fn(model_index, prompt, nprompt="", height=0, width=0, steps=0, cfg=0, seed=-1): # whole
~
seed_rand = gr.Button("Randomize Seed 🎲", size="sm", variant="secondary")
~
seed_rand.click(randomize_seed, None, [seed], queue=False)

HFD

from externalmod import gr_Interface_load, save_image, randomize_seed
~
async def infer(model_str, prompt, nprompt="", height=0, width=0, steps=0, cfg=0, seed=-1, timeout=inference_timeout): # whole
~
def gen_fn(model_str, prompt, nprompt="", height=0, width=0, steps=0, cfg=0, seed=-1): # whole
~
                        seed_rand = gr.Button("Randomize Seed 🎲", size="sm", variant="secondary")
                        seed_rand.click(randomize_seed, None, [seed], queue=False)
~
                        seed_rand2 = gr.Button("Randomize Seed 🎲", size="sm", variant="secondary")
                        seed_rand2.click(randomize_seed, None, [seed2], queue=False)
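
For reference, a sketch of what a save_image helper like the one imported above might do with PIL's PNG text chunks (the actual externalmod.py may differ):

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_image(image: Image.Image, path: str, prompt: str, seed: int) -> str:
    metadata = PngInfo()
    # text chunks travel with the file and are readable even in a plain text editor
    metadata.add_text("parameters", f"{prompt}\nseed: {seed}")
    image.save(path, "PNG", pnginfo=metadata)
    return path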

Alright, I spent a whole hour trying to implement the new functionality while preserving the old functionality for HuggingfaceDiffusion...

an hour later

Apparently the code bases of HuggingfaceDiffusion and Diffusion80XX4sg have become so different that it's no easy task. And, no, I don't want to turn HuggingfaceDiffusion into Diffusion80XX4sg and then have to create another legacy space to preserve its old UI! I can't let old UIs die...

So I'll be back to the original plan of making small code changes one at a time and seeing at what point it breaks. I'll create a MiniHuggingfaceDiffusion space for this because it was killing me to have to fetch all models and SEE IT BUILD FINE just to get an error when clicking the generate button... tomorrow. Unnecessary Queues get to live another day!

Apparently the code bases of HuggingfaceDiffusion and Diffusion80XX4sg have become so different that it's no easy task.

Eh... where did I fork...
I'll take a look at the code in a bit.

Eh... where did I fork...

I think it was when you implemented the PrintingPress as another tab of it.

Oh, I see. Already at that stage. If the changes are only in names or titles, I'll just do a quick port.

https://huggingface.co/spaces/Yntec/HuggingfaceDiffusion/discussions/8
This should work. Now we just need to make sure there are no leaks in the porting.

By the way, I'm sure you're already sleepy there due to the time difference, so you can reply tomorrow.
I'm wondering if the problem of the window size changing after generating an image and not being able to access the elements at the top also occurs there? Do you know of a workaround or something?
I've encountered this in rather a lot of places, not just our space, but HFD has a habit of doing this.

Okay, thanks, I've put red shapes over all the features from 80XX4sg that leaked (I link to your version at the top if people want to use them):

Leaked features

I'll pause the space for now and try again tomorrow; even if it's just about clicking a merge button, I have so little energy left for today that I'll use it to finish releasing LadyNostalgia.

I'm wondering if the problem of the window size changing after generating an image and not being able to access the elements at the top also occurs there? Do you know of a workaround or something?

Hmmm, I may have seen it. I haven't used it much because I like ToyWorld's showing of the 6 pictures better than the gallery, where only one is shown and you have to click to see the others. The page will scroll up only so much and not allow access to the top; maybe it's a bug with the gallery. I didn't know if it was just me.

maybe it's a bug with the gallery

Indeed. There was always a gallery at the crime scene...
But it must be a useful component, which is the trouble. Well, tomorrow. Good night.

I'll create a MiniHuggingfaceDiffusion space for this because it was killing me to have to fetch all models and SEE IT BUILD FINE just to get an error when clicking the generate button... tomorrow.

It's done! https://huggingface.co/spaces/Yntec/MiniHuggingfaceDiffusion This is ToyWorld with seeds... and only 9 models, so I just need to implement the randomize button and relaunch HFD tomorrow. I killed the gallery, die! Die, you! Bad, buggy gallery!

This is me moving...

In slow motion...

You can hear the air as I mooove...

Whuuuuww...

The gallery would be really useful for generating multiple images... if only it wasn't so buggy!😎
Also, be aware that the Image and Gallery data formats are slightly incompatible (Gallery requires a tuple or list). If you find it difficult, please consult me first. It's not a big deal if you know what you're doing.
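
Roughly the difference in question (generate here is a stand-in):

def generate(prompt):  # stand-in for the real inference call
    return f"{prompt}.png"

def gen_one(prompt):
    return generate(prompt)                     # gr.Image: a single image

def gen_six(prompt):
    return [(generate(prompt), f"image {i+1}")  # gr.Gallery: a list, optionally
            for i in range(6)]                  # of (image, caption) tuples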

Also, it would be easier to maintain and improve frequently used extensions if they were included in externalmod.py or similar separate files.
I'd like to put only the UI in app.py, only the models in all_models.py, and only the programs in the others.
I thought about doing it, but the specifications are slightly different between the Blitz and Testgen systems, so I decided to hold off.

HFD is back online! https://huggingface.co/spaces/Yntec/HuggingfaceDiffusion - Testing space: https://huggingface.co/spaces/Yntec/MiniHuggingfaceDiffusion

Finally killed Seed=-1 and made the space start with a random seed along with implementing the random seed button. TODO: implement image metadata, implement in blitz_diffusion.

The gallery would be really useful for generating multiple images

It is useless to me because when an image appears I instantly save it, so I don't need the old generations to appear in the app; I never click the old ones again, so they just take up space. And with seeds I can finally stop obsessing about losing images (I had lost some in the past because I didn't know what seed was used for them, so I could generate new ones, but that one was gone).

I'd like to put only the UI in app.py, only the models in all_models.py, and only the programs in the others.

I'm the other way around; I'd want to keep it as simple as possible and in a single file. The reason I moved to all_models.py was that I failed to keep the model list in app.py when changing the app. I also failed to implement magic prompts in the new UI, so that was the reason I was keeping different spaces for different things, lol! For instance, we're now having to declare MAX_SEED twice, once in the app and then in externalmod; if it was done only in the app we could do it just once. But I guess we just have different programming philosophies. I just forget later where something was and have to check every file to find it; it was my nightmare when I had my own Stockfish version, where having to check search.cpp and then eval.cpp and then threads.cpp made me wonder what they won by having a separate file for each thing, especially when I couldn't just update a variable from one in the other because "it wasn't declared here" or some other error.

I can't wait for natural language code to exist, where I tell it what I want and it makes it work that way, instead of having to figure out how "scale 1/scale 3" is used to change the width of buttons. I could tell it "make the slider 2/3rds of the width, and the button the remaining 1/3rd", and that's how the code would look, imagine that!
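
(For reference, the scale idiom being complained about: siblings split a Row's width in proportion to their scale values, so 2-to-1 gives the 2/3-1/3 layout described.)

import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        slider = gr.Slider(label="Width", scale=2)  # takes 2/3 of the row
        button = gr.Button("Generate", scale=1)     # takes the remaining 1/3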

TODO: implement image metadata,

It should be in there already, just open the PNG in notepad.

we're now having to declare MAX_SEED twice

Actually, declaring it in both places is how it's normally done, but since MAX_SEED is basically a magic number that doesn't change (similar to pi or something like that in math), I just cut corners.
Because it's not a parameter. It's a constant rather than a variable, since it never changes, but there's no way to declare a true constant in Python.

from externalmod import MAX_SEED

So, after you said that, I find it hilarious that the first thing I did was change it from 2**32-1 to 3999999999, because I just couldn't wrap my head around the first one! πŸ˜‚πŸ˜…πŸ€£

Well, it's okay if it works, isn't it?
However, I think it's not good that there are Seeds that should be selectable but are not for the author's convenience.
If you leave the formula as it is, you only have to change two characters even if the Seed goes from the current 32-bit to 64-bit. (I don't know if that day will ever come.) The running speed could have mattered 30+ years ago, but now it is pre-calculated, so it won't change by a nanosecond.

Incidentally, even that formula of mine is actually lazy, and it would be more self-documenting to do it this way. (Note the values differ slightly: np.iinfo(np.int32).max is 2**31-1; to match 2**32-1 you would want np.iinfo(np.uint32).max.)

import numpy as np
MAX_SEED = np.iinfo(np.int32).max  # 2**31 - 1

https://huggingface.co/spaces/InstantX/InstantID/blob/main/app.py

should be selectable but are not for the author's convenience.

Well, part of the fun is designing these spaces and what people can do in them, as a sort of performance. 4294967295 is a technical limit; 3999999999 is a number I decided to put in there because it's the coolest number below it (though, of course, 3693693693 is even cooler, that'd be overdoing it). I made sure to let people know where they can use the extra 294967296 seeds if they really need them!

image.png

I bet they'd make the switch for negative prompts over the missing seeds, though! πŸ˜†

Then it's not a problem.

These ones are now with a seed tab: https://huggingface.co/spaces/Yntec/MiniToyWorld - https://huggingface.co/spaces/Yntec/ToyWorld - I can't tell you how much I appreciate that you brought us seeds!

https://huggingface.co/spaces/Yntec/blitz_diffusion/ finally up to date with blitz_diffusion4

I'm scared to look at the calendar and see how long it took me, so I won't!

The light theme is partly unadjusted, but seems fine?

It's not a feature complicated enough to break things, but have you had any problems displaying images with metadata in them?
To be honest, I feared that Gradio might do something wrong.

With metadata, an image can be transformed from just an image into a dataset that can be used to accurately retrain an image model in the future.
Well, it doesn't have to be that big a deal; it can just be a note of the Seed and prompts.

brought us seeds!

Thanks also to Mr. multimodalart. He has been involved in most of the GUI-related production of HF's image-generating AI.
https://huggingface.co/posts/victor/964839563451127#66d1d7d46accd34f7500d78f
