Space Broke
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2405, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 914, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "/home/user/app/convert_url_to_diffusers_sdxl_gr.py", line 347, in convert_url_to_diffusers_repo
new_path = convert_url_to_diffusers_sdxl(dl_url, civitai_key, hf_token, is_upload_sf, half, vae, scheduler, lora_dict, False)
File "/home/user/app/convert_url_to_diffusers_sdxl_gr.py", line 286, in convert_url_to_diffusers_sdxl
pipe.scheduler = sconf[0].from_config(pipe.scheduler.config, **sconf[1])
AttributeError: 'NoneType' object has no attribute 'scheduler'
I've merged your commits and effectively rolled back. This is the first time I've seen this bug, even in relation to Gradio 5.
I'll have to debug it again. Hopefully it's a simple problem.
It wasn't Gradio 5's fault. It was a false accusation!
I made a mistake in the branching process yesterday when I adapted the LoRA specs to the new PEFT. 😭
Mistakes happen 😂!
Also, are you able to add support for LyCORIS, specifically LoHA, to be merged into the checkpoint/Diffusers-format model?
hehehe.
LyCORIS specifically LoHA
PEFT originally had no option to distinguish between LoRA, LoHA, LOCON, LyCORIS and so on; none of them are treated separately.
I'm not sure whether PEFT implicitly absorbs the differences or whether there is essentially no structural difference to begin with.
The PEFT author wrote that the LoRA handling in Diffusers depends on PEFT.
In my experience, LyCORIS at least loads and works in Diffusers, even if I can't say exactly what it is internally, so the others are probably all usable as they are.
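For what it's worth, the quickest check is just to try the file through the standard Diffusers LoRA loader; this is only a sketch, with the repo id and filename as placeholders:

import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder base model; any SDXL pipeline works for this check.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# load_lora_weights() goes through the PEFT-backed loader; if it understands the
# LyCORIS/LoHA layout it loads, otherwise it raises and we know it needs conversion.
try:
    pipe.load_lora_weights("my_lycoris_loha.safetensors")  # placeholder filename
except Exception as e:
    print("Not loadable as-is:", e)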
Got this while trying to convert a checkpoint
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 495, in from_single_file
loaded_sub_model = load_single_file_sub_model(
File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 168, in load_single_file_sub_model
raise SingleFileComponentError(
diffusers.loaders.single_file_utils.SingleFileComponentError: Failed to load CLIPTextModel. Weights for this component appear to be missing in the checkpoint.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2405, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 914, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
response = f(*args, **kwargs)
File "/home/user/app/convert_url_to_diffusers_sdxl_gr.py", line 347, in convert_url_to_diffusers_repo
new_path = convert_url_to_diffusers_sdxl(dl_url, civitai_key, hf_token, is_upload_sf, half, vae, scheduler, lora_dict, False)
File "/home/user/app/convert_url_to_diffusers_sdxl_gr.py", line 265, in convert_url_to_diffusers_sdxl
pipe = StableDiffusionXLPipeline.from_single_file(new_file, use_safetensors=True, torch_dtype=torch.float16)
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 510, in from_single_file
raise SingleFileComponentError(
diffusers.loaders.single_file_utils.SingleFileComponentError: Failed to load CLIPTextModel. Weights for this component appear to be missing in the checkpoint.
Please load the component before passing it in as an argument to `from_single_file`.
text_encoder = CLIPTextModel.from_pretrained('...')
pipe = StableDiffusionXLPipeline.from_single_file(<checkpoint path>, text_encoder=text_encoder)
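If the checkpoint genuinely lacks the CLIP text-encoder weights, one workaround along the lines the message suggests is to borrow those components from the base SDXL repo and pass them in. This is only a sketch; whether the donor weights match what the checkpoint expects is an assumption, and the checkpoint path is a placeholder:

import torch
from transformers import CLIPTextModel, CLIPTextModelWithProjection
from diffusers import StableDiffusionXLPipeline

base = "stabilityai/stable-diffusion-xl-base-1.0"  # assumed donor repo for the missing encoders
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder", torch_dtype=torch.float16)
text_encoder_2 = CLIPTextModelWithProjection.from_pretrained(base, subfolder="text_encoder_2", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_single_file(
    "downloaded_checkpoint.safetensors",  # placeholder path for the checkpoint being converted
    text_encoder=text_encoder,
    text_encoder_2=text_encoder_2,
    use_safetensors=True,
    torch_dtype=torch.float16,
)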
I looked it up, and it seems like LyCORIS support in general will take a while for Diffusers and PEFT. They did add support for "Kohya-Styled LoRAs", which should have included LyCORIS, but apparently it doesn't.
So I should try this when running python3 launch.py in the subprocess?
# if args.always_cpu:
#     cpu_state = CPUState.CPU
cpu_state = CPUState.CPU  # force CPU mode unconditionally instead of gating it on the flag
I think this is all we can do.
Still have the same GPU error.
I'd say upload all the module files and try to implement this.
I'm not sure what I'm doing wrong.
It's 23:00 here, so I'll continue tomorrow. But anyway, WebUI is trickier than we imagined...🥶
I wonder if options are being used in place of environment variables.
tomorrow
Take your time!
But anyway, WebUI is trickier than we imagined...🥶
Very true
I wonder if options are being used in place of environment variables.
Well, we do have COMMANDLINE_ARGS, but it doesn't seem to have any effect in my testing.
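For reference, the way I'd expect COMMANDLINE_ARGS to take effect is by being present in the environment of the subprocess that runs launch.py. A minimal sketch; the flags and working directory are placeholders, and whether the launcher honors the variable in this setup is exactly what's in question:

import os
import subprocess

env = os.environ.copy()
# Extra WebUI flags passed via the environment instead of argv (placeholder flags).
env["COMMANDLINE_ARGS"] = "--skip-torch-cuda-test --use-cpu all"

# Run launch.py from inside the WebUI directory so its relative paths and imports resolve.
subprocess.run(["python3", "launch.py"], cwd="stable-diffusion-webui", env=env, check=True)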
I experimented a bit: uploading the WebUI package directly to HF doesn't work, but a git clone to the same path does.
I'm not sure why, but I'm starting to understand a little.
Maybe it's just a trivial quirk of HF's Spaces specs, something to do with line-ending codes or the like.
https://huggingface.co/spaces/John6666/webui_test2
Edit:
I tried both LF and CRLF for the newline codes, but both failed. Files that have been placed in the Space from the beginning fail to import, even when copied there with shutil.copytree() from another location.
There is no difference between the git clone files and the files uploaded to Spaces when comparing them with the diff command.
The symptoms I'm seeing now are similar to the following, though this is just the symptom.
Anyway, this is not about logic or code. It is a problem with the execution environment.
https://stackoverflow.com/questions/44484082/python-cant-find-module-when-started-with-sudo
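Since the files diff as identical, the next thing worth comparing is what the interpreter itself sees in the working case versus the failing one; just a generic diagnostic sketch:

import os
import sys

# Dump what this Python process actually sees; a difference here, rather than in the
# files, would explain why imports work from a git clone but not from an uploaded copy.
print("executable:", sys.executable)
print("cwd:", os.getcwd())
print("PYTHONPATH:", os.environ.get("PYTHONPATH"))
print("sys.path:")
for p in sys.path:
    print("  ", p)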
I experimented a bit: uploading the WebUI package directly to HF doesn't work, but a git clone to the same path does.
You should only upload the full module folder, since a lot of them are missing.
Anyway, this is not about logic or code. It is a problem with the execution environment.
That's weird, I'll look into it more.
You should only upload the full module folder, since a lot of them are missing.
I've tried uploading both the entire contents of the WebUI zip and the git clone. Neither worked if I put them there beforehand. And the file contents are the same as far as I can detect with diff, so I think the problem is a filesystem attribute, which Python is actually being run, some sort of permission, or some other parameter that is more implicit.
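To poke at those more implicit parameters, the metadata (rather than the content) of the two copies can be compared directly; a rough sketch with placeholder paths:

import os
import stat

def describe(path):
    st = os.stat(path)
    # Permission bits, owner, and timestamps are exactly what a content diff ignores.
    return stat.filemode(st.st_mode), st.st_uid, st.st_gid, int(st.st_mtime)

# Placeholder paths: the same file inside the pre-uploaded tree and the git-cloned tree.
print("uploaded:", describe("uploaded_webui/modules/launch_utils.py"))
print("cloned:  ", describe("cloned_webui/modules/launch_utils.py"))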
I suspected it might be the GitHub assets, but they're not even in Forge's 4 GB package, so probably not.
Edit:
https://huggingface.co/spaces/John6666/webui_test3
It worked! The code that didn't work yesterday is now working...
Well, something must have broken in the HF settings. If it works, it's OK.
Edit:
Anyway, now we can get on with coding instead of fighting with Python. It's so stressful having to do something other than programming in order to program...
Edit:
Now we are interrupted by an incompatible Gradio component in the initialization function...
It seems to reference a component that doesn't exist in 4.x, so unless I find a good route around it, we're stuck there.
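I don't know yet exactly which component it is, but the usual bypass for this kind of gap is a feature check with a fallback; a purely hypothetical sketch, with the component name as a stand-in:

import gradio as gr

# If this Gradio version no longer provides the component the init code expects,
# fall back to something that does exist in 4.x. "LegacyComponent" is a stand-in name.
Component = getattr(gr, "LegacyComponent", None) or gr.Textbox
widget = Component(label="fallback example")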
Forge stops with another error. Currently, WebUI is easier to get up and running.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/16529