Cannot train SD3 with LoRA
Hello! I'm trying to launch train_text_to_image_lora.py, but I get this error:
Entry Not Found for url: https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/resolve/main/unet/config.json.
How can I fix it?
I launch it with this command:
accelerate launch train_text_to_image_lora.py \
  --pretrained_model_name_or_path="-" \
  --dataset_name="-" \
  --dataloader_num_workers=20 \
  --resolution=512 \
  --center_crop \
  --random_flip \
  --train_batch_size=16 \
  --gradient_accumulation_steps=1 \
  --num_train_epochs=100 \
  --learning_rate=1e-04 \
  --max_grad_norm=1 \
  --lr_scheduler="cosine" \
  --lr_warmup_steps=0 \
  --output_dir="-" \
  --report_to=wandb \
  --checkpointing_steps=5000 \
  --validation_prompt="-" \
  --seed=1337
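For what it's worth, the same error can be reproduced outside the training script by requesting that file directly. A minimal sketch using huggingface_hub (it assumes the repo is already accessible with your token):

from huggingface_hub import hf_hub_download

# Request the same file the training script looks for; this reproduces
# the "Entry Not Found" error shown above.
hf_hub_download(
    repo_id="stabilityai/stable-diffusion-3-medium-diffusers",
    filename="unet/config.json",
)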
You need to provide your User Access Token to download files via huggingface_hub, because this repo requires you to share contact details before you can access it. You can do this with the HF_TOKEN environment variable: run export HF_TOKEN=[Token] in your terminal, where [Token] is your access token. On Windows, use set HF_TOKEN=[Token] instead.
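If you'd rather not manage the environment variable yourself, you can also log in once via huggingface_hub, which stores the token locally for later downloads. A minimal sketch (the token string is just a placeholder):

from huggingface_hub import login

# Saves the token to the local Hugging Face credentials cache so that
# subsequent downloads, including those made by the training script,
# authenticate automatically.
login(token="hf_xxxxxxxxxxxxxxxx")  # replace with your User Access Token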
Hello! Thank you for your answer. Unfortunately, with HF_TOKEN exported I still get the same issue :(
[rank1]: Entry Not Found for url: https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/resolve/main/unet/config.json.
Maybe it is a problem with the repo https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers itself? There is no unet/config.json file there.
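A quick way to double-check the repo contents from Python, as a small sketch with huggingface_hub (it just prints the repo's top-level folders):

from huggingface_hub import list_repo_files

# List all files in the SD3 repo and print its top-level folders;
# there is a transformer/ folder but no unet/ folder.
files = list_repo_files("stabilityai/stable-diffusion-3-medium-diffusers")
print(sorted({f.split("/")[0] for f in files if "/" in f}))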
Yes, I am having the same issue. Maybe it is not ready yet, because if you look at the repos of the earlier versions, they all have this file :/ I am confused and can't find many resources online. Please let me know if you manage to solve it.
Any update?