AttributeError: 'LCMSchedulerOutput' object has no attribute 'pred_original_sample'
Hi there,
I was trying to run the script from the model card and got the following output. What did I do wrong? Thanks!
D:\ComfyUI\venv\lib\site-packages\transformers\utils\hub.py:124: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
  warnings.warn(
D:\ComfyUI\venv\lib\site-packages\diffusers\models\transformers\transformer_2d.py:34: FutureWarning: Transformer2DModelOutput is deprecated and will be removed in version 1.0.0. Importing Transformer2DModelOutput from diffusers.models.transformer_2d is deprecated and this will be removed in a future version. Please use from diffusers.models.modeling_outputs import Transformer2DModelOutput, instead.
  deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
config.json: 100% 769/769 [00:00<00:00, 24.5kB/s]
diffusion_pytorch_model.safetensors: 100% 2.45G/2.45G [01:14<00:00, 31.5MB/s]
model_index.json: 100% 452/452 [00:00<?, ?B/s]
Fetching 15 files: 100% 15/15 [09:39<00:00, 93.78s/it]
text_encoder/config.json: 100% 907/907 [00:00<00:00, 29.1kB/s]
scheduler/scheduler_config.json: 100% 655/655 [00:00<00:00, 21.0kB/s]
(…)ext_encoder/model.safetensors.index.json: 100% 19.9k/19.9k [00:00<00:00, 1.28MB/s]
tokenizer/added_tokens.json: 100% 2.59k/2.59k [00:00<00:00, 1.15MB/s]
model-00001-of-00004.safetensors: 100% 4.99G/4.99G [09:21<00:00, 21.1MB/s]
model-00004-of-00004.safetensors: 100% 4.19G/4.19G [05:26<00:00, 32.7MB/s]
model-00003-of-00004.safetensors: 100% 4.87G/4.87G [09:38<00:00, 35.9MB/s]
model-00002-of-00004.safetensors: 100% 5.00G/5.00G [06:40<00:00, 29.4MB/s]
tokenizer/tokenizer_config.json: 100% 20.5k/20.5k [00:00<00:00, 931kB/s]
transformer/config.json: 100% 718/718 [00:00<00:00, 60.9kB/s]
tokenizer/special_tokens_map.json: 100% 2.54k/2.54k [00:00<00:00, 151kB/s]
spiece.model: 100% 792k/792k [00:00<00:00, 2.66MB/s]
vae/config.json: 100% 759/759 [00:00<00:00, 152kB/s]
diffusion_pytorch_model.safetensors: 100% 335M/335M [00:12<00:00, 29.6MB/s]
Loading pipeline components...: 100% 5/5 [00:17<00:00, 5.28s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100% 4/4 [00:16<00:00, 4.36s/it]
  0% 0/1 [00:13<?, ?it/s]
D:\ComfyUI\venv\lib\site-packages\diffusers\models\attention_processor.py:1584: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
hidden_states = F.scaled_dot_product_attention(
AttributeError Traceback (most recent call last)
Cell In[1], line 15
13 pipe.scheduler.config.prediction_type = "v_prediction"
14 generator = torch.manual_seed(318)
---> 15 imgs = pipe(prompt="Pirate ship trapped in a cosmic maelstrom nebula, rendered in cosmic beach whirlpool engine, volumetric lighting, spectacular, ambient lights, light pollution, cinematic atmosphere, art nouveau style, illustration art artwork by SenseiJaye, intricate detail.",
16 num_inference_steps=1,
17 num_images_per_prompt = 1,
18 generator = generator,
19 guidance_scale=1.,
20 )[0]
21 imgs[0]
File D:\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File D:\ComfyUI\venv\lib\site-packages\diffusers\pipelines\pixart_alpha\pipeline_pixart_alpha.py:942, in PixArtAlphaPipeline.__call__(self, prompt, negative_prompt, num_inference_steps, timesteps, sigmas, guidance_scale, num_images_per_prompt, height, width, eta, generator, latents, prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask, output_type, return_dict, callback, callback_steps, clean_caption, use_resolution_binning, max_sequence_length, **kwargs)
939 # compute previous image: x_t -> x_t-1
940 if num_inference_steps == 1:
941 # For DMD one step sampling: https://arxiv.org/abs/2311.18828
--> 942 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).pred_original_sample
943 else:
944 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
AttributeError: 'LCMSchedulerOutput' object has no attribute 'pred_original_sample'
This is a diffusers issue: by default, their PixArt-alpha pipeline handles one-step inference with DMD, which is not appropriate here. It should be reported in their repo.
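For reference, here is a minimal check of the mismatch, assuming diffusers still defines `LCMSchedulerOutput` in `diffusers.schedulers.scheduling_lcm` with `prev_sample` and `denoised` fields (true in the versions I have looked at; verify against yours):

```python
# Inspect the fields that LCMScheduler's step() output actually carries.
from diffusers.schedulers.scheduling_lcm import LCMSchedulerOutput

print(list(LCMSchedulerOutput.__dataclass_fields__))
# e.g. ['prev_sample', 'denoised'] -- there is no 'pred_original_sample',
# which is the attribute the one-step (DMD) branch of PixArtAlphaPipeline reads.
```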
Until diffusers fixes this, use an older version of diffusers to run our YOSO with one-step inference. Multi-step inference works without bugs.
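In the meantime, here is a minimal sketch of the multi-step workaround. The loading code is an assumption based on the files downloaded above; "REPO_ID" is a placeholder for the YOSO checkpoint named in the model card, and only the generation call mirrors the script from the traceback.

```python
import torch
from diffusers import PixArtAlphaPipeline

# Placeholder repo id -- substitute the checkpoint from the model card.
pipe = PixArtAlphaPipeline.from_pretrained("REPO_ID", torch_dtype=torch.float16).to("cuda")
pipe.scheduler.config.prediction_type = "v_prediction"  # same as line 13 of the original script

generator = torch.manual_seed(318)
imgs = pipe(
    prompt="Pirate ship trapped in a cosmic maelstrom nebula, ...",  # full prompt as in the script
    num_inference_steps=2,  # any value > 1 skips the one-step DMD branch that raises the AttributeError
    num_images_per_prompt=1,
    generator=generator,
    guidance_scale=1.0,
)[0]
imgs[0].save("yoso_multistep.png")
```

If you need true one-step sampling, pin an older diffusers release as suggested above (no specific version is given here) until the pipeline is fixed upstream.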