Error when running

#38
by caldwecg - opened

Hello, I am unable to get the safety checker to work when running the regular StableDiffusionPipeline, and when I try to run StableDiffusionPipelineSafe I get the error below. I am running on an AWS instance.


```
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(24, 4096, 1, 40) (torch.float32)
    key : shape=(24, 4096, 1, 40) (torch.float32)
    value : shape=(24, 4096, 1, 40) (torch.float32)
    attn_bias : <class 'NoneType'>
    p : 0.0
cutlassF is not supported because:
    device=cpu (supported: {'cuda'})
flshattF is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
tritonflashattF is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 40
```
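Reading the traceback: every CUDA-capable backend is rejected because the tensors are on the CPU, and the one CPU-capable backend (smallkF) is rejected because the per-head embedding dimension is 40 (it only handles up to 32). A minimal illustrative sketch of those constraints — this is not xformers' actual dispatch code, just the conditions the traceback reports restated as a function:

```python
def supported_backends(device: str, dtype: str, embed_per_head: int):
    """Restate the backend constraints from the traceback above.

    Illustrative only -- NOT xformers' real dispatch logic; backend
    names are copied from the error message.
    """
    half = dtype in ("float16", "bfloat16")
    backends = []
    if device == "cuda":
        backends.append("cutlassF")           # CUDA only, any dtype
    if device == "cuda" and half:
        backends.append("flshattF")           # CUDA + fp16/bf16
        backends.append("tritonflashattF")    # CUDA + fp16/bf16
    if dtype == "float32" and embed_per_head <= 32:
        backends.append("smallkF")            # small head dims only
    return backends

# The failing inputs from the traceback: CPU, float32, head dim 40
print(supported_backends("cpu", "float32", 40))  # -> []
```

In other words, even with a correct install, memory-efficient attention needs the pipeline on a CUDA device (e.g. calling `.to("cuda")` on the pipeline before enabling it).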

Hi, did you solve this issue? If so, please help.

Your PyTorch version doesn't match your xformers build; AWS SageMaker currently uses PyTorch 2.0.
This is probably your error:

```
xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 1.13.1+cu117 with CUDA 1107 (you have 2.0.0+cu117)
    Python 3.9.16 (you have 3.9.16)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
```
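The mismatch that log complains about can be checked with a plain string comparison. A hedged sketch (the function name and approach are mine, not an official API) that compares an installed version like `2.0.0+cu117` against the version a wheel was built for:

```python
def builds_match(built_for: str, installed: str) -> bool:
    """Compare version strings like '1.13.1+cu117' vs '2.0.0+cu117'.

    Illustrative only: splits off the local build tag (the '+cuXXX' part)
    and requires both the release number and the CUDA tag to agree.
    """
    def split(version: str):
        release, _, local = version.partition("+")
        return release, local

    return split(built_for) == split(installed)

# The situation from the warning above: xformers built for 1.13.1,
# but PyTorch 2.0.0 is installed.
print(builds_match("1.13.1+cu117", "2.0.0+cu117"))  # -> False
print(builds_match("1.13.1+cu117", "1.13.1+cu117"))  # -> True
```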

Did it work for you? I reinstalled it and the error still appears.

You need to downgrade PyTorch from 2.0 to 1.13.1, because xFormers was built for PyTorch 1.13.1+cu117 with CUDA 11.7.
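A sketch of what that downgrade might look like in a pip-based environment. The xformers version pairing below is an assumption; check the xformers release notes for the build that matches torch 1.13.1 before running:

```shell
# Assumption: pip environment; the cu117 wheel index is PyTorch's
# extra index for CUDA 11.7 builds.
pip uninstall -y torch xformers
pip install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
pip install xformers==0.0.16  # pairing with 1.13.1 is an assumption; verify first
```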

Uninstalling xformers can also solve this (I'm on torch 1.13.1 with CUDA 11.8).
