xformers installation blocks attention controller
Hey,
I realized that setting up the environment with requirements.txt installs xformers, which silently disables the attention controller. How? On the line linked below, the if condition never holds when xformers is installed, because net_.__class__.__name__ never equals 'CrossAttention'; for CrossAttention blocks it becomes 'MemoryEfficientCrossAttention' instead. Just wanted to warn other people against making the same mistake.
https://huggingface.co/spaces/Anonymous-sub/Rerender/blob/main/src/ddim_v_hacked.py#L77
PS: The Space itself also seems to contain this possible bug.
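To see the failure mode concretely, here is a minimal, self-contained sketch of the problem. The class definitions and the registration function are stand-ins (not the real ldm modules or the actual code in ddim_v_hacked.py), but they show why a name-equality check skips the blocks that xformers swaps in:

```python
# Stand-in classes: when xformers is installed, CrossAttention modules are
# replaced by MemoryEfficientCrossAttention, so __class__.__name__ changes.
class CrossAttention:
    pass

class MemoryEfficientCrossAttention:
    pass

def register_attention_controller(net_, registered):
    # Fragile check, analogous to the linked line: it never matches
    # when the module class has been swapped by xformers.
    if net_.__class__.__name__ == 'CrossAttention':
        registered.append(net_)

registered = []
register_attention_controller(CrossAttention(), registered)
register_attention_controller(MemoryEfficientCrossAttention(), registered)
# Only the vanilla block was registered; the xformers block was skipped.
assert len(registered) == 1
```

A membership check such as `name in ('CrossAttention', 'MemoryEfficientCrossAttention')` would match both cases.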
Hi. I wonder how you solved this bug.
By adding a condition like the following?
if net_.__class__.__name__ == 'CrossAttention' or net_.__class__.__name__ == 'MemoryEfficientCrossAttention':
Since I am not deeply familiar with the internals of MemoryEfficientCrossAttention, I avoided doing that. Instead, I simply uninstalled xformers to resolve the issue.
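As an environment-level guard matching this workaround, one could check at startup whether xformers is importable at all. This is only a sketch, not part of the original Space:

```python
import importlib.util

def xformers_installed() -> bool:
    """Return True if the xformers package is importable in this environment."""
    return importlib.util.find_spec("xformers") is not None

if xformers_installed():
    # Warn rather than crash: the attention controller would otherwise be
    # disabled silently, which is the hard-to-notice part of this bug.
    print("Warning: xformers detected; the attention controller in "
          "ddim_v_hacked.py will be silently disabled.")
```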
The problem I am having now is that some of the output keyframes contain artifacts (with the same input and settings). They look as if they attend excessively to other frames; any ideas on how to deal with this?
As an example, below are the same-numbered keyframe outputs (before and after working around this issue) and the corresponding input frame:
I see. The problem may be that your Denoising strength is too large.
Since our method is based on the optical flow of the input video, if an output frame differs too much from the input frame in structure (like the mouth in your case), the optical flow of the input video will not match that of the output video, and artifacts occur.
To relieve this problem, try to preserve the structures with a smaller Denoising strength or a larger ControlNet strength.
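The two knobs move in opposite directions: lowering Denoising strength keeps the output closer to the input frames, while raising ControlNet strength keeps structures pinned to the control signal. The parameter names and values below are purely illustrative, not the exact fields in the Rerender settings:

```python
# Hypothetical tuning sketch: names and values are illustrative starting
# points, not the actual Rerender configuration keys.
structure_preserving_settings = {
    "denoising_strength": 0.4,   # lower -> output follows input frames more closely
    "controlnet_strength": 1.2,  # higher -> structures track the control signal
}
print(structure_preserving_settings)
```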