dn6 committed
Commit: 2e8139b
Parent: c7dbe35

Update model card (#2)


- Upload folder using huggingface_hub (ef7fcf78c36475845ed0c567434ac9c5f962f3bb)

Files changed (1)
  1. README.md +4 -12
README.md CHANGED
@@ -13,18 +13,19 @@ These motion modules are applied after the ResNet and Attention blocks in the St
   <td><center>
   masterpiece, bestquality, sunset.
   <br>
-  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-realistic-doc.gif"
+  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-v3-euler-a.gif"
   alt="masterpiece, bestquality, sunset"
   style="width: 300px;" />
   </center></td>
   </tr>
 </table>
 
+
 The following example demonstrates how you can utilize the motion modules with an existing Stable Diffusion text to image model.
 
 ```python
 import torch
-from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
+from diffusers import MotionAdapter, AnimateDiffPipeline, EulerAncestralDiscreteScheduler
 from diffusers.utils import export_to_gif
 
 # Load the motion adapter
@@ -32,13 +33,10 @@ adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-
 # load SD 1.5 based finetuned model
 model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
 pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter)
-scheduler = DDIMScheduler.from_pretrained(
+scheduler = EulerAncestralDiscreteScheduler.from_pretrained(
     model_id,
     subfolder="scheduler",
-    clip_sample=False,
     beta_schedule="linear",
-    timestep_spacing="linspace",
-    steps_offset=1
 )
 pipe.scheduler = scheduler
 
@@ -62,9 +60,3 @@ output = pipe(
 frames = output.frames[0]
 export_to_gif(frames, "animation.gif")
 ```
-
-<Tip>
-
-AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting `clip_sample=False` in the scheduler as this can also have an adverse effect on generated samples.
-
-</Tip>
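For reference, here is how the snippet reads with this change applied, assembled into one runnable block. The diff does not show the adapter checkpoint id in full (the hunk header truncates it) or the `pipe(...)` call, so those parts are a sketch: the adapter id `guoyww/animatediff-motion-adapter-v1-5-2` and the generation arguments below are assumptions based on typical AnimateDiffPipeline usage in diffusers, not taken from this commit.

```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, EulerAncestralDiscreteScheduler
from diffusers.utils import export_to_gif

# Load the motion adapter. The checkpoint id is truncated in the diff's hunk
# header ("guoyww/animatediff-motion-adapter-v1-5-..."); "-2" is an assumption.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")

# load SD 1.5 based finetuned model
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter)
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    beta_schedule="linear",
)
pipe.scheduler = scheduler

# The generation call is elided in the diff; these arguments are assumed,
# standard AnimateDiffPipeline parameters.
output = pipe(
    prompt="masterpiece, bestquality, sunset",
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```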
 
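As a usage note on the scheduler swap in the second hunk: diffusers schedulers can also be derived from the pipeline's existing configuration via `from_config`, which accepts the same keyword overrides as `from_pretrained`. A minimal sketch, assuming a `pipe` built as in the example above:

```python
from diffusers import EulerAncestralDiscreteScheduler

# Equivalent to loading from the model repo with from_pretrained:
# reuse the pipeline's current scheduler config and override beta_schedule.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear"
)
```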