Spaces: Running on A10G
Anonymous-sub committed
Commit f9fde4d • 1 Parent(s): 2d1c3ea
Update app.py
app.py CHANGED
@@ -612,12 +612,12 @@ DESCRIPTION = '''
 ### This space provides the function of key frame translation. Full code for full video translation will be released upon the publication of the paper.
 ### To avoid overload, we set limitations to the maximum frame number and the maximum frame resolution.
 ### Tips:
-1. This method cannot handle large or quick motions where the optical flow is hard to estimate. Videos with stable motions are preferred
+1. This method cannot handle large or quick motions where the optical flow is hard to estimate. **Videos with stable motions are preferred**.
 2. Pixel-aware fusion may not work for large or quick motions.
-3.
-4.
-5.
-6.
+3. Try different color-aware AdaIN settings, or disable it entirely, to avoid color jittering.
+4. Use the `revAnimated_v11` model for non-photorealistic styles and the `realisticVisionV20_v20` model for photorealistic styles.
+5. To use your own SD/LoRA model, you may clone the space and specify your model in [sd_model_cfg.py](https://huggingface.co/spaces/Anonymous-sub/Rerender/blob/main/sd_model_cfg.py).
+6. This method is based on the original SD model. You may need to [convert](https://github.com/huggingface/diffusers/blob/main/scripts/convert_diffusers_to_original_stable_diffusion.py) Diffusers/Automatic1111 models to the original format.
 '''
 
 block = gr.Blocks().queue()
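Regarding tip 5 above: the sketch below shows roughly what a custom entry in sd_model_cfg.py might look like. It assumes the config is a plain Python dict mapping the model name shown in the UI to a local checkpoint path; the dict name, keys, and file paths here are illustrative only, so check the linked file for the actual structure.

```python
# sd_model_cfg.py -- illustrative sketch only; see the file linked in tip 5
# for the space's actual contents.
# Assumption: a dict maps a display name (shown in the Gradio UI) to the path
# of an original-format Stable Diffusion checkpoint.
model_dict = {
    'Stable Diffusion 1.5': '',  # empty path: fall back to the base SD 1.5 weights
    'revAnimated_v11': 'models/revAnimated_v11.safetensors',
    'realisticVisionV20_v20': 'models/realisticVisionV20_v20.safetensors',
    # hypothetical entry for your own model:
    'my_custom_model': 'models/my_custom_model.safetensors',
}
```

If the space builds its model dropdown from this dict (an assumption), the new name should appear after the cloned space restarts.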
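Regarding tip 6: a minimal sketch of invoking the linked Diffusers conversion script from Python, assuming a Diffusers-format model directory on disk. The input and output paths are placeholders; `--model_path` and `--checkpoint_path` are the script's arguments, and `--half` and `--use_safetensors` are optional flags.

```python
# Sketch: convert a Diffusers-format model to an original-format SD checkpoint
# using the script linked in tip 6 (paths below are placeholders).
import subprocess

subprocess.run(
    [
        "python", "convert_diffusers_to_original_stable_diffusion.py",
        "--model_path", "path/to/diffusers_model",            # local Diffusers model directory
        "--checkpoint_path", "models/my_model.safetensors",   # converted checkpoint output
        "--half",             # optional: store weights in fp16
        "--use_safetensors",  # optional: write a .safetensors file
    ],
    check=True,
)
```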