---
license: apache-2.0
---

# Motion LoRA Model Overview for Vertigo Effect Simulation

This document provides a comprehensive overview of a specialized Motion LoRA model designed for simulating dynamic scenes with an emphasis on the vertigo effect, achieved with a dolly zoom technique. The model is intended for researchers and developers interested in advanced motion models, particularly for cinematic effects such as the one famously used in "Jaws".

## License and User Agreement

- **Apache License 2.0:** This model is released under the Apache License 2.0, and users must comply with its terms.
- **Commercial Usage Disclaimer:** Users assume responsibility for any commercial legal disputes related to the use of this model.
- **Consent by Installation:** By installing this model, users agree to the stated terms and conditions.

## Model Demonstrations

## Specific Configuration for the Vertigo Effect

### Model Training Setup

- **Resolution:** 512x384, optimized for capturing the intricate details of the vertigo effect.
- **LoRA Rank:** 64, chosen to enhance learning efficiency and model adaptability.
- **Training Duration:** 16 frames across 5 sequences, with intermediate checkpoints saved at 200 and 300 steps for detailed refinement.
- **Video Input:** Trained on a single clip focused on accurately reproducing the vertigo effect through detailed visual inputs. A configuration sketch covering these settings follows this list.
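The training script itself is not part of this repository; the snippet below is only a minimal sketch of how the settings above could be gathered into a configuration, assuming a plain Python dictionary. All key names and the clip filename are illustrative rather than taken from a specific training framework.

```python
# Hypothetical Motion LoRA training configuration mirroring the settings above.
# Key names and the clip filename are illustrative, not tied to a specific framework.
train_config = {
    "resolution": (512, 384),         # training resolution (width x height)
    "lora_rank": 64,                  # LoRA rank controlling adapter capacity
    "num_frames": 16,                 # frames sampled per training sequence
    "num_sequences": 5,               # sequences drawn from the single input clip
    "checkpoint_steps": [200, 300],   # intermediate checkpoints kept for refinement
    "video_inputs": ["vertigo_dolly_zoom.mp4"],  # single source clip (hypothetical name)
}
```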

### Training Configuration

- **Training Model:** realisticvisionv51, a base model known for generating realistic and detailed visuals.
- **Training Prompt:** "A man, vertigo effect, dolly zoom", directing the model to learn and replicate the dynamic dolly zoom effect associated with the vertigo sensation.
- **Text Configuration:** Uses tailored text configurations to guide the model toward footage that showcases the vertigo effect with a dolly zoom on a man. The base model and prompt could be attached to the same configuration sketch, as shown below.
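Continuing the hypothetical sketch above, the base model and training prompt might be attached to the same configuration. The Hugging Face repo id assumed here for Realistic Vision 5.1 is an illustration, as are the key names.

```python
# Hypothetical continuation of the sketch above; key names are illustrative.
train_config.update({
    # Assumed Hugging Face repo id corresponding to "realisticvisionv51".
    "base_model": "SG161222/Realistic_Vision_V5.1_noVAE",
    "train_prompt": "A man, vertigo effect, dolly zoom",
})
```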

### Model Specifications and Optimization

- **Inference Model:** Consistent with the training model to ensure high fidelity in output.
- **Additional Parameters:** `use_offset_noise` is enabled to introduce realistic variation and enhance the authenticity of the vertigo effect.
- **Learning Rate Adjustments:** The learning rate, spatial learning rate, and Adam weight decay are scaled to between 5x and 20x their original values depending on dataset size, ensuring efficient learning.
- **Workflow Compatibility:** Designed to integrate with the AnimateDiff workflow; recommended LoRA weights are between 0.5 and 1 to balance visual quality against computational demand. A hedged inference sketch follows this list.
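For inference, one way to apply a Motion LoRA at a weight in the recommended 0.5-1 range is the diffusers AnimateDiff pipeline. The snippet below is a sketch under several assumptions: the base-model repo id, the motion-adapter repo id, the LoRA file name, and the 0.7 weight are all illustrative, and the exact loading steps may differ from the AnimateDiff workflow (for example, ComfyUI) the model was tested with.

```python
# Sketch of applying this Motion LoRA with the diffusers AnimateDiff pipeline.
# Repo ids, the LoRA file name, and the 0.7 weight are assumptions for illustration.
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif

# Assumed SD1.5 motion adapter and Realistic Vision 5.1 base model.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# Load the vertigo Motion LoRA (hypothetical file name) and set its weight
# within the recommended 0.5-1 range.
pipe.load_lora_weights(".", weight_name="vertigo_dolly_zoom_lora.safetensors",
                       adapter_name="vertigo")
pipe.set_adapters(["vertigo"], adapter_weights=[0.7])

# Generate 16 frames with the prompt used during training.
frames = pipe(
    prompt="A man, vertigo effect, dolly zoom",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.5,
).frames[0]
export_to_gif(frames, "vertigo_dolly_zoom.gif")
```

Lower adapter weights (toward 0.5) reduce the strength of the dolly zoom motion but are cheaper to converge visually; weights near 1 reproduce the trained camera motion more strongly.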