MultiView-InContext-LoRA
Model description
Inspired by In-Context-LoRA, this project generates multi-view images of the same scene or object simultaneously. By running FLUX with the multiview-incontext LoRA, the model produces a single wide image that can be divided into equal portions, each portion being a novel view.
NOTE: This is a beta release of the model. Consistency between views may be imperfect: the model can generate views that do not fully align or that fail to keep object positions consistent across viewpoints. I am working on improving the geometric consistency and spatial relationships between generated views.
News
- 2024-11-25: Released the beta v0.3 model checkpoint; consistency between views is substantially improved over the previous version.
Roadmap
- Improve the consistency between the two-view images.
- Add camera control to the prompt to manage the similarity between the two views.
- Generate 4 views of a scene in a grid format.
- Generate 4 canonical-coordinate viewpoints of a single object in a grid format.
- 3D reconstruction from multi-view images.
When applying this LoRA to the FluxInpaintPipeline, I noticed significant degradation in consistency between the generated and input views. I therefore plan to also train the LoRA on the FluxFill model, rather than only the original FLUX text-to-image model, to improve performance in that setting.
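For context, the inpainting setup works on a double-width canvas: the known view occupies the left half, and the right half is masked out for the pipeline to fill in. A minimal sketch of that input preparation (the helper name is my own, not part of this repository):

```python
from PIL import Image


def make_inpaint_inputs(first_view: Image.Image) -> tuple[Image.Image, Image.Image]:
    """Build the image/mask pair for two-view inpainting.

    The known view is pasted onto the left half of a double-width canvas;
    the mask is white (255) on the right half, i.e. the region to inpaint.
    """
    w, h = first_view.size
    canvas = Image.new("RGB", (w * 2, h), (0, 0, 0))
    canvas.paste(first_view, (0, 0))
    mask = Image.new("L", (w * 2, h), 0)
    mask.paste(255, (w, 0, w * 2, h))  # fill right half with white
    return canvas, mask
```

The resulting `canvas` and `mask` would then be passed as the `image` and `mask_image` arguments of an inpainting pipeline such as `FluxInpaintPipeline`, with the double-width `width`/`height` used above.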
Inference
import torch
from diffusers import FluxPipeline

# Load the base FLUX.1-dev text-to-image pipeline
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)

# Load and fuse the two-view in-context LoRA weights
pipeline.load_lora_weights(
    "ysmao/multiview-incontext",
    weight_name="twoview-incontext-b03.safetensors",
)
pipeline.fuse_lora()
pipeline.to("cuda")

scene_prompt = "a living room with a sofa set with cushions, side tables with table lamps, a flat screen television on a table, houseplants, wall hangings, electric lights, and a carpet on the floor"
prompt = f"[TWO-VIEWS] This set of two images presents a scene from two different viewpoints. [IMAGE1] The first image shows {scene_prompt}. [IMAGE2] The second image shows the same room but in another viewpoint."

# Each view is image_width wide; both views are rendered side by side,
# so the canvas passed to the pipeline is twice as wide.
image_height = 576
image_width = 864

output = pipeline(
    prompt=prompt,
    height=image_height,
    width=image_width * 2,
    num_inference_steps=30,
    guidance_scale=3.5,
).images[0]
output.save("twoview-incontext-beta.png")
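Since the pipeline renders both views side by side in one wide image, recovering the individual views is a matter of cropping equal-width portions. A small sketch (the helper name is my own):

```python
from PIL import Image


def split_views(image: Image.Image, num_views: int = 2) -> list[Image.Image]:
    """Split a horizontally tiled multi-view image into equal-width crops."""
    w, h = image.size
    view_w = w // num_views
    return [
        image.crop((i * view_w, 0, (i + 1) * view_w, h))
        for i in range(num_views)
    ]
```

For the two-view output above (1728 x 576), this yields two 864 x 576 images, one per viewpoint.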
Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
Model tree for ysmao/multiview-incontext
- Base model: black-forest-labs/FLUX.1-dev