---
license: apache-2.0
language:
- en
pipeline_tag: text-to-image
tags:
- spritesheet
- text-to-image
---
This Stable Diffusion checkpoint allows you to generate pixel art sprite sheets of a character from four different angles.

The first images shown here are my results after merging this model with another model trained on my wife. Merging another model with this one is the easiest way to get a consistent character in each view, though it still takes some experimenting with img2img settings to get the results you want. For the left and right views, I suggest picking your best result and mirroring it. Once you are satisfied, take the image into Photoshop or Krita, remove the background, and scale it down to the desired sprite size. You can then scale it back up for display; this also clears up some of the color murkiness in the initial outputs.
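As a rough illustration of the mirroring and rescaling steps, here is a minimal post-processing sketch using Pillow. The input filename, the 64×64 sprite size, and the display scale are assumptions made for the example, not values from my workflow.

```python
# Post-processing sketch with Pillow. Filenames and sizes are illustrative only.
from PIL import Image

sprite = Image.open("pixel.png").convert("RGBA")

# Mirror a right-facing result to get the left-facing view (or vice versa).
mirrored = sprite.transpose(Image.FLIP_LEFT_RIGHT)
mirrored.save("pixel_mirrored.png")

# Downscale to the intended sprite resolution, then scale back up with
# nearest-neighbor so the pixels stay crisp; this also reduces color murkiness.
small = sprite.resize((64, 64), resample=Image.NEAREST)
display = small.resize((256, 256), resample=Image.NEAREST)
display.save("pixel_display.png")
```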
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information, please have a look at the Stable Diffusion documentation.
You can also export the model to ONNX, MPS and/or FLAX/JAX.
```python
# !pip install diffusers transformers scipy torch
from diffusers import StableDiffusionPipeline
import torch

model_id = "Onodofthenorth/SD_PixelArt_SpriteSheet_Generator"

# Load the checkpoint in half precision and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "PixelartLSS" is the trigger token for the left-facing view (see below).
prompt = "PixelartLSS"

image = pipe(prompt).images[0]
image.save("./pixel.png")
```
- For the front view use `"PixelartFSS"`
- For the right view use `"PixelartRSS"`
- For the back view use `"PixelartBSS"`
- For the left view use `"PixelartLSS"`
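If it helps, here is a minimal sketch that loops over the four trigger tokens above and renders each view. It assumes the `pipe` object from the snippet above is already loaded; the output filenames are arbitrary choices, not part of this model card.

```python
# Render all four views in one go using the trigger tokens listed above.
views = {
    "front": "PixelartFSS",
    "right": "PixelartRSS",
    "back": "PixelartBSS",
    "left": "PixelartLSS",
}

for name, token in views.items():
    image = pipe(token).images[0]
    image.save(f"./pixel_{name}.png")
```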
These are random results from the unmerged model.

Here's a result from a merge with my Hermione model.