Text-to-image finetuning - ShinnosukeU/kanji_diffusion_v2
This pipeline was finetuned from CompVis/stable-diffusion-v1-4 on the ShinnosukeU/kanji_diffusion_dataset dataset. Below are some example images generated with the finetuned pipeline using the following prompts: "A kanji for Elon Musk", "A kanji for Internet", "A kanji for fish", "A kanji for ice cream".
Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch

# Load the finetuned weights in half precision
pipeline = DiffusionPipeline.from_pretrained(
    "ShinnosukeU/kanji_diffusion_v2", torch_dtype=torch.float16
)
pipeline.to("cuda")  # float16 inference is intended for GPU; on CPU, load in float32 instead

prompt = "A kanji for Elon Musk"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
Training info
These are the key hyperparameters used during training:
- Epochs: 19
- Learning rate: 1e-05
- Batch size: 1
- Gradient accumulation steps: 4
- Image resolution: 128
- Mixed-precision: fp16
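The hyperparameters above (effective batch size 1 × 4 = 4) match the diffusers `train_text_to_image.py` example script. A sketch of an equivalent launch command, assuming that script and `accelerate` are installed; the exact invocation and output directory are assumptions, not taken from the original run:

```shell
accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --dataset_name="ShinnosukeU/kanji_diffusion_dataset" \
  --resolution=128 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --learning_rate=1e-05 \
  --num_train_epochs=19 \
  --mixed_precision="fp16" \
  --output_dir="kanji_diffusion_v2"
```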
More information on all the CLI arguments and the training environment is available on the corresponding wandb run page.
Intended uses & limitations
How to use
The finetuned pipeline can be loaded and run with 🤗 Diffusers as shown in the "Pipeline usage" section above.
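As a slightly fuller sketch that regenerates the example images from this card, assuming a CUDA GPU is available; the fixed seed and step count are illustrative choices, not values from the original run:

```python
from diffusers import DiffusionPipeline
import torch

# Load the finetuned pipeline in half precision and move it to the GPU
pipeline = DiffusionPipeline.from_pretrained(
    "ShinnosukeU/kanji_diffusion_v2", torch_dtype=torch.float16
).to("cuda")

# Fix the random seed so results are reproducible across runs
generator = torch.Generator(device="cuda").manual_seed(0)

# Generate one image per example prompt from this card
prompts = [
    "A kanji for Elon Musk",
    "A kanji for Internet",
    "A kanji for fish",
    "A kanji for ice cream",
]
for i, prompt in enumerate(prompts):
    image = pipeline(prompt, num_inference_steps=50, generator=generator).images[0]
    image.save(f"kanji_{i}.png")
```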
Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
Training details
The model was finetuned on the ShinnosukeU/kanji_diffusion_dataset dataset with the hyperparameters listed under "Training info" above.