---
license: cc-by-4.0
---

![Intro Image](cosmicman_samples.png)

CosmicMan is a text-to-image foundation model specialized for generating high-fidelity human images. For more information, please refer to our research paper: [CosmicMan: A Text-to-Image Foundation Model for Humans](https://arxiv.org/abs/2404.01294).

Our model is based on [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). This repository provides the UNet checkpoints for CosmicMan-SD.

## Requirements

```bash
conda create -n cosmicman python=3.10
source activate cosmicman
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install accelerate diffusers datasets transformers botocore invisible-watermark bitsandbytes gradio==3.48.0
```

## Inference

```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel, EulerDiscreteScheduler

base_path = "runwayml/stable-diffusion-v1-5"
unet_path = "cosmicman/CosmicMan-SD"

# Load the CosmicMan UNet and plug it into the Stable Diffusion v1.5 pipeline.
unet = UNet2DConditionModel.from_pretrained(unet_path, torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    base_path, unet=unet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_pretrained(
    base_path, subfolder="scheduler", torch_dtype=torch.float16
)

# Generate an image.
positive_prompt = "A closeup portrait shot against a white wall, a fit Caucasian adult female with wavy blonde hair falling above her chest wears a short sleeve silk floral dress and a floral silk normal short sleeve white blouse"
negative_prompt = ""
image = pipe(positive_prompt, num_inference_steps=30, guidance_scale=7.5,
             height=1024, width=1024, negative_prompt=negative_prompt,
             output_type="pil").images[0]
image.save("output.png")
```

## Citation Information

```
@article{li2024cosmicman,
  title={CosmicMan: A Text-to-Image Foundation Model for Humans},
  author={Li, Shikai and Fu, Jianglin and Liu, Kaiyuan and Wang, Wentao and Lin, Kwan-Yee and Wu, Wayne},
  journal={arXiv preprint arXiv:2404.01294},
  year={2024}
}
```
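
## Low-Memory Inference (Optional)

If the 1024x1024 fp16 pipeline above does not fit on your GPU, the standard memory optimizations that diffusers provides can be applied to the same pipeline. The sketch below uses only generic library options (`enable_attention_slicing`, `enable_model_cpu_offload`); these are not CosmicMan-specific settings, and the prompt and output filename are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

# Build the same pipeline as in the Inference section.
unet = UNet2DConditionModel.from_pretrained("cosmicman/CosmicMan-SD", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16"
)

# Compute attention in slices to reduce peak VRAM at a small speed cost.
pipe.enable_attention_slicing()

# Offload idle submodules to CPU between forward passes (requires the
# `accelerate` package from Requirements); this replaces the manual
# `.to("cuda")` call used in the Inference example.
pipe.enable_model_cpu_offload()

image = pipe("A closeup portrait of a person",  # placeholder prompt
             num_inference_steps=30, guidance_scale=7.5,
             height=1024, width=1024).images[0]
image.save("output_lowmem.png")
```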