alfredplpl committed on
Commit
b0a9bcf
1 Parent(s): 04ea78e
Files changed (1)
  1. README.md +19 -51
README.md CHANGED
@@ -1,24 +1,32 @@
- # Picasso Diffusion 1.1 Model Card

- # Introduction
- Picasso Diffusion is a latent diffusion model made for AI art.

- # Legal and ethical information
- We created this model legally.
- However, we think that this model has ethical problems.
- Therefore, we cannot use the model commercially, except for news reporting.

  # Usage
- You can try the model in our [Space](https://huggingface.co/spaces/aipicasso/demo).
  I recommend using the model with the Web UI.
- You can download the model [here](v1-1.ckpt). Safetensor version is [here](v1-1.safetensor).

  ## Model Details
  - **Developed by:** Robin Rombach, Patrick Esser, Alfred Increment
  - **Model type:** Diffusion-based text-to-image generation model
  - **Language(s):** English
- - **License:** [CreativeML Open RAIL++-M-NC License](MODEL-LICENSE)
  - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
  - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
  - **Cite as:**
@@ -32,45 +40,5 @@ You can download the model [here](v1-1.ckpt). Safetensor version is [here](v1-1.
  pages = {10684-10695}
  }

- ## Examples
-
- - Web UI
- - Diffusers
-
- ## Web UI
- **Run with the --no-half option. I recommend installing [xformers](https://github.com/facebookresearch/xformers).**
- Download the model [here](v1-1.ckpt).
- Then, install the [Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) by AUTOMATIC1111.
-
- ## Diffusers
-
- Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers) to run Picasso Diffusion 1.1 in a simple and efficient manner.
-
- ```bash
- pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
- ```
-
- Running the pipeline (if you don't swap the scheduler, it runs with the default DDIM; in this example we swap it to EulerAncestralDiscreteScheduler):
-
- ```python
- from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
- import torch
-
- model_id = "alfredplpl/picasso-diffusion-1-1"
-
- scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
- pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
- pipe = pipe.to("cuda")
-
- prompt = "anime, masterpiece, a portrait of a girl, good pupil, 4k, detailed"
- negative_prompt = "deformed, blurry, bad anatomy, bad pupil, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, bad hands, fused fingers, messy drawing, broken legs censor, low quality, mutated hands and fingers, long body, mutation, poorly drawn, bad eyes, ui, error, missing fingers, fused fingers, one hand with more than 5 fingers, one hand with less than 5 fingers, one hand with more than 5 digit, one hand with less than 5 digit, extra digit, fewer digits, fused digit, missing digit, bad digit, liquid digit, long body, uncoordinated body, unnatural body, lowres, jpeg artifacts, 3d, cg, text, japanese kanji"
- images = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=20).images
- images[0].save("girl.png")
- ```
-
- **Notes**:
- - Although it is not a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance).
- - If you have low GPU RAM available, add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` for lower VRAM usage (at the cost of speed).
-

- *This model card was written by AI Picasso Inc. and is based on the [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md) model card.*

+ ---
+ license: other
+ tags:
+ - stable-diffusion
+ - text-to-image
+ inference: false
+ ---

+ # Untitled Model Card
+
+ The Japanese version is [here](README_jp.md).

+ # Introduction
+ Untitled is a latent diffusion model made for AI art.

  # Usage

  I recommend using the model with the Web UI.
+ You can download the model [here](untitled.safetensor).
+ Then, install the [Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) by AUTOMATIC1111.
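+
+ As a rough sketch of that setup (not part of this card: the folder layout, the file rename, and the launch script below assume a default AUTOMATIC1111 install):
+
+ ```bash
+ # Get the Web UI.
+ git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
+ # Place the downloaded model file where the Web UI looks for checkpoints,
+ # renamed to the .safetensors extension the Web UI scans for.
+ cp untitled.safetensor stable-diffusion-webui/models/Stable-diffusion/untitled.safetensors
+ # Launch the Web UI (webui.sh on Linux/macOS, webui-user.bat on Windows).
+ cd stable-diffusion-webui && ./webui.sh
+ ```
+ After launching, pick the model from the checkpoint dropdown in the Web UI.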
+
+ # Examples
+

  ## Model Details
  - **Developed by:** Robin Rombach, Patrick Esser, Alfred Increment
  - **Model type:** Diffusion-based text-to-image generation model
  - **Language(s):** English
+ - **License:** [CreativeML Open RAIL++-M-NC License](MODEL-LICENSE), [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.ja.html)
  - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
  - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
  - **Cite as:**
 
  pages = {10684-10695}
  }

+ *This model card was written by Alfred Increment and is based on the [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md) model card.*