# Penelope Palette: Portrait Generation Model

Important note: this is a provisional model card.

## Model Description

Penelope Palette is an advanced AI model designed for creating lifelike portraits. It uses the same architecture as Stable Diffusion 3, enabling high-quality image generation with remarkable detail and style. Most of this description is adapted from the Stable Diffusion 3 model card, since the information remains largely the same.

The model is weaker than Stable Diffusion 3 Medium: it has trouble generating some realistic content, in particular nudity and anatomy, but it performs very well on portraits and has a unique style.

## Model Details

- Developed by: Penelope Systems
- Model type: MMDiT text-to-image generative model
- Model Description: This is a model that can be used to generate images based on text prompts. It is a Multimodal Diffusion Transformer (https://arxiv.org/abs/2403.03206) that uses three fixed, pretrained text encoders (OpenCLIP-ViT/G, CLIP-ViT/L and T5-xxl).

# License

Apache License 2.0

# Model Sources

For local or self-hosted use, we recommend ComfyUI for inference. The checkpoint has the CLIP text encoders built in, so it should be plug and play.

ComfyUI: https://github.com/comfyanonymous/ComfyUI
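
If you fetch the checkpoint programmatically, a minimal sketch using `huggingface_hub` is shown below. The repository id and filename are placeholders (this card does not name them), and the destination path assumes a default ComfyUI install; adjust both to your setup.

```python
# Sketch: download the single-file checkpoint and place it where ComfyUI
# looks for checkpoints. Repo id and filename are hypothetical placeholders.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="Cristian0S/penelope-palette",    # placeholder repo id
    filename="penelope_palette.safetensors",  # placeholder filename
)

comfy_ckpt_dir = Path("ComfyUI/models/checkpoints")  # default ComfyUI location
comfy_ckpt_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(ckpt_path, comfy_ckpt_dir / "penelope_palette.safetensors")
```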

# Training Dataset

We used synthetic data and filtered publicly available data to train our models. The model was pre-trained on 1 billion images. The fine-tuning data includes 30M high-quality aesthetic images focused on specific visual content and style, as well as 3M preference data images.

# Uses

## Intended Uses

Intended uses include the following:

- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models, including understanding the limitations of generative models.

# Out-of-Scope Uses

The model was not trained to produce factual or true representations of people or events. As such, using the model to generate such content is out of scope for its abilities.

# Safety

The same safety measures used for Stable Diffusion 3 were deployed.

# Use Recommendations

For best results we recommend the following settings (a usage sketch follows the list):

- steps: 32
- cfg: between 4.0 and 7.0
- sampler_name: dpmpp_2m
- scheduler: sgm_uniform
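
The sampler and scheduler names above are ComfyUI settings (the KSampler node's `sampler_name` and `scheduler` fields). If the single-file checkpoint also loads with diffusers' `StableDiffusion3Pipeline` (an assumption; this card only confirms ComfyUI), a rough equivalent might look like the sketch below. The file path and prompt are placeholders, and diffusers' default SD3 scheduler is used in place of dpmpp_2m/sgm_uniform.

```python
# Sketch, assuming the checkpoint is diffusers-compatible via from_single_file;
# the path and prompt are placeholders, not values from this model card.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_single_file(
    "penelope_palette.safetensors",  # placeholder path to the downloaded checkpoint
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = pipe(
    prompt="studio portrait of an elderly sailor, soft window light",
    num_inference_steps=32,  # recommended step count
    guidance_scale=5.0,      # within the recommended 4.0-7.0 CFG range
).images[0]
image.save("portrait.png")
```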

![image/png](https://cdn-uploads.huggingface.co/production/uploads/632776ce8624baac667ecb01/NAQiWjoYqdqjcgER8QKys.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/632776ce8624baac667ecb01/KjpWPX-ruB1MLQJpMU-sU.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/632776ce8624baac667ecb01/Fkmkb9Db50i07N76182Ih.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/632776ce8624baac667ecb01/c4IOnm7pW3JU4_ogU6A-H.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/632776ce8624baac667ecb01/CO8agFO7rCCsjrplz_ukZ.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/632776ce8624baac667ecb01/ZCAKZ6lZouNFgHIOESKnM.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/632776ce8624baac667ecb01/Z8SC0qcBrdZkW8cp9Uvyb.png)