dataautogpt3 committed • Commit 4130dbf • Parent: 2f7bfaa • Update README.md

README.md CHANGED
@@ -42,35 +42,43 @@ license: cc-by-nc-nd-4.0
---
<Gallery />

## Constructive Deconstruction: Domain-Agnostic Debiasing of Diffusion Models

## Introduction
Constructive Deconstruction is a novel approach to debiasing diffusion models used in generative tasks like image synthesis. This method enhances the quality and fidelity of generated images across various domains by removing biases inherited from the training data. Our technique involves overtraining the model to a controlled noisy state, applying nightshading, and using bucketing techniques to realign the model's internal representations.
## Methodology
### Overtraining to a Controlled Noisy State
By purposely overtraining the model until it predictably fails, we create a controlled noisy state. This state helps in identifying and addressing the inherent biases in the model's training data.
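
The README does not describe the exact training setup, so the following is only a toy sketch of the idea: keep training well past convergence with an aggressive learning rate and no early stopping, then snapshot the weights once the loss climbs back above its best value. The model, dataset, threshold, and file name below are placeholders, not the authors' configuration.

```python
# Hypothetical sketch: deliberately train a stand-in model past convergence until
# its loss degrades, then snapshot that "controlled noisy state".
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))  # stand-in for a denoiser
dataset = TensorDataset(torch.randn(512, 64), torch.randn(512, 64))       # stand-in training data
loader = DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-3)  # deliberately aggressive learning rate
loss_fn = nn.MSELoss()

best_loss, failure_epoch = float("inf"), None
for epoch in range(300):                     # far more epochs than the toy task needs
    running = 0.0
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        running += loss.item() * x.size(0)
    epoch_loss = running / len(dataset)
    best_loss = min(best_loss, epoch_loss)
    if epoch_loss > 1.5 * best_loss:         # loss has climbed back up: treat this as the
        failure_epoch = epoch                # predictable failure point
        break

torch.save(model.state_dict(), "noisy_state.pt")  # snapshot of the controlled noisy state
print("entered controlled noisy state at epoch:", failure_epoch)
```
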
### Nightshading
Nightshading is repurposed to induce a controlled failure, making it easier to retrain the model. This involves injecting carefully selected data points to stress the model and cause predictable failures.
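
Nightshade-style poisoning perturbs images so that a model learns wrong concept associations. As a rough, hypothetical illustration of where such samples would enter a training pipeline, the wrapper below mismatches a fraction of image–caption pairs; the class name, the poison ratio, and caption swapping as a stand-in for real Nightshade perturbations are all assumptions, not the authors' method.

```python
# Hypothetical illustration of injecting "poisoned" samples into a training set to
# stress the model. Real Nightshade applies image perturbations; caption mismatching
# here is only a placeholder to show where poisoning would plug in.
import random
from torch.utils.data import Dataset

class PoisonedDataset(Dataset):
    """Wraps an (image, caption) dataset and mismatches a fraction of its captions."""

    def __init__(self, base: Dataset, poison_ratio: float = 0.1, seed: int = 0):
        self.base = base
        self.rng = random.Random(seed)
        n = len(base)
        self.poisoned = set(self.rng.sample(range(n), int(poison_ratio * n)))

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        image, caption = self.base[idx]
        if idx in self.poisoned:
            # Pair the image with a caption drawn from a different sample,
            # deliberately stressing the learned image-text association.
            other = self.rng.randrange(len(self.base))
            _, caption = self.base[other]
        return image, caption
```
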
### Bucketing
Using mathematical techniques like slerp (Spherical Linear Interpolation) and bislerp (Bilinear Interpolation), we merge the induced noise back into the model. This step highlights the model's learned knowledge while suppressing biases.
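
For two parameter vectors p0 and p1 separated by an angle Ω, slerp(p0, p1; t) = sin((1−t)Ω)/sin(Ω) · p0 + sin(tΩ)/sin(Ω) · p1. The README does not give the exact merge recipe, so the sketch below only shows slerp applied per parameter tensor to blend two compatible checkpoints (for example, the induced noisy state and a reference model); the function names, the 50/50 default ratio, and treating bislerp as slerp applied along two blending axes are assumptions.

```python
# Illustrative per-tensor slerp merge of two compatible checkpoints.
# Assumes every entry in the state dicts is a floating-point parameter tensor.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-7) -> torch.Tensor:
    """Spherical linear interpolation between two tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    omega = torch.arccos(dot)                      # angle between the two weight vectors
    if omega.abs() < 1e-4:                         # nearly parallel: fall back to plain lerp
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        mixed = (torch.sin((1.0 - t) * omega) * a_flat +
                 torch.sin(t * omega) * b_flat) / torch.sin(omega)
    return mixed.reshape(a.shape).to(a.dtype)

def merge_state_dicts(noisy: dict, base: dict, t: float = 0.5) -> dict:
    """Merge two state dicts with matching keys and shapes, parameter by parameter."""
    return {key: slerp(noisy[key], base[key], t) for key in noisy}
```

A merged checkpoint could then be produced with something like `merge_state_dicts(torch.load("noisy_state.pt"), torch.load("reference.pt"), t=0.5)`, where `reference.pt` stands for a hypothetical compatible checkpoint saved the same way.
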
### Retraining and Fine-Tuning
The noisy state is retrained on a large, diverse dataset to create a new base model called "Mobius." Initial issues such as grainy details and inconsistent colors are resolved during fine-tuning, resulting in high-quality, unbiased outputs.
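
Continuing the toy setup from the earlier sketches (and equally hypothetical, since the actual Mobius dataset, schedule, and hyperparameters are not published here), retraining amounts to loading the merged or noisy weights and fine-tuning them at a much gentler learning rate on a larger, more varied dataset.

```python
# Minimal, hypothetical fine-tuning sketch: reload the noisy/merged weights and
# retrain gently on a broader dataset. All names and numbers are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
model.load_state_dict(torch.load("noisy_state.pt"))   # checkpoint from the earlier sketches

diverse = TensorDataset(torch.randn(4096, 64), torch.randn(4096, 64))  # stand-in for a large, varied corpus
loader = DataLoader(diverse, batch_size=64, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # much gentler than the overtraining phase
loss_fn = nn.MSELoss()

model.train()
for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

torch.save(model.state_dict(), "retrained_base_sketch.pt")
```
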
## Results and Highlights
### Increased Diversity of Outputs
Training the model on high-quality data naturally increases the diversity of the generated outputs without intentionally loosening associations. This leads to improved generalization and variety in generated images.
### Empirical Validation
Extensive experiments and fine-tuning demonstrate the effectiveness of our method, resulting in high-quality, unbiased outputs across various styles and domains.
### Enhanced Quality
The fine-tuning process eliminates the initial issues of grainy detail and inconsistent color, leading to clear, consistent, high-quality image outputs.
### Versatility Across Styles
The Mobius model exhibits exceptional performance across various art styles and domains, surpassing other models, including MidJourney. The debiasing process ensures the model can handle a wide range of artistic expression with precision and creativity.
## The Best Open Source AI Image Generation Model Ever Made
Constructive Deconstruction and the Mobius model represent a monumental leap forward in AI image generation. By addressing and eliminating biases through innovative techniques, we have created the best open source AI image generation model ever made. Mobius sets a new standard for quality and diversity, enabling unprecedented levels of creativity and precision. Its versatility across styles and domains makes it the ultimate tool for artists, designers, and creators, offering a level of excellence unmatched by any other open source model.
By releasing the weights of the Mobius model, we are empowering the community with a tool that drives innovation and sets the benchmark for future developments in AI image synthesis. The quality, diversity, and reliability of Mobius make it the gold standard in the realm of open source AI models.
## Usage and Recommendations