Files changed (1)
  1. README.md +3 -6
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 
- inference: false
+ inference: true
 co2_eq_emissions:
   emissions: 450300
   source: MLCo2 Machine Learning Impact calculator
@@ -19,7 +19,7 @@ task:
   type: text-to-image
 ---
 
- # DALL·E Mega Model Card
+ # DALL·E Mega Model Card
 This model card focuses on the DALL·E Mega model associated with the DALL·E mini space on Hugging Face, available [here](https://huggingface.co/spaces/dalle-mini/dalle-mini). The app is called “dalle-mini”, but incorporates “[DALL·E Mini](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy)” and “[DALL·E Mega](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mega-Training-Journal--VmlldzoxODMxMDI2)” models. The DALL·E Mega model is the largest version of DALL·E Mini. For more information specific to DALL·E Mini, see the [DALL·E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).
 
 ## Model Details
@@ -97,13 +97,10 @@ The model developers discuss the limitations of the model further in the DALL·E
 
 ### Bias
 **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
-
 The model was trained on unfiltered data from the Internet, limited to pictures with English descriptions. Text and images from communities and cultures using other languages were not utilized. This affects all output of the model, with white and Western culture asserted as a default, and the model’s ability to generate content using non-English prompts is observably lower quality than prompts in English.
 
 While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. The extent and nature of the biases of the DALL·E Mini and DALL·E Mega models have yet to be fully documented, but initial testing demonstrates that they may generate images that contain negative stereotypes against minoritized groups. Work to analyze the nature and extent of the models’ biases and limitations is ongoing.
-
-
- Our current analyses demonstrate that:
+ Our current analyses demonstrate that:
 * Images generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
 * When the model generates images with people in them, it tends to output people who we perceive to be white, while people of color are underrepresented.
 * Images generated by the model can contain biased content that depicts power differentials between people of color and people who are white, with white people in positions of privilege.
 
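For context on the metadata change above: on the Hugging Face Hub, the `inference` flag in a model card's YAML front matter controls whether the hosted inference widget is enabled for the repo, so flipping it from `false` to `true` should re-enable the widget. Below is a minimal sketch of how the edited front matter can be checked programmatically; it assumes the card lives at the `dalle-mini/dalle-mega` repo (inferred from the links in the card, not stated in the diff itself) and uses the `ModelCard` helper from `huggingface_hub`.

```python
# Sketch only: inspect the model card front matter after this change.
# The repo id "dalle-mini/dalle-mega" is an assumption based on the card's links.
from huggingface_hub import ModelCard

card = ModelCard.load("dalle-mini/dalle-mega")
metadata = card.data.to_dict()  # parsed YAML front matter as a dict

# With this PR applied, `inference` would read True; the CO2 block is unchanged.
print(metadata.get("inference"))         # e.g. True
print(metadata.get("co2_eq_emissions"))  # e.g. {'emissions': 450300, 'source': 'MLCo2 Machine Learning Impact calculator', ...}
```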