---
license: cc-by-4.0
extra_gated_prompt: "The CosmicMan-HQ 1.0 is available for non-commercial research purposes only. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit any portion of the images or any portion of the derived data for commercial purposes. You agree NOT to further copy, publish or distribute any portion of CosmicMan-HQ 1.0 to any third party for any purpose, except that copies of the dataset may be made for internal use at a single site within the same organization. Shanghai AI Lab reserves the right to terminate your access to the CosmicMan-HQ 1.0 at any time."
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Email (Institutional Email Only): text
I agree to use this dataset for non-commercial use ONLY: checkbox
viewer: false
---
# CosmicManHQ-1.0 Dataset
<img src="./asset/data_sample.png" width="96%" height="96%">
## Overview
The CosmicMan-HQ 1.0 dataset is a large-scale, high-quality human image dataset with a mean resolution of 1488 × 1255 pixels. It is equipped with rich text annotations and is designed to support a variety of human-centric content generation tasks.
The current release contains 5,463,500 images, all of which are sourced from [LAION-5B](https://laion.ai/blog/laion-5b/).
## Data Structure
We currently release the human parsing results (in the folder **'./LAION-5B-Parsing/'**) and the annotations (in the folder **'./label/'**) for CosmicManHQ-1.0. The human parsing paths ("segm_path") are included in the annotations.
<p align="center"><img src="./asset/parsing_sample.png" width="56%" height="56%"></p>
Each annotation adheres to the following Parquet format specification, with column names and example content:
```
{
    "image_name": "3348418094672.jpg",
    "height": "1028",
    "width": "822",
    "person_type": "full_body",
    "similarity": "0.29458436369895935",
    "url": "https://cdn0.matrimonio.com.co/img_g/articulos-colombia/2014-06-26-vestidos-corte-imperio/111-umbria.jpg",
    "segm_path": "LAION-5B-Parsing/laion1B-nolang/00044/3348418094672.png",
    "path": "LAION-5B/laion1B-nolang/00044/3348418094672.jpg",
    "raw_text": "vestido de novia imperio",
    "blip_text": "a woman in a white dress standing in front of a window",
    "aa_text": [
        {
            "gender": "female",
            "country": "caucasian",
            "age": "adult"
        },
        {
            "body shape": "fit"
        },
        ...
    ],
    "aa_ids": [
        [
            11
        ],
        [
            0
        ],
        ...
    ]
}
```
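For reference, the annotation shards can be inspected with pandas. The following is a minimal sketch; the shard filename is hypothetical, so substitute any Parquet file from **'./label/'**:
```
import pandas as pd

# Load one annotation shard (hypothetical filename; use any file in ./label/)
df = pd.read_parquet("label/00000.parquet")

print(df.columns.tolist())   # the columns described below
row = df.iloc[0]
print(row["image_name"], row["person_type"], row["url"])
```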
- "height": the height of image.
- "width": the width of image.
- "person_type": the type of image, contain ["full_body", "upper_body", "portrait", "headshot", "nearly_full_body", "others"].
- "similarity": the CLIP similarity of image and raw_text.
- "url": the url of image.
- "segm_path": the path of human parsing.
- "path": the path of image from Laion-5B dataset.
- "raw_text": the description from Laion-5B dataset.
- "blip_text": the description from Blip.
- "aa_text": we initially established a set of predefined body parts, where each item in the "aa_text" list corresponds to a specific part. The key-value pairs for each item are associated with a particular question. For more detailed information, please refer to the file **'./details_of_aa_text.json'**. It is important to note that in the "aa_text", only the parts with a basic inquiry (e.g., "Does the person wear any tops?") that have an affirmative response ("yes") are retained. And the "wears" key itself is not included within the "aa_text".
- "aa_ids": we utilized the SCHP model trained on the ATR dataset to extract results for each body part (as shown in the table below) and then integrated these results with "aa_text" to ascertain whether each part has a corresponding parsing ID and annotation. In other words, if the basic inquiry for body part i receives a "no" response, then both "aa_text[i]" and "aa_ids[i]" are designated as "NULL".
| Predefined Body Part | Potential Parsing Part | Potential Parsing ID |
|:--------------------:|:----------------------:|:--------------------:|
| face | face | 11 |
| body shape | all parts except the background | 0 (inverted to obtain the foreground) |
| background | background | 0 |
| hair | hair | 2 |
| special clothings | upper-clothes, skirt, pants, dress | 4, 5, 6, 7 |
| one-piece outfits | upper-clothes, skirt, pants, dress | 4, 5, 6, 7 |
| tops | upper-clothes | 4 |
| coats | upper-clothes | 4 |
| bottoms | skirt, pants | 5, 6 |
| shoes | left-shoe, right-shoe | 9, 10 |
| bags | bag | 16 |
| hats | hat | 1 |
| belts | belt | 8 |
| scarf | scarf | 17 |
| others | - | - |
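To illustrate how "segm_path" and "aa_ids" work together, the sketch below builds binary masks from a parsing map. It assumes the parsing maps are single-channel PNGs whose pixel values are the ATR parsing IDs listed above; the file path is taken from the annotation example:
```
import numpy as np
from PIL import Image

# Load a parsing map (path taken from the "segm_path" field above)
segm = np.array(Image.open("LAION-5B-Parsing/laion1B-nolang/00044/3348418094672.png"))

face_mask = (segm == 11)          # "face" -> parsing ID 11
tops_mask = np.isin(segm, [4])    # "tops" -> upper-clothes (ID 4)
body_mask = (segm != 0)           # "body shape" -> invert the background (ID 0)

# More generally, pair each aa_text item with its parsing IDs from aa_ids
def part_mask(segm, ids):
    # ids is one entry of "aa_ids", e.g. [11]; "NULL" entries have no mask
    return np.isin(segm, ids)
```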
## Download Instructions
### Download Human Parsing Images and Annotations
The human parsing images are placed in the folder **'./LAION-5B-Parsing/'** and the annotations in the folder **'./label/'**, both under the main directory **./CosmicManHQ-1.0/**.
You can obtain these resources by directly downloading this repository. Please refer to this [**page**](https://huggingface.co/docs/hub/datasets-downloading) for download instructions.
### Download Images
The images in the CosmicManHQ-1.0 dataset are sourced from [LAION-5B](https://laion.ai/blog/laion-5b/), and we do not directly provide them in this repository.
Instead, the download URL of each image is provided in the annotations, and you can download the images as follows:
Install the 'img2dataset' package provided in this repository, which supports specifying a unique save path for each downloaded image.
```
pip install -e ./img2dataset/
```
First, download the Parquet files (in the folder **'./label/'**).
Then use the script **'./download.py'** in the main directory **./CosmicManHQ-1.0/** to download the images. For more details, refer to **'./download.py'**.
```
mkdir -p CosmicManHQ-1.0/ && cd CosmicManHQ-1.0/
python download.py \
--start_index 0 \
--end_index 256 \
--parquet_dir label/ \
--log_dir download_log/
rm -rf download_log/
```
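After the download finishes, you can check how many annotated images are present on disk. The following is a minimal sketch, assuming the "path" column matches the local layout produced by **'./download.py'** (the shard filename is hypothetical):
```
import os
import pandas as pd

df = pd.read_parquet("label/00000.parquet")   # hypothetical shard name

present = df["path"].apply(os.path.exists)
print(f"downloaded {present.sum()} of {len(df)} images")
```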
## Citation Information
```
@article{li2024cosmicman,
  title={CosmicMan: A Text-to-Image Foundation Model for Humans},
  author={Li, Shikai and Fu, Jianglin and Liu, Kaiyuan and Wang, Wentao and Lin, Kwan-Yee and Wu, Wayne},
  journal={arXiv preprint arXiv:2404.01294},
  year={2024}
}
```
## Reference
- Li, P., Xu, Y., Wei, Y., et al. "Self-Correction for Human Parsing." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44(6): 3260–3271.
- Schuhmann, C., Beaumont, R., Vencu, R., et al. "LAION-5B: An Open Large-Scale Dataset for Training Next Generation Image-Text Models." Advances in Neural Information Processing Systems, 2022, 35: 25278–25294.
- Img2dataset: https://github.com/rom1504/img2dataset
## Contributors
Shikai Li, Jianglin Fu, Kaiyuan Liu, Wentao Wang, Kwan-Yee Lin, Wayne Wu