Document data loading method
README.md
CHANGED
@@ -134,10 +134,86 @@ For example, below is the image of `f3501e05-aef7-4225-a9e9-f516527408ac.png` an
- `st`: Steps
- `sa`: Sampler

At the top-level folder of DiffusionDB, we include a metadata table in Parquet format, `metadata.parquet`. This table has seven columns: `image_name`, `prompt`, `part_id`, `seed`, `step`, `cfg`, and `sampler`, and it has 2 million rows where each row represents an image. `seed`, `step`, and `cfg` are the random seed, sampling steps, and CFG (guidance) scale used to generate the image. We choose Parquet because it is column-based: researchers can efficiently query individual columns (e.g., prompts) without reading the entire table. Below are five random rows from the table.

| image_name | prompt | part_id | seed | step | cfg | sampler |
|---|---|---|---|---|---|---|
| 49f1e478-ade6-49a8-a672-6e06c78d45fc.png | ryan gosling in fallout 4 kneels near a nuclear bomb | 1643 | 2220670173 | 50 | 7.0 | 8 |
| b7d928b6-d065-4e81-bc0c-9d244fd65d0b.png | A beautiful robotic woman dreaming, cinematic lighting, soft bokeh, sci-fi, modern, colourful, highly detailed, digital painting, artstation, concept art, sharp focus, illustration, by greg rutkowski | 87 | 51324658 | 130 | 6.0 | 8 |
| 19b1b2f1-440e-4588-ba96-1ac19888c4ba.png | bestiary of creatures from the depths of the unconscious psyche, in the style of a macro photograph with shallow dof | 754 | 3953796708 | 50 | 7.0 | 8 |
| d34afa9d-cf06-470f-9fce-2efa0e564a13.png | close up portrait of one calico cat by vermeer. black background, three - point lighting, enchanting, realistic features, realistic proportions. | 1685 | 2007372353 | 50 | 7.0 | 8 |
| c3a21f1f-8651-4a58-a4d4-7500d97651dc.png | a bottle of jack daniels with the word medicare replacing the word jack daniels | 243 | 1617291079 | 50 | 7.0 | 8 |

To save space, we use an integer to encode the `sampler` in the table above.

|Sampler|Integer Value|
|:--|--:|
|ddim|1|
|plms|2|
|k_euler|3|
|k_euler_ancestral|4|
|k_heun|5|
|k_dpm_2|6|
|k_dpm_2_ancestral|7|
|k_lms|8|
|others|9|
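
To recover the sampler names from these integer codes, you can invert the table above. Below is a minimal sketch, assuming you have downloaded `metadata.parquet` as shown in Method 3 under "Loading Data Subsets":

```python
import pandas as pd

# Integer-to-name mapping from the table above
SAMPLER_NAMES = {
    1: 'ddim', 2: 'plms', 3: 'k_euler',
    4: 'k_euler_ancestral', 5: 'k_heun', 6: 'k_dpm_2',
    7: 'k_dpm_2_ancestral', 8: 'k_lms', 9: 'others',
}

# Decode the sampler column into human-readable names
metadata_df = pd.read_parquet('metadata.parquet')
metadata_df['sampler_name'] = metadata_df['sampler'].map(SAMPLER_NAMES)
```
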
### Data Splits
We split the 2 million images into 2,000 folders, where each folder contains 1,000 images and a JSON file.
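
For example, after unzipping `part-000001.zip` (see Method 2 below), you could read that folder's JSON file as sketched here. Note the assumptions: the JSON filename `part-000001.json` and its image-name-to-metadata layout (with abbreviated keys such as `st` and `sa` shown earlier) are inferred, not confirmed by this section.

```python
import json

# Read the metadata JSON inside an unzipped part folder
# (filename assumed to mirror the folder name)
with open('part-000001/part-000001.json') as f:
    part_metadata = json.load(f)

# Assumed layout: image filename -> its prompt and hyperparameters
# (e.g., `st` for steps, `sa` for sampler)
for image_name, record in list(part_metadata.items())[:3]:
    print(image_name, record)
```
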
### Loading Data Subsets

DiffusionDB is large (1.6TB)! However, with our modularized file structure, you can easily load a desired number of images together with their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.

#### Method 1: Using Hugging Face Datasets Loader

You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We have predefined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co/datasets/poloclub/diffusiondb/viewer/all/train).

```python
from datasets import load_dataset

# Load the `random_1k` subset of DiffusionDB
dataset = load_dataset('poloclub/diffusiondb', 'random_1k')
```
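
Once loaded, you can index into the dataset like any Hugging Face dataset. A quick sketch, assuming the subset exposes a `train` split with `prompt` and `image` fields (as shown in the Dataset Preview):

```python
# Inspect the first example's prompt and image
example = dataset['train'][0]
print(example['prompt'])
example['image'].show()  # opens the PIL image in a viewer
```
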

#### Method 2: Manually Download the Data

All zip files in DiffusionDB follow the URL pattern below, where `{xxxxxx}` ranges from `000001` to `002000`. Therefore, you can write a script to download any number of zip files and use them for your task (see the loop sketch after the example below).

`https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/images/part-{xxxxxx}.zip`
```python
import shutil
from urllib.request import urlretrieve

# Download part-000001.zip
part_id = 1
part_url = f'https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/images/part-{part_id:06}.zip'
urlretrieve(part_url, f'part-{part_id:06}.zip')

# Unzip part-000001.zip into the folder part-000001
shutil.unpack_archive(f'part-{part_id:06}.zip', f'part-{part_id:06}')
```
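
To download several parts, a minimal sketch simply wraps the snippet above in a loop; the range of three parts here is only for illustration:

```python
import shutil
from urllib.request import urlretrieve

base_url = 'https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/images'

# Download and unzip parts 1-3 (pick any range up to 2000)
for part_id in range(1, 4):
    zip_name = f'part-{part_id:06}.zip'
    urlretrieve(f'{base_url}/{zip_name}', zip_name)
    shutil.unpack_archive(zip_name, f'part-{part_id:06}')
```
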

#### Method 3: Use `metadata.parquet` (Text Only)
If your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the `metadata.parquet` table.

```python
import pandas as pd
from urllib.request import urlretrieve

# Download the Parquet table
table_url = 'https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/metadata.parquet'
urlretrieve(table_url, 'metadata.parquet')

# Read the table with pandas
metadata_df = pd.read_parquet('metadata.parquet')
```
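
Because Parquet is column-based, you can also read a single column instead of the full table. A small sketch using pandas:

```python
import pandas as pd

# Read only the prompt column; the other six columns are never loaded
prompts = pd.read_parquet('metadata.parquet', columns=['prompt'])
```
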
## Dataset Creation
### Curation Rationale