Okay to host the downloaded images?

#3 · opened by sayakpaul

Hello,

I have downloaded the full dataset with img2dataset and wanted to host the downloaded images on the Hub with all the appropriate credits!

Would that be okay?

Spawning org

Yes, that's definitely ok with us. We do plan to update the dataset as people use source.plus to flag items. It might make sense for us to set up a way to sync the repos when that happens.

Thank you so much!

I will update this thread when the upload is complete.

@jordancmeyer I have completed the upload here:
https://huggingface.co/datasets/sayakpaul/pd12m-full

The estimated number of rows is 10,778,260.

The following command was used to download the dataset:

img2dataset --url_list pd12m_full.parquet --input_format "parquet" \
    --url_col "url" --caption_col "caption" --output_format webdataset \
    --number_sample_per_shard=5000 --skip_reencode=True \
    --output_folder s3://diffusion-datasets/pd12m \
    --processes_count 16 --thread_count 64 \
    --resize_mode no \
    --enable_wandb True

WandB logs: https://wandb.ai/sayakpaul/img2dataset/runs/b8hmd5v1.

I am now working on adding a sample dataloader script with webdataset in the repository.
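In the meantime, here is a minimal sketch of what such a dataloader could look like. This is not the final script: the shard range is illustrative, and the jpg/txt per-sample keys are assumed from img2dataset's webdataset output.

import webdataset as wds

# Illustrative shard pattern; adjust to the actual shard names in the repo.
shards = "pipe:curl -s -f -L https://huggingface.co/datasets/sayakpaul/pd12m-full/resolve/main/{00000..02480}.tar"

dataset = (
    wds.WebDataset(shards, handler=wds.warn_and_continue)
    .decode("pil")            # decode image bytes to PIL images
    .to_tuple("jpg", "txt")   # yield (image, caption) pairs
)

for image, caption in dataset:
    print(image.size, caption)
    break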

I think it could be nice to host the dataset under the hf.co/Spawning org. If so, I can reach out to the Hub team internally to make that happen. Let me know :)

Spawning org

@sayakpaul This is awesome, thanks for pulling this together! I had some questions about the record count: the wandb logs show several large spikes of failures. Do you have any insight into those failures? I want to be certain the missing records aren't due to any issues with the parquets/URLs/files.

It would be awesome to move this under the Spawning org! That would be very much appreciated!

Umm, I don't currently have a solid insight, honestly. What I could do is check whether the URLs were logged alongside the downloaded shards, use them to get a list of the URLs that weren't downloaded, and inspect those manually.
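One quicker aggregate check could be to tally the per-shard stats files that img2dataset writes next to each .tar shard. A rough sketch, assuming the *_stats.json files were kept alongside the shards and that the count/successes field names match the img2dataset version used here:

import glob
import json

# Assumption: the per-shard *_stats.json files written by img2dataset were
# synced locally next to the shards; field names may vary slightly by version.
attempted, successes = 0, 0
for path in glob.glob("pd12m/*_stats.json"):
    with open(path) as f:
        stats = json.load(f)
    attempted += stats.get("count", 0)
    successes += stats.get("successes", 0)

print(f"attempted={attempted}, downloaded={successes}, missing={attempted - successes}")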

I will get back here.

Thanks for the kind words!

Spawning org

@sayakpaul That would be very helpful. We can make a task to follow up on the missing images as well. The wandb logs show sudden spikes of errors, which could be consistent with short network outages. I didn't want to jump to conclusions if you had a better idea, but that makes sense to me.

I just wanted to make sure we hadn't missed any potential issues in our verification process. Either way, confirming the cause would be helpful. Thanks again!

Thanks once again, @padge, for being so welcoming!

> The wandb logs show sudden spikes of errors, which could be consistent with short network outages. I didn't want to jump to conclusions if you had a better idea, but that makes sense to me.

I think it's most likely because of network-related issues, which can be hard to prevent. I am still running a full pass over the downloaded shards to get the list of successfully downloaded URLs. After that, my plan is to obtain the URLs that weren't downloaded and inspect them.

If all goes well, we should be able to download those and still prepare shards out of them. Does this sound like a plan?

Regardless, I will keep this thread updated.

Spawning org

@sayakpaul I totally understand. That sounds perfect, thanks for following up!

@padge I have managed to make progress :)

First, this is how I compute the missing URLs.

Step 1 is to grab all the URLs shipped in the original parquet files.

For this, I leverage https://huggingface.co/datasets/sayakpaul/pd12m-full/blob/main/original_parquet/pd12m_full.parquet (collated from all the parquet files in this repo).

import pandas as pd
import json

# Collect every URL shipped in the original (collated) parquet.
df = pd.read_parquet("pd12m_full.parquet", columns=["url"])
all_urls = set(df["url"])
print(f"{len(all_urls)=}")

actual_urls = {"urls": list(all_urls), "total_urls": len(all_urls)}
with open("actual_urls.json", "w") as f:
    json.dump(actual_urls, f)
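For reference, the collated pd12m_full.parquet used above can be reproduced by concatenating the per-shard parquet files from the original repo. A rough sketch, with local paths assumed:

import glob
import pandas as pd

# Assumption: the per-shard parquet files from the original PD12M repo have
# already been downloaded to ./original_parquet_shards/.
files = sorted(glob.glob("original_parquet_shards/*.parquet"))
df = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)
df.to_parquet("pd12m_full.parquet", index=False)
print(f"{len(df)=}")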

Step 2 is to compute the actual downloaded URLs:

import webdataset as wds
import json

def preprocess_fn(sample):
    # Keep only the per-sample JSON metadata written by img2dataset.
    return {"metadata": sample["json"]}

def collation_fn(examples):
    inputs = {"batch_metadata": [example["metadata"] for example in examples]}
    return inputs

def main():
    dataset_path = "pipe:curl -s -f -L https://huggingface.co/datasets/sayakpaul/pd12m-full/resolve/main/{00155..02480}.tar"

    dataset = wds.WebDataset(dataset_path, handler=wds.warn_and_continue, empty_check=False).decode()
    dataset = dataset.map(preprocess_fn, handler=wds.warn_and_continue)
    # Note: partial=False drops any trailing incomplete batch, so the final
    # count can slightly undercount the shards' true contents.
    dataset = dataset.batched(8192, partial=False, collation_fn=collation_fn)
    print("dataset created")
    dataloader = wds.WebLoader(
        dataset,
        batch_size=None,
        pin_memory=True,
        persistent_workers=True,
        num_workers=8,
        prefetch_factor=2,
    )
    print("dataloader created")

    all_urls = set()
    for batch in dataloader:
        batch_metadata = batch["batch_metadata"]
        for metadata in batch_metadata:
            # img2dataset records the per-sample download status in the metadata.
            if metadata["status"] == "success":
                all_urls.add(metadata["url"])

    downloaded_urls = {"urls": list(all_urls), "total_urls": len(all_urls)}
    with open("downloaded_urls.json", "w") as f:
        json.dump(downloaded_urls, f)


if __name__ == "__main__":
    main()

Step 3 is to compute the differences:

import json
import pandas as pd

with open("actual_urls.json", "r") as f:
    actual = json.load(f)

with open("downloaded_urls.json", "r") as f:
    downloaded = json.load(f)

actual_urls = set(actual["urls"])
downloaded_urls = set(downloaded["urls"])
# URLs that were shipped in the metadata but never made it into the shards.
differences = actual_urls - downloaded_urls
print(f"{len(differences)=}, {len(actual_urls)=}, {len(downloaded_urls)=}")

df = pd.read_parquet("pd12m_full.parquet")
filtered_df = df[df["url"].isin(differences)]
print(len(filtered_df))

Output:

len(differences)=824798, len(actual_urls)=12400094, len(downloaded_urls)=11575296
824798

Now, I will use filtered_df to download the rest of the images and prepare webdataset shards with img2dataset. Will update once done.
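Concretely, that just means writing filtered_df back to a parquet file that img2dataset can consume (a small sketch; the file name is illustrative):

# Write the missing rows to a new parquet for a second img2dataset pass.
filtered_df.to_parquet("pd12m_missing.parquet", index=False)

and then re-running the same img2dataset command as above with --url_list pointed at pd12m_missing.parquet and a separate --output_folder so the new shards don't collide with the existing ones.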

Sorry for the long message, but I wanted to keep all the information transparent.

@padge @jordancmeyer I have successfully pushed the rest of the shards for the URLs that were not downloaded. Repo: https://huggingface.co/datasets/sayakpaul/pd12m-full. The row count is estimated, but it can be verified with the dataloader I showed in step 2 of this comment. It doesn't match the exact number, but it's close enough.

I can confirm that it was indeed because of network failures and there weren't any problems with the original S3 URLs. The wandb log for downloading the missing URLs: https://wandb.ai/diffusion-guidance/img2dataset/runs/ivby8ulg.

I think we can now transfer this dataset to the Spawning org and continue working on updating the dataset card?

Spawning org

@sayakpaul Thanks for all your hard work and for chasing down those missing images! Yes, I think it's good to transfer it over.

Closing this as the migration is done: https://huggingface.co/datasets/Spawning/pd12m-full

sayakpaul changed discussion status to closed
