---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- text-to-image
- image-feature-extraction
tags:
- diffusion models
- image copy detection
dataset_info:
features:
- name: Name
dtype: string
- name: Level
dtype: int64
- name: generated_images
dtype: image
- name: real_images
dtype: image
splits:
- name: Test
num_bytes: 2538590040.0
num_examples: 4000
- name: Train
num_bytes: 22265208436.0
num_examples: 36000
download_size: 24773596239
dataset_size: 24803798476.0
configs:
- config_name: default
data_files:
- split: Test
path: data/Test-*
- split: Train
path: data/Train-*
---
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/D-Rep/resolve/main/D-Rep.png" width="800">
</p>
# Summary
This is the dataset proposed in our paper [**Image Copy Detection for Diffusion Models**](https://arxiv.org/abs/2410.xxxxx) (NeurIPS 2024).
D-Rep consists of 40,000 image-replica pairs, in which each replica is generated by a diffusion model. The 40,000 image-replica pairs are manually labeled with 6 replication levels ranging from 0 (no replication) to 5 (total replication). We divide D-Rep into a training set with 90% (36,000) of the pairs and a test set with the remaining 10% (4,000).
# Download
### Automatic
First, install the [datasets](https://huggingface.co/docs/datasets/en/installation) library:
```
pip install datasets
```
Then the dataset can be downloaded automatically with:
```python
from datasets import load_dataset

# Downloads both the Train and Test splits (about 25 GB in total)
dataset = load_dataset('WenhaoWang/D-Rep')
```
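
Once loaded, each record exposes the fields listed in the dataset card above (`Name`, `Level`, `generated_images`, `real_images`), and the splits are named `Train` and `Test`. A minimal sketch of inspecting one training pair, assuming the default decoding of image columns to PIL images:
```python
from datasets import load_dataset

dataset = load_dataset('WenhaoWang/D-Rep')

# Inspect a single training pair; field names follow the dataset card above.
sample = dataset['Train'][0]
print(sample['Name'])                    # identifier of the pair
print(sample['Level'])                   # replication level, an integer from 0 to 5
print(sample['generated_images'].size)   # replica produced by a diffusion model (PIL image)
print(sample['real_images'].size)        # the corresponding real image (PIL image)
```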