Update README.md
- split: val
  path: "data/val.zip"
---

# Visible watermarks datasets

We have observed that while datasets such as COCO are available for object detection, datasets specifically designed for detecting watermarks added to images remain scarce. Through our research, we identified only one such dataset, originating from the paper WDNet: Watermark-Decomposition Network for Visible Watermark Removal [1]. That dataset provides a collection of images along with their corresponding watermark masks for the purpose of watermark removal, and we found that accessing it presented challenges in terms of data availability and regeneration of dataset samples.

The CLWD dataset, introduced in WDNet: Watermark-Decomposition Network for Visible Watermark Removal [1], comprises images sourced from the COCO dataset (Lin et al., 2014) [2] together with masks of colored watermarks at random positions and opacities.

## Dataset Details (PITA Dataset)

We introduce the PITA dataset, which is based on images from the COCO dataset (Lin et al., 2014) [2] combined with logos from the Open Logo Detection Challenge (Su et al., 2018) [3]. It introduces several changes compared to other datasets, with a focus on watermark detection rather than watermark removal.

The dataset is structured into three splits (training, validation, and test), collectively comprising approximately 20,000 watermarked images featuring both logos and text. We incorporate two types of labels:
- **Text:** the images are watermarked with a random font available on the machine used for generation, and the text size is also randomized.
- **Logos:** the logos are sourced from the Open Logo Detection Challenge dataset (Su et al., 2018) and have random sizes and opacities.
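
A hedged sketch of how one such label might be drawn per sample is shown below; the function name, field names, and value ranges are our own assumptions for illustration, not the generator's actual schema.

```python
import random

def sample_label(rng: random.Random, fonts: list) -> dict:
    """Draw one watermark label; names and ranges are illustrative only."""
    if rng.random() < 0.5:
        # Text watermark: random font and randomized text size.
        return {"category": "text",
                "font": rng.choice(fonts),
                "size_px": rng.randint(12, 72)}
    # Logo watermark: random size (scale) and opacity.
    return {"category": "logo",
            "scale": rng.uniform(0.1, 0.5),
            "opacity": rng.uniform(0.3, 1.0)}

label = sample_label(random.Random(0), fonts=["DejaVu Sans", "Arial"])
```

Seeding the random generator, as above, is what makes a generation run reproducible.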

The position of the logo or text is randomly selected from a small set of allowed placements: the four corners or the center. This restriction reflects the observation that watermarks on social media and stock image websites predominantly appear in these positions.
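
The placement restriction can be sketched as follows; this is a minimal illustration, and the margin value and function name are assumptions rather than the dataset tooling's actual code.

```python
import random

# Restrict watermark placement to the four corners or the center.
# The margin and naming are illustrative assumptions.
def pick_position(img_w, img_h, wm_w, wm_h, rng, margin=10):
    positions = {
        "top_left":     (margin, margin),
        "top_right":    (img_w - wm_w - margin, margin),
        "bottom_left":  (margin, img_h - wm_h - margin),
        "bottom_right": (img_w - wm_w - margin, img_h - wm_h - margin),
        "center":       ((img_w - wm_w) // 2, (img_h - wm_h) // 2),
    }
    name = rng.choice(sorted(positions))
    return name, positions[name]

name, (x, y) = pick_position(640, 480, 100, 40, random.Random(0))
```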

The dataset is accompanied by command-line interface tools that facilitate reproducibility. These tools support both YOLO and Hugging Face formats, allowing the dataset to be downloaded or regenerated with ease.

### Dataset Sources

- **Repository:** https://github.com/OrdinaryDev83/dnn-watermark
- **Demo:** https://huggingface.co/spaces/qfisch/watermark-detection

## Uses

The dataset can be used to train object detection models for watermark detection, for example:

- DETR with Hugging Face Transformers
- YOLOv8 with Ultralytics
- FastRCNN with PyTorch Lightning
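
These frameworks expect different box conventions: DETR/COCO-style pipelines typically use pixel `[x, y, width, height]` boxes, while YOLO trainers expect normalized `(x_center, y_center, width, height)`. Converting between the two is a one-liner; the helper below is our own sketch, not part of the dataset's tools.

```python
# Convert a COCO-style box [x, y, w, h] (pixels) to YOLO format:
# normalized (x_center, y_center, width, height).
def coco_to_yolo(bbox, img_w, img_h):
    x, y, w, h = bbox
    return ((x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h)

cx, cy, nw, nh = coco_to_yolo([100, 50, 200, 100], 640, 480)
# cx = 200/640 = 0.3125, nw = 200/640 = 0.3125
```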

### Source Data

- COCO dataset (Lin et al., 2014) [2]
- Open Logo Detection Challenge (Su et al., 2018) [3]

#### Data Collection and Processing

Generation of the dataset is **reproducible** using the CLI tool of this [repository](https://github.com/OrdinaryDev83/dnn-watermark).
A `--help` option describes how to use the tool.

## Annotation Process

Logos were added to COCO images by applying **rotation**, **scaling**, and **opacity** changes at a random position on the image.