Tasks: Object Detection · Size: 10K - 100K

keremberke committed
Commit 694c613 · 1 Parent: a204321
dataset uploaded by roboflow2huggingface package
Browse files
- README.dataset.txt +6 -0
- README.md +106 -0
- README.roboflow.txt +29 -0
- data/test.zip +3 -0
- data/train.zip +3 -0
- data/valid-mini.zip +3 -0
- data/valid.zip +3 -0
- hard-hat-detection.py +152 -0
- split_name_to_num_samples.json +1 -0
- thumbnail.jpg +3 -0
README.dataset.txt
ADDED
@@ -0,0 +1,6 @@
# Hard Hats > resized640_noAugmentation-FAST
https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5

Provided by a Roboflow user
License: CC BY 4.0
README.md
ADDED
@@ -0,0 +1,106 @@
---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Construction
- Utilities
- Manufacturing
- Logistics
- Ppe
- Assembly Line
- Warehouse
- Factory
- Damage Risk
---
<div align="center">
  <img width="640" alt="keremberke/hard-hat-detection" src="https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/thumbnail.jpg">
</div>

### Dataset Labels

```
['hardhat', 'no-hardhat']
```

### Number of Images

```json
{"test": 2001, "train": 13782, "valid": 3962}
```

### How to Use

- Install [datasets](https://pypi.org/project/datasets/):

```bash
pip install datasets
```

- Load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("keremberke/hard-hat-detection", name="full")
example = ds['train'][0]
```
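
Each record pairs an image with its COCO-format annotations; the field names below come from the `hard-hat-detection.py` loading script in this commit, and the `[x, y, width, height]` bbox layout is the standard COCO convention. A minimal sketch of reading one annotated example:

```python
# Sketch only: assumes `ds` from the snippet above; field names follow
# the `datasets.Features` declared in hard-hat-detection.py.
example = ds['train'][0]

print(example['image_id'], example['width'], example['height'])

# `objects` arrives as a dict of parallel lists, one entry per box.
categories = ds['train'].features['objects'].feature['category']
for bbox, cat_idx in zip(example['objects']['bbox'],
                         example['objects']['category']):
    x, y, w, h = bbox  # COCO convention: top-left corner + width/height
    print(categories.int2str(cat_idx), (x, y, w, h))
```

The script also ships a `mini` config built from `valid-mini.zip`, convenient for smoke tests: `load_dataset("keremberke/hard-hat-detection", name="mini")`.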

### Roboflow Dataset Page
[https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2](https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2?ref=roboflow2huggingface)

### Citation

```
@misc{ hard-hats-fhbh5_dataset,
    title = { Hard Hats Dataset },
    type = { Open Source Dataset },
    author = { Roboflow Universe Projects },
    howpublished = { \url{ https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 } },
    url = { https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 },
    journal = { Roboflow Universe },
    publisher = { Roboflow },
    year = { 2022 },
    month = { dec },
    note = { visited on 2023-01-16 },
}
```

### License
CC BY 4.0

### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:17 PM GMT.

Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time

For state-of-the-art computer vision training notebooks you can use with this dataset, visit https://github.com/roboflow/notebooks

To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com

The dataset includes 19745 images. Hardhat-ppe instances are annotated in COCO format.

The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)

No image augmentation techniques were applied.
README.roboflow.txt
ADDED
@@ -0,0 +1,29 @@
Hard Hats - v2 resized640_noAugmentation-FAST
==============================

This dataset was exported via roboflow.com on January 16, 2023 at 9:17 PM GMT.

Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time

For state-of-the-art computer vision training notebooks you can use with this dataset, visit https://github.com/roboflow/notebooks

To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com

The dataset includes 19745 images. Hardhat-ppe instances are annotated in COCO format.

The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)

No image augmentation techniques were applied.
data/test.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e21ca22f4d69411af6e7c00b6d00ea30252867ac2d0f7cbef894b6ba9b9204c3
size 114425180
data/train.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:091b305ae044b652403ab809eae0c727551ae9bce66024f65a665cd1470126fd
size 778928861
data/valid-mini.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc44c6d3c70c51cf594f4172a1e83e20aab12cab950104298e4b95a923e3551e
size 209798
data/valid.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d39f51b1e9cd968b399d1506f014d551ff5cb572d27b076064bfb8458cb6833
size 224860587
hard-hat-detection.py
ADDED
@@ -0,0 +1,152 @@
import collections
import json
import os

import datasets


_HOMEPAGE = "https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2"
_LICENSE = "CC BY 4.0"
_CITATION = """\
@misc{ hard-hats-fhbh5_dataset,
    title = { Hard Hats Dataset },
    type = { Open Source Dataset },
    author = { Roboflow Universe Projects },
    howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 } },
    url = { https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 },
    journal = { Roboflow Universe },
    publisher = { Roboflow },
    year = { 2022 },
    month = { dec },
    note = { visited on 2023-01-16 },
}
"""
_CATEGORIES = ['hardhat', 'no-hardhat']
_ANNOTATION_FILENAME = "_annotations.coco.json"


class HARDHATDETECTIONConfig(datasets.BuilderConfig):
    """Builder Config for hard-hat-detection"""

    def __init__(self, data_urls, **kwargs):
        """
        BuilderConfig for hard-hat-detection.

        Args:
            data_urls: `dict`, name to url to download the zip file from.
            **kwargs: keyword arguments forwarded to super.
        """
        super(HARDHATDETECTIONConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
        self.data_urls = data_urls


class HARDHATDETECTION(datasets.GeneratorBasedBuilder):
    """hard-hat-detection object detection dataset"""

    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        HARDHATDETECTIONConfig(
            name="full",
            description="Full version of hard-hat-detection dataset.",
            data_urls={
                "train": "https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/data/train.zip",
                "validation": "https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/data/valid.zip",
                "test": "https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/data/test.zip",
            },
        ),
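        # The "mini" config points every split at the small valid-mini
        # archive, which keeps quick smoke tests cheap.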
        HARDHATDETECTIONConfig(
            name="mini",
            description="Mini version of hard-hat-detection dataset.",
            data_urls={
                "train": "https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/data/valid-mini.zip",
                "validation": "https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/data/valid-mini.zip",
                "test": "https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/data/valid-mini.zip",
            },
        )
    ]

    def _info(self):
        features = datasets.Features(
            {
                "image_id": datasets.Value("int64"),
                "image": datasets.Image(),
                "width": datasets.Value("int32"),
                "height": datasets.Value("int32"),
                "objects": datasets.Sequence(
                    {
                        "id": datasets.Value("int64"),
                        "area": datasets.Value("int64"),
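                        # COCO-style bbox: [x_min, y_min, width, height]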
"bbox": datasets.Sequence(datasets.Value("float32"), length=4),
|
80 |
+
"category": datasets.ClassLabel(names=_CATEGORIES),
|
81 |
+
}
|
82 |
+
),
|
83 |
+
}
|
84 |
+
)
|
85 |
+
return datasets.DatasetInfo(
|
86 |
+
features=features,
|
87 |
+
homepage=_HOMEPAGE,
|
88 |
+
citation=_CITATION,
|
89 |
+
license=_LICENSE,
|
90 |
+
)
|
91 |
+
|
92 |
+
def _split_generators(self, dl_manager):
|
93 |
+
data_files = dl_manager.download_and_extract(self.config.data_urls)
|
94 |
+
return [
|
95 |
+
datasets.SplitGenerator(
|
96 |
+
name=datasets.Split.TRAIN,
|
97 |
+
gen_kwargs={
|
98 |
+
"folder_dir": data_files["train"],
|
99 |
+
},
|
100 |
+
),
|
101 |
+
datasets.SplitGenerator(
|
102 |
+
name=datasets.Split.VALIDATION,
|
103 |
+
gen_kwargs={
|
104 |
+
"folder_dir": data_files["validation"],
|
105 |
+
},
|
106 |
+
),
|
107 |
+
datasets.SplitGenerator(
|
108 |
+
name=datasets.Split.TEST,
|
109 |
+
gen_kwargs={
|
110 |
+
"folder_dir": data_files["test"],
|
111 |
+
},
|
112 |
+
),
|
113 |
+
]
|
114 |
+
|
115 |
+
def _generate_examples(self, folder_dir):
|
116 |
+
def process_annot(annot, category_id_to_category):
|
117 |
+
return {
|
118 |
+
"id": annot["id"],
|
119 |
+
"area": annot["area"],
|
120 |
+
"bbox": annot["bbox"],
|
121 |
+
"category": category_id_to_category[annot["category_id"]],
|
122 |
+
}
|
123 |
+
|
124 |
+
image_id_to_image = {}
|
125 |
+
idx = 0
|
126 |
+
|
127 |
+
annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
|
128 |
+
with open(annotation_filepath, "r") as f:
|
129 |
+
annotations = json.load(f)
|
130 |
+
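            # Build lookup tables once per split: category id -> label name,
            # image id -> its annotations, and file name -> image record.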
            category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
            image_id_to_annotations = collections.defaultdict(list)
            for annot in annotations["annotations"]:
                image_id_to_annotations[annot["image_id"]].append(annot)
            filename_to_image = {image["file_name"]: image for image in annotations["images"]}

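            # Walk the extracted folder and yield one example per image
            # listed in the COCO annotation file.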
            for filename in os.listdir(folder_dir):
                filepath = os.path.join(folder_dir, filename)
                if filename in filename_to_image:
                    image = filename_to_image[filename]
                    objects = [
                        process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
                    ]
                    with open(filepath, "rb") as f:
                        image_bytes = f.read()
                    yield idx, {
                        "image_id": image["id"],
                        "image": {"path": filepath, "bytes": image_bytes},
                        "width": image["width"],
                        "height": image["height"],
                        "objects": objects,
                    }
                    idx += 1
split_name_to_num_samples.json
ADDED
@@ -0,0 +1 @@
{"test": 2001, "train": 13782, "valid": 3962}
thumbnail.jpg
ADDED
Git LFS Details