whyen-wang committed on
Commit
2f23893
1 Parent(s): ccc4a63
.gitignore ADDED
@@ -0,0 +1,3 @@
+ annotations/
+ annotations_trainval2017.zip
+ *.jsonl
README.md CHANGED
@@ -1,3 +1,212 @@
  ---
  license: cc-by-4.0
+ size_categories:
+ - 10K<n<100K
+ task_categories:
+ - object-detection
+ language:
+ - en
+ pretty_name: COCO Keypoints
  ---
+
+ # Dataset Card for "COCO Keypoints"
+
+ ## Quick Start
+ ### Usage
+ ```python
+ >>> from datasets import load_dataset
+
+ >>> dataset = load_dataset('whyen-wang/coco_keypoints')
+ >>> example = dataset['train'][0]
+ >>> print(example)
+ {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x360>,
+  'bboxes': [
+      [339.8800048828125, 22.15999984741211,
+       153.8800048828125, 300.7300109863281],
+      [471.6400146484375, 172.82000732421875,
+       35.91999816894531, 48.099998474121094]],
+  'keypoints': [[
+      [368, 61, 1], [369, 52, 2], [0, 0, 0], [382, 48, 2], [0, 0, 0],
+      [368, 84, 2], [435, 81, 2], [362, 125, 2], [446, 125, 2], [360, 153, 2],
+      [0, 0, 0], [397, 167, 1], [439, 166, 1], [369, 193, 2], [461, 234, 2],
+      [361, 246, 2], [474, 287, 2]
+  ], [[...]]
+ ]}
+ ```
+
+ ### Visualization
+ ```python
+ >>> import cv2
+ >>> import numpy as np
+ >>> from PIL import Image
+
+ >>> def visualize(example):
+         image = np.array(example['image'])
+         # convert [x, y, w, h] boxes to [x1, y1, x2, y2] corners
+         bboxes = np.array(example['bboxes']).round().astype(int)
+         bboxes[:, 2:] += bboxes[:, :2]
+         keypoints = example['keypoints']
+         for i in range(len(bboxes)):
+             cv2.rectangle(
+                 image, tuple(bboxes[i, :2]), tuple(bboxes[i, 2:]),
+                 (255, 0, 0), 2
+             )
+             # draw only joints that are labeled and visible (v == 2)
+             for x, y, v in keypoints[i]:
+                 if v == 2:
+                     cv2.circle(image, (x, y), 5, (0, 255, 0), 1)
+         return image
+
+ >>> Image.fromarray(visualize(example))
+ ```
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://cocodataset.org/
+ - **Repository:** None
+ - **Paper:** [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
+ - **Leaderboard:** [Papers with Code](https://paperswithcode.com/dataset/coco)
+ - **Point of Contact:** None
+
+ ### Dataset Summary
+
+ COCO is a large-scale object detection, segmentation, and captioning dataset. This repository packages the 2017 person keypoint annotations: each annotated person carries a bounding box and 17 labeled body joints.
+
+ ### Supported Tasks and Leaderboards
+
+ [Object Detection](https://huggingface.co/tasks/object-detection)
+
+ [Image Segmentation](https://huggingface.co/tasks/image-segmentation)
+
+ ### Languages
+
+ en
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example looks as follows (values abridged from the Quick Start output above).
+
+ ```
+ {
+     "image": PIL.Image(mode="RGB"),
+     "bboxes": [
+         [339.88, 22.16, 153.88, 300.73],
+         [471.64, 172.82, 35.92, 48.10]
+     ],
+     "keypoints": [
+         [[368, 61, 1], [369, 52, 2], ..., [474, 287, 2]],
+         [[...]]
+     ]
+ }
+ ```
+
+ ### Data Fields
+
+ - `image`: a PIL RGB image.
+ - `bboxes`: one `[x, y, width, height]` box per annotated person, in pixels (`float32`).
+ - `keypoints`: one list of 17 `[x, y, visibility]` triplets per person (`int32`); visibility is 0 (not labeled), 1 (labeled but not visible) or 2 (labeled and visible).
+
+ ### Data Splits
+
+ | name    |  train | validation |
+ | ------- | -----: | ---------: |
+ | default | 64,115 |      2,693 |
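+
+ A quick way to confirm the split sizes locally (a sketch; the first run
+ downloads the COCO image zips):
+
+ ```python
+ >>> from datasets import load_dataset
+
+ >>> dataset = load_dataset('whyen-wang/coco_keypoints')
+ >>> {split: dataset[split].num_rows for split in dataset}
+ {'train': 64115, 'validation': 2693}
+ ```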
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Creative Commons Attribution 4.0 License
+
+ ### Citation Information
+
+ ```
+ @article{cocodataset,
+   author    = {Tsung{-}Yi Lin and Michael Maire and Serge J. Belongie and Lubomir D. Bourdev and Ross B. Girshick and James Hays and Pietro Perona and Deva Ramanan and Piotr Doll{\'{a}}r and C. Lawrence Zitnick},
+   title     = {Microsoft {COCO:} Common Objects in Context},
+   journal   = {CoRR},
+   volume    = {abs/1405.0312},
+   year      = {2014},
+   url       = {http://arxiv.org/abs/1405.0312},
+   archivePrefix = {arXiv},
+   eprint    = {1405.0312},
+   timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
+   biburl    = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@github-whyen-wang](https://github.com/whyen-wang) for adding this dataset.
coco_keypoints.py ADDED
@@ -0,0 +1,124 @@
+ import json
+ import datasets
+ from pathlib import Path
+
+ _HOMEPAGE = 'https://cocodataset.org/'
+ _LICENSE = 'Creative Commons Attribution 4.0 License'
+ _DESCRIPTION = 'COCO is a large-scale object detection, segmentation, and captioning dataset.'
+ _CITATION = '''\
+ @article{cocodataset,
+   author    = {Tsung{-}Yi Lin and Michael Maire and Serge J. Belongie and Lubomir D. Bourdev and Ross B. Girshick and James Hays and Pietro Perona and Deva Ramanan and Piotr Doll{\'{a}}r and C. Lawrence Zitnick},
+   title     = {Microsoft {COCO:} Common Objects in Context},
+   journal   = {CoRR},
+   volume    = {abs/1405.0312},
+   year      = {2014},
+   url       = {http://arxiv.org/abs/1405.0312},
+   archivePrefix = {arXiv},
+   eprint    = {1405.0312},
+   timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
+   biburl    = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ '''
+
+
+ class COCOKeypointsConfig(datasets.BuilderConfig):
+     '''BuilderConfig for the COCO 2017 person keypoints subset.'''
+
+     def __init__(
+         self, description, homepage,
+         annotation_urls, **kwargs
+     ):
+         super(COCOKeypointsConfig, self).__init__(
+             version=datasets.Version('1.0.0', ''),
+             **kwargs
+         )
+         self.description = description
+         self.homepage = homepage
+         # images come from the official COCO mirrors; the jsonl annotation
+         # zips are stored in this repository under data/
+         url = 'http://images.cocodataset.org/zips/'
+         self.train_image_url = url + 'train2017.zip'
+         self.val_image_url = url + 'val2017.zip'
+         self.train_annotation_urls = annotation_urls['train']
+         self.val_annotation_urls = annotation_urls['validation']
+
+
+ class COCOKeypoints(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIGS = [
+         COCOKeypointsConfig(
+             description=_DESCRIPTION,
+             homepage=_HOMEPAGE,
+             annotation_urls={
+                 'train': 'data/keypoints_train.zip',
+                 'validation': 'data/keypoints_validation.zip'
+             },
+         )
+     ]
+
+     def _info(self):
+         features = datasets.Features({
+             'image': datasets.Image(mode='RGB', decode=True, id=None),
+             # one [x, y, width, height] box per annotated person
+             'bboxes': datasets.Sequence(
+                 feature=datasets.Sequence(
+                     feature=datasets.Value(dtype='float32', id=None),
+                     length=4, id=None
+                 ), length=-1, id=None
+             ),
+             # 17 [x, y, visibility] triplets per person
+             'keypoints': datasets.Sequence(
+                 feature=datasets.Sequence(
+                     feature=datasets.Sequence(
+                         feature=datasets.Value(dtype='int32', id=None),
+                     ), length=17, id=None
+                 ), length=-1, id=None
+             )
+         })
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION
+         )
+
+     def _split_generators(self, dl_manager):
+         train_image_path = dl_manager.download_and_extract(
+             self.config.train_image_url
+         )
+         validation_image_path = dl_manager.download_and_extract(
+             self.config.val_image_url
+         )
+         train_annotation_paths = dl_manager.download_and_extract(
+             self.config.train_annotation_urls
+         )
+         val_annotation_paths = dl_manager.download_and_extract(
+             self.config.val_annotation_urls
+         )
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     'image_path': f'{train_image_path}/train2017',
+                     'annotation_path': f'{train_annotation_paths}/keypoints_train.jsonl'
+                 }
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     'image_path': f'{validation_image_path}/val2017',
+                     'annotation_path': f'{val_annotation_paths}/keypoints_validation.jsonl'
+                 }
+             )
+         ]
+
+     def _generate_examples(self, image_path, annotation_path):
+         idx = 0
+         image_path = Path(image_path)
+         # one jsonl record per image: file name, person boxes and keypoints
+         with open(annotation_path, 'r', encoding='utf-8') as f:
+             for line in f:
+                 obj = json.loads(line.strip())
+                 example = {
+                     'image': str(image_path / obj['image']),
+                     'bboxes': obj['bboxes'],
+                     'keypoints': obj['keypoints']
+                 }
+                 yield idx, example
+                 idx += 1
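+
+
+ # Minimal local sanity check (a sketch, not part of the loader; assumes the
+ # `datasets` library is installed and the COCO image zips are reachable):
+ #
+ #   from datasets import load_dataset
+ #   ds = load_dataset('whyen-wang/coco_keypoints', split='validation')
+ #   assert all(len(kps) == 17 for kps in ds[0]['keypoints'])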
data/instance_train.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8739c76e681f900923b900c9df0ef75cf421d39cabb54650c4b9ad19b6a76d85
+ size 22
data/keypoints_train.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e59c242ea0b1006f9a15c8781a0da691988ec9df08d8ab16244c4d19b1948601
+ size 12303366
data/keypoints_validation.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7929246036309bc9163a44dbcb802049764adfac181b18cb692b9d189260c954
+ size 515838
prepare.ipynb ADDED
@@ -0,0 +1,129 @@
+ {
+  "cells": [
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "import os\n",
+     "import numpy as np\n",
+     "import zipfile\n",
+     "import requests\n",
+     "import jsonlines\n",
+     "from tqdm import tqdm\n",
+     "from pathlib import Path\n",
+     "from pycocotools.coco import COCO"
+    ]
+   },
+   {
+    "cell_type": "markdown",
+    "metadata": {},
+    "source": [
+     "# Download Annotations"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "url = 'http://images.cocodataset.org/annotations/'\n",
+     "files = [\n",
+     "    'annotations_trainval2017.zip'\n",
+     "]\n",
+     "for file in files:\n",
+     "    if not Path(f'./{file}').exists():\n",
+     "        response = requests.get(url + file)\n",
+     "        with open(file, 'wb') as f:\n",
+     "            f.write(response.content)\n",
+     "\n",
+     "    with zipfile.ZipFile(file, 'r') as zipf:\n",
+     "        zipf.extractall(Path())\n"
+    ]
+   },
+   {
+    "cell_type": "markdown",
+    "metadata": {},
+    "source": [
+     "## Keypoint Detection Task"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "train_data = COCO('annotations/person_keypoints_train2017.json')\n",
+     "val_data = COCO('annotations/person_keypoints_val2017.json')"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "for split, data in zip(['train', 'validation'], [train_data, val_data]):\n",
+     "    with jsonlines.open(f'data/keypoints_{split}.jsonl', mode='w') as writer:\n",
+     "        for image_id, image_info in tqdm(data.imgs.items()):\n",
+     "            bboxes, keypoints = [], []\n",
+     "            anns = data.imgToAnns[image_id]\n",
+     "            # skip images without person annotations\n",
+     "            if len(anns) > 0:\n",
+     "                for ann in anns:\n",
+     "                    bboxes.append(ann['bbox'])\n",
+     "                    keypoints.append(ann['keypoints'])\n",
+     "                writer.write({\n",
+     "                    'image': image_info['file_name'],\n",
+     "                    'bboxes': bboxes,\n",
+     "                    # reshape COCO's flat 51-value list into 17 [x, y, v] triplets\n",
+     "                    'keypoints': np.array(keypoints).reshape(\n",
+     "                        len(bboxes), -1, 3\n",
+     "                    ).tolist()\n",
+     "                })"
+    ]
+   },
+ {
89
+ "cell_type": "code",
90
+ "execution_count": null,
91
+ "metadata": {},
92
+ "outputs": [],
93
+ "source": [
94
+ "for split in ['train', 'validation']:\n",
95
+ " file_path = f'data/keypoints_{split}.jsonl'\n",
96
+ " with zipfile.ZipFile(f'data/keypoints_{split}.zip', 'w', zipfile.ZIP_DEFLATED) as zipf:\n",
97
+ " zipf.write(file_path, os.path.basename(file_path))"
98
+ ]
99
+ },
100
+ {
101
+ "cell_type": "code",
102
+ "execution_count": null,
103
+ "metadata": {},
104
+ "outputs": [],
105
+ "source": []
106
+ }
107
+ ],
108
+ "metadata": {
109
+ "kernelspec": {
110
+ "display_name": ".venv",
111
+ "language": "python",
112
+ "name": "python3"
113
+ },
114
+ "language_info": {
115
+ "codemirror_mode": {
116
+ "name": "ipython",
117
+ "version": 3
118
+ },
119
+ "file_extension": ".py",
120
+ "mimetype": "text/x-python",
121
+ "name": "python",
122
+ "nbconvert_exporter": "python",
123
+ "pygments_lexer": "ipython3",
124
+ "version": "3.12.2"
125
+ }
126
+ },
127
+ "nbformat": 4,
128
+ "nbformat_minor": 2
129
+ }