Datasets: rgautroncgiar
Commit ce04c8c • "commiting to fix merge conflict"
Parent(s): 7e31d0f

Files changed:
- .gitignore (+5 -0)
- README.md (+89 -0)
- _classes.txt (+5 -0)
- _pictures_to_exclude.txt (+2 -0)
- annotation_guide.html (+13 -0)
- data/train.zip (+3 -0)
- data/val.zip (+3 -0)
- images/annotated_1688033955437.jpg (+3 -0)
- images/train_counts.png (+3 -0)
- images/val_counts.png (+3 -0)
- scripts/label_training_images.py (+110 -0)
- scripts/requirements.txt (+3 -0)
.gitignore
ADDED
@@ -0,0 +1,5 @@
**/train/
**/val/
_*/
**/_*.py
**.json
README.md
ADDED
@@ -0,0 +1,89 @@
---
annotations_creators:
- CIAT (International Center for Tropical Agriculture)
- Producers Direct
task_categories:
- object-detection
size_categories:
- 1K<n<10K
pretty_name: Croppie coffee uganda
tags:
- yield estimates
- cherry count
- coffee cherries
- coffee trees
- arabica
- robusta
- digital agriculture
language:
- en
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/train.zip"
  - split: val
    path: "data/val.zip"
---

# Croppie training datasets
## General information
Croppie dataset for machine-vision-assisted coffee cherry detection. The dataset consists of a mix of Arabica and Robusta coffee tree parts (photographed with and without a background isolation element), with an individual bounding box around every coffee cherry.

The original dataset comprises 633 images with about 61,050 unique bounding boxes around coffee cherries, in YOLO format. This original dataset has been processed by cutting each image into 480 x 640 tiles; the full original image was also downscaled to 480 x 640. We provide the processed dataset together with Python scripts for easy visualization of the annotated dataset.

Coffee cherries longer than 10 mm (along the longitudinal axis) are annotated according to their color:
- green
- yellow
- red
- dark brown (overripe/dry cherries)
- an extra class indicates low visibility or an unsure label.

Here is an example of an annotated image:
![plot](./images/annotated_1688033955437.jpg)

## Data structure
This repository has the following structure:
```
.
├── annotation_guide.html   # original annotation instructions
├── classes.json            # JSON mapping numerical classes to cherry types
├── data
│   ├── train.zip
│   └── val.zip
├── images
│   ├── annotated_1688033955437.jpg
│   ├── train_counts.png
│   └── val_counts.png
├── README.md
└── scripts                 # scripts for easy visualization of the annotated data
    ├── label_training_images.py
    └── requirements.txt
```
### Dataset information
Each numerical class corresponds to the following cherry type:
```
{0: "dark_brown_cherry", 1: "green_cherry", 2: "low_visibility_unsure", 3: "red_cherry", 4: "yellow_cherry"}
```

* ```train```:
  * Training dataset
  * 5,836 annotated images
  * YOLO format

![plot](./images/train_counts.png)

* ```val```:
  * Validation dataset
  * 2,497 annotated images
  * YOLO format

![plot](./images/val_counts.png)

* ```annotation_guide.html```: instructions provided to label the images for cherry detection

## Scripts
The script ```label_training_images.py``` draws the labels on the dataset images and saves the annotated copies in a folder ```./_labelled_dataset_images```.
Assuming you are in the ```scripts``` folder, first
run ```pip3 install -r requirements.txt``` if the required packages are not installed. After that, simply run
```python3 label_training_images.py```
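The YOLO labels described in the README store one box per line as normalized `class x_center y_center width height` values. As a minimal, hypothetical sketch (the example line and the 480 x 640 tile size are illustrative, not taken from the dataset), decoding one such line into pixel coordinates with the class mapping above looks like:

```python
# Class mapping as documented in the README (classes.json).
CLASSES = {0: "dark_brown_cherry", 1: "green_cherry",
           2: "low_visibility_unsure", 3: "red_cherry", 4: "yellow_cherry"}

def decode_yolo_line(line, img_w=480, img_h=640):
    """Turn 'class x_center y_center width height' (all normalized to [0, 1])
    into (class_name, left, top, right, bottom) in pixel coordinates."""
    c, x, y, w, h = map(float, line.split())
    left = int((x - w / 2) * img_w)
    top = int((y - h / 2) * img_h)
    right = int((x + w / 2) * img_w)
    bottom = int((y + h / 2) * img_h)
    return CLASSES[int(c)], left, top, right, bottom

# A made-up label line: a red cherry centered in a 480 x 640 tile.
print(decode_yolo_line("3 0.5 0.5 0.1 0.05"))
```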
_classes.txt
ADDED
@@ -0,0 +1,5 @@
dark_brown_cherry
green_cherry
low_visibility_unsure
red_cherry
yellow_cherry
_pictures_to_exclude.txt
ADDED
@@ -0,0 +1,2 @@
1688117174257
1688718305705
annotation_guide.html
ADDED
@@ -0,0 +1,13 @@
<ul>
<li><b>EVERY CHERRY</b> should be annotated with an <b>INDIVIDUAL bounding box</b>. Try not to miss any cherry, <b>even if it is located in the BACKGROUND</b>.</li>
<li>Each label should correspond to the <b>color of the cherry</b> (green, yellow, red or dark brown). Do not confuse exposure with color: a green cherry with low exposure can appear dark.</li>
<li>If you can see <b>AT LEAST 10% OF A CHERRY, annotate it</b>. For <b>all partially visible cherries</b>, guess the occluded part and <b>cover BOTH the occluded and the visible part with the same box</b>.</li>
<li>If you feel like it is a cherry but you are <b>unsure</b>, use a "low_visibility_unsure" [blue] bounding box. <b>CAUTION</b>: a cherry can be partially occluded but still have good visibility.</li>
<li>Each bounding box should fit the cherry <b>AS CLOSELY AS POSSIBLE</b>.</li>
</ul>
TIPS:
<ul>
<li>Using the sidebar, you can zoom, or change the contrast or saturation of the image to ease the annotation.</li>
<li>You can select several labels by keeping "ctrl" pressed, then delete the selected label(s) using the "backspace" key.</li>
</ul>
<img src="https://i.ibb.co/KNYsbSF/annotation-example-2.png" alt="annotation-example-2" height="350px">
data/train.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e881328f07950cfbe23b30bd9e7e3f661003589b3b30d256e176d0b15b2b86ba
size 651895390
data/val.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b517afde1dd0f5cbe6322ec38665b5d4cbdcb89d54c1c7f36f87fe995d8d35c
size 278569732
images/annotated_1688033955437.jpg
ADDED (Git LFS)

images/train_counts.png
ADDED (Git LFS)

images/val_counts.png
ADDED (Git LFS)
scripts/label_training_images.py
ADDED
@@ -0,0 +1,110 @@
import cv2
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 100
from PIL import ImageColor
from pathlib import Path
import glob
import os
import json

def annotate_images_dataset(image_folder, label_folder, class_file_path, saving_folder, hex_class_colors=None, show=False):
    """
    Visualizes a set of images with their corresponding YOLO labels.
    Args:
        image_folder (str): path of the folder containing the images for object detection
        label_folder (str): path of the folder containing the labels corresponding to the images for object detection
        class_file_path (str): path of the JSON file mapping numerical classes to class labels
        saving_folder (str): path of the folder where the annotated images are saved
        hex_class_colors (dict, optional): dictionary with a HEX color for each label
        show (bool, optional): if True, a prompt with the labelled image opens
    """
    class_dic = get_class_dic(class_file_path)
    Path(saving_folder).mkdir(parents=True, exist_ok=True)
    if not hex_class_colors:
        # default: draw every class in red
        hex_class_colors = {class_name: '#FF0000' for class_name in class_dic.values()}
    color_map = {key: ImageColor.getcolor(hex_class_colors[class_dic[key]], 'RGB') for key in class_dic}
    label_paths = sorted(glob.glob(os.path.join(label_folder, '*')))
    n_labels = len(label_paths)
    for i, label_path in enumerate(label_paths, start=1):
        if i % 100 == 0:
            progress = i / n_labels
            print(f'{progress:.0%} -> image {i} out of {n_labels}')
        file_name = os.path.splitext(os.path.basename(label_path))[0]
        image_file = file_name + '.jpg'
        image_path = os.path.join(image_folder, image_file)
        if os.path.isfile(image_path):
            img = cv2.imread(image_path)
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            dh, dw, _ = img.shape

            with open(label_path, 'r') as f:
                data = f.readlines()

            for yolo_box in data:
                yolo_box = yolo_box.strip()
                c, x, y, w, h = map(float, yolo_box.split(' '))
                # convert normalized YOLO coordinates to pixel corners
                l = int((x - w / 2) * dw)
                r = int((x + w / 2) * dw)
                t = int((y - h / 2) * dh)
                b = int((y + h / 2) * dh)
                # clip the box to the image boundaries
                l = max(l, 0)
                r = min(r, dw - 1)
                t = max(t, 0)
                b = min(b, dh - 1)
                cv2.rectangle(img, (l, t), (r, b), color_map[int(c)], 3)
            if show:
                plt.imshow(img)
                plt.show()
            plt.imsave(os.path.join(saving_folder, f'annotated_{image_file}'), img)
        else:
            print(f'WARNING: {image_path} does not exist')


def get_class_dic(class_file_path):
    """
    Turns a JSON class file into a dict with the numerical class as key and the corresponding label as value.
    Args:
        class_file_path (str): path to the JSON file listing the labels

    Returns:
        dict: dictionary with the numerical class as key and the corresponding label as value
    """
    with open(class_file_path) as f:
        class_dic = json.load(f)
    class_dic = {int(k): v for k, v in class_dic.items()}
    return class_dic


if __name__ == '__main__':
    dataset_folder_names = ['train', 'val']
    dataset_prefix_folder = '../data'
    saving_prefix_folder = '../_labelled_dataset_images'
    show = False
    hex_class_colors = {'green_cherry': '#9CF09A',
                        'yellow_cherry': '#F3C63D',
                        'red_cherry': '#F44336',
                        'dark_brown_cherry': '#C36105',
                        'low_visibility_unsure': '#02D5FA'}
    for dataset_folder_name in dataset_folder_names:
        print(f'dataset: {dataset_folder_name}:\n')
        full_saving_folder = os.path.join(saving_prefix_folder, dataset_folder_name)
        full_dataset_folder = os.path.join(dataset_prefix_folder, dataset_folder_name)
        class_file_path = os.path.join('..', 'classes.json')
        image_folder = os.path.join(full_dataset_folder, 'images')
        label_folder = os.path.join(full_dataset_folder, 'labels')
        annotate_images_dataset(
            image_folder=image_folder,
            label_folder=label_folder,
            class_file_path=class_file_path,
            saving_folder=full_saving_folder,
            hex_class_colors=hex_class_colors,
            show=show,
        )
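The script above converts each class's HEX color into an RGB tuple via `PIL.ImageColor.getcolor` before drawing. A dependency-free sketch of that same hex-to-RGB conversion, using the palette from the script (the `hex_to_rgb` helper is illustrative, not part of the repository):

```python
# Hypothetical helper: same conversion that ImageColor.getcolor(c, 'RGB')
# performs for '#RRGGBB' strings.
def hex_to_rgb(hex_color):
    """Convert '#RRGGBB' to an (R, G, B) tuple of ints."""
    hex_color = hex_color.lstrip('#')
    return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))

# Palette and class mapping as used in label_training_images.py.
class_dic = {0: "dark_brown_cherry", 1: "green_cherry",
             2: "low_visibility_unsure", 3: "red_cherry", 4: "yellow_cherry"}
hex_class_colors = {'green_cherry': '#9CF09A', 'yellow_cherry': '#F3C63D',
                    'red_cherry': '#F44336', 'dark_brown_cherry': '#C36105',
                    'low_visibility_unsure': '#02D5FA'}
color_map = {k: hex_to_rgb(hex_class_colors[v]) for k, v in class_dic.items()}
print(color_map[3])  # RGB used to draw red_cherry boxes
```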
scripts/requirements.txt
ADDED
@@ -0,0 +1,3 @@
Pillow==9.3.0
matplotlib==3.7.1
opencv-python==4.7.0.72