Update README.md
download_size: 900G
---

## Dataset Description

- **Homepage:** https://annonymous2023neuripsdataset.github.io/
- **Repository:** N/A for now.
- **Paper:** N/A for now.
- **Leaderboard:** N/A for now.
- **Point of Contact:** lic032@ucsd.edu

### Dataset Summary

Our dataset comprises 64 objects, each captured from 70 views, under 13 lighting patterns and under 142 One-Light-At-a-Time (OLAT) illuminations. The 70 views are captured by 48 DSLR cameras and 22 high-speed cameras.

### Supported Tasks

* Novel view synthesis: The dataset can be used to evaluate NVS methods such as NeRF, TensoRF, and NeuS.
* Inverse rendering: The dataset can be used to evaluate inverse rendering algorithms, which decompose illumination, object geometry, and object materials.

### Dataset Download

Since the whole dataset is very large, we provide a script [here](https://huggingface.co/datasets/fsky097/OpenIllumination/blob/main/open_illumination.py) to download data according to the illumination type (lighting pattern or OLAT) and the object ID. You can also modify the code to customize it to your requirements.

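If you prefer not to use the script, a partial download can also be sketched with the Hugging Face Hub API. The folder layout assumed below (`lighting_patterns/obj_XX/`, `olat/obj_XX/`) is a guess; check the repository's file listing and adjust the patterns accordingly.

```python
# Sketch: downloading a subset of the dataset via the Hugging Face Hub API.
# The folder layout used in the patterns is an ASSUMPTION, not confirmed
# by the dataset card -- verify against the repo's file listing.

def make_patterns(light_type, obj_ids):
    """Build allow_patterns selecting only the requested objects."""
    return [f"{light_type}/obj_{i:02d}/*" for i in obj_ids]

def download_subset(light_type, obj_ids, out_dir):
    # pip install huggingface_hub; imported lazily so make_patterns stays
    # usable without the dependency.
    from huggingface_hub import snapshot_download
    # allow_patterns limits the transfer to matching files, avoiding the
    # full ~900 GB download.
    return snapshot_download(
        repo_id="fsky097/OpenIllumination",
        repo_type="dataset",
        local_dir=out_dir,
        allow_patterns=make_patterns(light_type, obj_ids),
    )

# Example (requires network access):
# download_subset("lighting_patterns", [1, 2], "./openillumination")
```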
### Languages

English

## Dataset Structure

### Data Fields

For each image, the following fields are provided:

* file_path: str, the file path to an image.
* light_idx: int, the index of the illumination, from 1 to 13 for lighting patterns, or from 0 to 141 for OLAT.
* transform_matrix: list, a 4x4 matrix representing the camera pose for this image (in OpenCV convention).
* camera_angle_x: float, can be used to compute the corresponding camera intrinsics.
* obj_mask: the object mask, can be read with `imageio.imread(OBJ_MASK_PATH) > 0`, used for PSNR evaluation.
* com_mask (optional): the union of the object mask and the support mask, can be read with `imageio.imread(COM_MASK_PATH) > 0`, used for t

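The camera_angle_x field can be turned into an intrinsics matrix with the usual convention of a horizontal field of view in radians. The image resolution and the centered principal point below are assumptions for illustration, not values from the dataset card.

```python
# Sketch: recovering pinhole intrinsics from camera_angle_x, assuming it is
# the horizontal FOV in radians (the common convention for this field).
# Image size and a principal point at the image center are assumptions.
import math

def intrinsics_from_angle_x(camera_angle_x, width, height):
    """Return the 3x3 intrinsics matrix K as nested lists."""
    # The horizontal FOV spans the full image width, so
    # tan(camera_angle_x / 2) = (width / 2) / focal.
    focal = 0.5 * width / math.tan(0.5 * camera_angle_x)
    return [
        [focal, 0.0, width / 2.0],
        [0.0, focal, height / 2.0],  # square pixels assumed: fy == fx
        [0.0, 0.0, 1.0],
    ]
```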
### Data Splits

The data is split into training and testing views. For each object captured under the 13 lighting patterns, the training set and the testing set contain 38 and 10 views, respectively. For each object captured under OLAT, the training set and the testing set contain 17 and 5 views, respectively.

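The obj_mask field above is used for PSNR evaluation; a minimal dependency-free sketch of masked PSNR, assuming pixel values normalized to [0, 1] (real code would use numpy over image arrays):

```python
# Sketch: PSNR computed only over pixels selected by the object mask,
# as described in the Data Fields section. Nested lists stand in for
# image arrays; values are assumed normalized to [0, 1].
import math

def masked_psnr(pred, gt, mask):
    """PSNR over pixels where mask is True (peak value 1.0)."""
    se, n = 0.0, 0
    for p_row, g_row, m_row in zip(pred, gt, mask):
        for p, g, m in zip(p_row, g_row, m_row):
            if m:
                se += (p - g) ** 2
                n += 1
    mse = se / n
    return float("inf") if mse == 0.0 else 10.0 * math.log10(1.0 / mse)
```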
## Dataset Creation

### Curation Rationale

From the paper:

> Recent efforts have introduced some datasets that incorporate multiple illuminations in real-world settings. However, most of them are limited either in the number of views or the number of illuminations; few of them provide object-level data as well. Consequently, these existing datasets prove unsuitable for evaluating inverse rendering methods on real-world objects.
>
> To address this, we present a new dataset containing objects with a variety of materials, captured under multiple views and illuminations, allowing for reliable evaluation of various inverse rendering tasks with real data.

### Source Data

#### Initial Data Collection and Normalization

From the paper:

> Our dataset was acquired using a setup similar to a traditional light stage, where densely distributed cameras and controllable lights are attached to a static frame around a central platform.

### Annotations

#### Annotation process

From the paper:

> To obtain high-quality segmentation masks, we propose to use Segment-Anything (SAM) to perform instance segmentation. However, we find that the performance is not satisfactory. One reason is that the object categories are highly undefined. In this case, even combining the bounding box and point prompts cannot produce satisfactory results. To address this problem, we propose to use multiple bounding-box prompts to perform segmentation for each possible part and then calculate a union of the masks as the final object mask.
>
> For objects with very detailed and thin structures, e.g. hair, we use an off-the-shelf background matting method to perform object segmentation.

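The multi-box union step quoted above can be sketched as follows; `predict_mask` is a hypothetical stand-in for the actual per-box segmentation call (e.g. SAM's `SamPredictor.predict` with a box prompt), not part of the dataset's released code.

```python
# Sketch of the mask-union step: run segmentation once per bounding-box
# prompt, then OR the resulting boolean masks into one object mask.
# predict_mask is a HYPOTHETICAL callable standing in for the real
# segmentation call (e.g. SAM with a box prompt).
import numpy as np

def union_mask(boxes, predict_mask):
    """Combine per-part masks from multiple box prompts into one mask."""
    masks = [predict_mask(box) for box in boxes]
    combined = masks[0].astype(bool)
    for m in masks[1:]:
        combined |= m.astype(bool)  # union of all part masks
    return combined
```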
#### Who are the annotators?

Linghao Chen, Isabella Liu, and Ziyang Fu.

## Additional Information

### Dataset Curators

Isabella Liu, Linghao Chen, Ziyang Fu, Liwen Wu, Haian Jin, Zhong Li, Chin Ming Ryan Wong, Yi Xu, Ravi Ramamoorthi, Zexiang Xu, and Hao Su.

### Licensing Information

Non-commercial use only.

### Citation Information

```bibtex
@article{liu2023openillumination,
  title={OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects},
  author={Liu, Isabella and Chen, Linghao and Fu, Ziyang and Wu, Liwen and Jin, Haian and Li, Zhong and Wong, Chin Ming Ryan and Xu, Yi and Ramamoorthi, Ravi and Xu, Zexiang and Su, Hao},
  year={2023}
}
```
!!!NOTE!!!

THIS REPO IS DEPRECATED! PLEASE VISIT [here](https://huggingface.co/datasets/OpenIllumination/OpenIllumination).