---
license: apache-2.0
language:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- depth-estimation
task_ids: []
pretty_name: NYU Depth V2
tags: 
- depth-estimation
paperswithcode_id: nyuv2
dataset_info:
  features:
  - name: image
    dtype: image
  - name: depth_map
    dtype: image
  splits:
  - name: train
    num_bytes: 20212097551
    num_examples: 47584
  - name: validation
    num_bytes: 240785762
    num_examples: 654
  download_size: 35151124480
  dataset_size: 20452883313
---

# Dataset Card for NYU Depth V2

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)


## Dataset Description

- **Homepage:** [NYU Depth Dataset V2 homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)
- **Repository:** [FastDepth repository](https://github.com/dwofk/fast-depth), which was used to source the dataset hosted here. It is a preprocessed version of the original NYU Depth V2 dataset linked above and is also used in [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/nyu_depth_v2).
- **Paper:** [Indoor Segmentation and Support Inference from RGBD Images](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) and [FastDepth: Fast Monocular Depth Estimation on Embedded Systems](https://arxiv.org/abs/1903.03273)
- **Point of Contact:** [Nathan Silberman](mailto:silberman@cs.nyu.edu) and [Diana Wofk](mailto:dwofk@alum.mit.edu)

### Dataset Summary

As per the [dataset homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html):

The NYU-Depth V2 data set comprises video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft [Kinect](http://www.xbox.com/kinect). It features:

* 1449 densely labeled pairs of aligned RGB and depth images
* 464 new scenes taken from 3 cities
* 407,024 new unlabeled frames
* Each object is labeled with a class and an instance number (cup1, cup2, cup3, etc.)

The dataset has several components:

* Labeled: A subset of the video data accompanied by dense multi-class labels. This data has also been preprocessed to fill in missing depth labels.
* Raw: The raw RGB, depth, and accelerometer data as provided by the Kinect.
* Toolbox: Useful functions for manipulating the data and labels.
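
The version hosted here exposes the FastDepth-preprocessed data through the 🤗 Datasets library, with an `image` and a `depth_map` feature per example (see the YAML header above). Below is a minimal loading sketch; the Hub repository ID is an assumption, so adjust it to wherever this card is actually hosted:

```python
from datasets import load_dataset
import numpy as np

# Assumed Hub repository ID for this card; replace if hosted elsewhere.
ds = load_dataset("sayakpaul/nyu_depth_v2", split="train")

example = ds[0]
rgb = example["image"]        # PIL image: the RGB frame
depth = example["depth_map"]  # PIL image: the aligned depth map

# Convert the depth map to a NumPy array for numeric work.
depth_values = np.array(depth)
print(rgb.size, depth_values.shape, depth_values.dtype)
```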

### Supported Tasks

- `depth-estimation`: Depth estimation is the task of predicting the perceived depth of a scene from a single image, i.e., estimating the distance of each pixel from the camera.
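
As an illustration (not part of the original card), a pretrained monocular depth estimator can be run on an RGB frame from this dataset via the `transformers` depth-estimation pipeline. `Intel/dpt-large` is one public checkpoint, and the repository ID is the same assumption as in the loading sketch above:

```python
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("sayakpaul/nyu_depth_v2", split="validation")  # assumed Hub ID
estimator = pipeline("depth-estimation", model="Intel/dpt-large")

result = estimator(ds[0]["image"])
print(result["depth"].size)             # PIL image of predicted relative depth
print(result["predicted_depth"].shape)  # raw model output tensor
```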