---
license: other
license_name: cc-by-sa-and-odbl
license_link: LICENSE
language:
- en
pretty_name: Map It Anywhere
size_categories:
- 1M<n<10M
---
# Dataset Card for Map It Anywhere (MIA)
The Map It Anywhere (MIA) dataset contains map-prediction-ready data
curated from public datasets.
## Dataset Details
### Dataset Description
The Map It Anywhere (MIA) dataset contains 1.2 million high-quality first-person-view (FPV) image and bird's eye view (BEV) map pairs covering 470 square kilometers, thereby facilitating future map prediction
research on generalizability and robustness. The dataset is curated using the MIA data engine, sampling from six urban-centered locations: New York, Chicago, Houston, Los Angeles, Pittsburgh, and San Francisco.
- **Curated by:** Airlab at CMU (Cherie Ho, Jiaye Zou, Omar Alama, Sai Mitheran Jagadesh Kumar, Benjamin Chiang, Taneesh Gupta, Chen Wang, Nikhil Keetha, Katia Sycara, Sebastian Scherer)
- **License:** The first-person-view images and the associated metadata of the MIA dataset are published under CC BY-SA, following Mapillary. The bird's eye view maps of the MIA dataset are published under ODbL, following OpenStreetMap.
### Dataset Sources
The MIA dataset is generated using the MIA data engine, an open-source data curation pipeline for automatically curating paired, world-scale FPV and BEV data.
- **Repository:** https://github.com/MapItAnywhere/MapItAnywhere
## Uses
### Direct Use
This dataset is suitable for training and evaluating bird's eye view map prediction models.
In the paper, we use it for first-person-view to bird's eye view map prediction.
## Dataset Structure
```
ROOT
|
+--- LOCATION_0                                    # location folder
|    |
|    +--- images                                   # FPV Images (XX.jpg)
|    +--- semantic_masks                           # Semantic Masks (XX.npz)
|    +--- flood_fill                               # Visibility Masks (XX.npz)
|    +--- dump.json                                # Camera pose information for IDs in LOCATION_0
|    +--- image_points.parquet
|    +--- image_metadata.parquet
|    +--- image_metadata_filtered.parquet
|    +--- image_metadata_filtered_processed.parquet
+--- LOCATION_1
|    ...
+--- LOCATION_2
+--- README.md
+--- samples.pdf                                   # Visualization of sample data
```
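Below is a minimal loading sketch for the structure above. The location folder name, the placeholder image ID `XX`, the parquet columns, and the array keys inside the `.npz` archives are assumptions for illustration; inspect the actual files (or the repository's loaders) for the definitive layout.

```python
import json
from pathlib import Path

import numpy as np
import pandas as pd
from PIL import Image

root = Path("ROOT/LOCATION_0")  # hypothetical location folder

# Camera pose information for all image IDs in this location
with open(root / "dump.json") as f:
    poses = json.load(f)

# Filtered and processed image metadata (column names are assumptions)
metadata = pd.read_parquet(root / "image_metadata_filtered_processed.parquet")

image_id = "XX"  # placeholder; real files are named <image_id>.jpg / <image_id>.npz
fpv = Image.open(root / "images" / f"{image_id}.jpg")

# The .npz archives may store arrays under dataset-specific keys; list them first
semantic = np.load(root / "semantic_masks" / f"{image_id}.npz")
visibility = np.load(root / "flood_fill" / f"{image_id}.npz")
print(semantic.files, visibility.files)
```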
## Dataset Creation
### Curation Rationale
The MIA data engine and dataset were created to accelerate research progress towards anywhere map prediction. Current map prediction research builds on only a few map prediction datasets released by autonomous vehicle companies, which cover a very limited area. We therefore present the MIA data engine, a more scalable approach that sources data from large-scale crowd-sourced mapping platforms: Mapillary for FPV images and OpenStreetMap for BEV semantic maps.
### Source Data
The MIA dataset includes data from two sources: [Mapillary](https://www.mapillary.com/) for First-Person-View (FPV) images, and [OpenStreetMap](https://www.openstreetmap.org) for Bird's-Eye-View (BEV) maps.
For FPV retrieval, we leverage Mapillary, a massive public database licensed under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0/), with over 2 billion crowd-sourced images. The images span various weather and lighting conditions and are collected using diverse camera models and focal lengths. Furthermore, images are taken by pedestrians, vehicles, bicyclists, etc. This diversity enables the collection of more dynamic and difficult scenarios critical for anywhere map prediction.
When uploading to the Mapillary platform, users submit their images under Mapillary's terms, and all shared images are licensed under CC BY-SA; more details can be found on the [Mapillary Licenses page](https://help.mapillary.com/hc/en-us/articles/115001770409-Licenses).
In addition, Mapillary integrates several mechanisms to minimize privacy concerns, such as applying technology to blur faces and license plates, and asking users to report any imagery that may contain personal data. More information can be found on the [Mapillary Privacy Policy page](https://www.mapillary.com/privacy).
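As a hedged illustration of FPV retrieval, the sketch below queries Mapillary's public Graph API (v4) for images inside a bounding box. The token, bounding box, requested fields, and limit are placeholders; the MIA data engine's actual query logic lives in the repository and may differ.

```python
import requests

MAPILLARY_TOKEN = "MLY|..."  # placeholder access token

# Bounding box as min_lon,min_lat,max_lon,max_lat (small area around downtown Pittsburgh)
bbox = "-80.01,40.43,-79.99,40.45"

resp = requests.get(
    "https://graph.mapillary.com/images",
    params={
        "access_token": MAPILLARY_TOKEN,
        "bbox": bbox,
        "fields": "id,computed_geometry,thumb_2048_url",
        "limit": 50,
    },
    timeout=30,
)
resp.raise_for_status()

# Print each image ID with its geo-coordinates
for img in resp.json().get("data", []):
    print(img["id"], img["computed_geometry"]["coordinates"])
```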
For BEV retrieval, we leverage OpenStreetMap (OSM), a global crowd-sourced mapping platform open-sourced under the [Open Data Commons Open Database License (ODbL)](https://opendatacommons.org/licenses/odbl/). OSM provides
rich vectorized annotations for streets, sidewalks, buildings, etc. OpenStreetMap restricts the mapping of private information where "it violates the privacy
of people living in this world"; guidelines can be found [here](https://wiki.openstreetmap.org/wiki/Limitations_on_mapping_private_information).
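For context, the sketch below fetches building footprints for a small bounding box from the public Overpass API, one common way to retrieve OSM vector annotations. The query, endpoint choice, and bounding box are illustrative assumptions and not necessarily how the MIA data engine queries OSM.

```python
import requests

# Overpass QL query for building footprints in a bounding box
# given as (south, west, north, east); values are placeholders
query = """
[out:json][timeout:60];
(
  way["building"](40.43,-80.01,40.45,-79.99);
);
out geom;
"""

resp = requests.post(
    "https://overpass-api.de/api/interpreter",
    data={"data": query},
    timeout=90,
)
resp.raise_for_status()

elements = resp.json()["elements"]
print(f"Fetched {len(elements)} building ways")
```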
## Bias, Risks, and Limitations
While we show promising generalization performance on conventional datasets, we note that crowd-sourced data inherently contains label noise, to a higher degree than manually collected data, in both pose correspondence and BEV map labeling. Such noise is common across large-scale, automatically scraped or curated benchmarks such as ImageNet. While we recognize that our sampled dataset is biased towards locations in the US, our MIA data engine is applicable to other locations worldwide.
Our work relies heavily on crowd-sourced data, which places the burden of data collection on individual contributors and open-source communities.
## Dataset Card Authors
Cherie Ho, Jiaye Zou, Omar Alama, Sai Mitheran Jagadesh Kumar, Benjamin Chiang, Taneesh Gupta, Chen Wang, Nikhil Keetha, Katia Sycara, Sebastian Scherer
## Dataset Card Contact
Cherie Ho (cherieh@andrew.cmu.edu)