Dataset Card for Map It Anywhere (MIA)

The Map It Anywhere (MIA) dataset contains map-prediction-ready data curated from public datasets.

Dataset Details

Dataset Description

The Map It Anywhere (MIA) dataset contains 1.2 million high-quality first-person-view (FPV) and bird's-eye-view (BEV) map pairs covering 470 km², thereby facilitating future map prediction research on generalizability and robustness. The dataset is curated using the MIA data engine by sampling from six urban-centered locations: New York, Chicago, Houston, Los Angeles, Pittsburgh, and San Francisco.

  • Curated by: Airlab at CMU (Cherie Ho, Jiaye Zou, Omar Alama, Sai Mitheran Jagadesh Kumar, Benjamin Chiang, Taneesh Gupta, Chen Wang, Nikhil Keetha, Katia Sycara, Sebastian Scherer)
  • License: The first-person-view images and the associated metadata of the MIA dataset are published under CC BY-SA, following Mapillary. The bird's-eye-view maps of the MIA dataset are published under ODbL, following OpenStreetMap.

Dataset Sources

The MIA dataset is generated using the MIA data engine, an open-source data curation pipeline for automatically curating paired, world-scale FPV & BEV data.

Uses

Direct Use

This dataset is suitable for training and evaluating bird's-eye-view (BEV) map prediction models. In the paper, we use it for first-person-view (FPV) to BEV map prediction.
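As a rough illustration (not the training pipeline used in the paper), the sketch below pairs each FPV image with its BEV semantic mask from a single location folder, following the directory layout described under Dataset Structure below. The pairing by filename ID is taken from that layout; the array names stored inside the .npz archives are not documented on this card, so they are returned as-is.

# Minimal sketch of an FPV-to-BEV pair loader, assuming the layout shown under
# "Dataset Structure" (images/XX.jpg paired with semantic_masks/XX.npz).
from pathlib import Path

import numpy as np
from PIL import Image
from torch.utils.data import Dataset


class MIAPairs(Dataset):
    """Yields (FPV image, BEV semantic mask arrays) pairs for one location folder."""

    def __init__(self, location_dir):
        self.location_dir = Path(location_dir)
        # Sample IDs come from the FPV image filenames (e.g. "12345.jpg" -> "12345").
        self.ids = sorted(p.stem for p in (self.location_dir / "images").glob("*.jpg"))

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        sample_id = self.ids[idx]
        fpv = np.array(Image.open(self.location_dir / "images" / f"{sample_id}.jpg"))
        # Key names inside the .npz archive are not documented here, so all stored
        # arrays are returned as a dict for the caller to select from.
        bev = dict(np.load(self.location_dir / "semantic_masks" / f"{sample_id}.npz"))
        return fpv, bev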

Dataset Structure

ROOT
|-- LOCATION_0                              # Location folder
|   |-- images                              # FPV Images (XX.jpg)
|   |-- semantic_masks                      # Semantic Masks (XX.npz)
|   |-- flood_fill                          # Visibility Masks (XX.npz)
|   |-- dump.json                           # Camera pose information for IDs in the location
|   |-- image_points.parquet
|   |-- image_metadata.parquet
|   |-- image_metadata_filtered.parquet
|   +-- image_metadata_filtered_processed.parquet
|-- LOCATION_1
|-- ...
|-- LOCATION_2
|-- README.md
+-- samples.pdf                             # Visualization of sample data
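For quick inspection, the following sketch loads one sample from a location folder. The parquet column names and the .npz array keys are not documented on this card, so the snippet only lists them, and the folder path is a placeholder to substitute with a real location directory.

# Minimal sketch for browsing a single location folder (path is a placeholder).
import json
from pathlib import Path

import numpy as np
import pandas as pd
from PIL import Image

location = Path("ROOT/LOCATION_0")

# Camera pose information for the image IDs in this location.
poses = json.loads((location / "dump.json").read_text())

# Image metadata tables; column names are not documented here, so just list them.
meta = pd.read_parquet(location / "image_metadata_filtered_processed.parquet")
print(meta.columns.tolist())

# One sample: FPV image plus its semantic and visibility masks.
sample_id = next((location / "images").glob("*.jpg")).stem
fpv = Image.open(location / "images" / f"{sample_id}.jpg")
semantic = np.load(location / "semantic_masks" / f"{sample_id}.npz")
visibility = np.load(location / "flood_fill" / f"{sample_id}.npz")
print(fpv.size, semantic.files, visibility.files)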

Dataset Creation

Curation Rationale

The MIA data engine and dataset were created to accelerate research progress towards anywhere map prediction. Current map prediction research builds on only a few map prediction datasets released by autonomous vehicle companies, which cover a very limited area. We therefore present the MIA data engine, a more scalable approach that sources from large-scale crowd-sourced mapping platforms: Mapillary for FPV images and OpenStreetMap for BEV semantic maps.

Source Data

The MIA dataset includes data from two sources: Mapillary for first-person-view (FPV) images, and OpenStreetMap for bird's-eye-view (BEV) maps.

For FPV retrieval, we leverage Mapillary, a massive public database licensed under CC BY-SA with over 2 billion crowd-sourced images. The images span varied weather and lighting conditions and are collected with diverse camera models and focal lengths. Furthermore, images are taken by pedestrians, vehicles, bicyclists, etc. This diversity enables the collection of more dynamic and difficult scenarios critical for anywhere map prediction. When uploading to the Mapillary platform, users submit images under Mapillary's terms, and all shared images are licensed under CC BY-SA; more details can be found on the Mapillary License page. In addition, Mapillary integrates several mechanisms to minimize privacy concerns, such as applying technology to blur faces and license plates and requiring users to report any imagery that may contain personal data. More information can be found on the Mapillary Privacy Policy page.

For BEV retrieval, we leverage OpenStreetMap (OSM), a global crowd-sourced mapping platform released under the Open Data Commons Open Database License (ODbL). OSM provides rich vectorized annotations for streets, sidewalks, buildings, etc. OpenStreetMap restricts the mapping of private information where "it violates the privacy of people living in this world"; guidelines can be found here.

Bias, Risks, and Limitations

While we show promising generalization performance on conventional datasets, we note that crowd-sourced data inherently contains label noise, to a higher degree than manually collected data, both in pose correspondence and in BEV map labeling. Such noise is common across large-scale, automatically scraped or curated benchmarks such as ImageNet. While we recognize that our sampled dataset is biased towards locations in the US, the MIA data engine is applicable to other locations worldwide. Our work relies heavily on crowd-sourced data, placing the burden of data collection on people and open-source contributions.

Dataset Card Authors

Cherie Ho, Jiaye Zou, Omar Alama, Sai Mitheran Jagadesh Kumar, Benjamin Chiang, Taneesh Gupta, Chen Wang, Nikhil Keetha, Katia Sycara, Sebastian Scherer

Dataset Card Contact

Cherie Ho (cherieh@andrew.cmu.edu)
