---
license: apache-2.0
size_categories:
  - 1K<n<10K
dataset_info:
  features:
    - name: id
      dtype: int32
    - name: image
      dtype: image
    - name: sensor_type
      dtype: string
    - name: question_type
      dtype: string
    - name: question
      dtype: string
    - name: question_query
      dtype: string
    - name: answer
      dtype: string
  splits:
    - name: train
      num_bytes: 1455392605
      num_examples: 6248
  download_size: 903353168
  dataset_size: 1455392605
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# SPARK (multi-vision Sensor Perception And Reasoning benchmarK)

SPARK aims to reduce the fundamental information gap between ordinary images and multi-vision sensor data. We automatically generated 6,248 vision-language test samples to investigate multi-vision sensory perception and multi-vision sensory reasoning, i.e., proficiency in physical sensor knowledge, across different formats and different types of sensor-related questions.

## Dataset Details

## Uses

### Direct Use
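
Below is a minimal sketch of loading the train split with the Hugging Face `datasets` library and reading one sample against the features declared in the metadata above. The repository id `topyun/SPARK` is assumed from the page context and may need adjusting.

```python
from datasets import load_dataset

# Assumed repository id; adjust if the dataset lives under a different namespace.
ds = load_dataset("topyun/SPARK", split="train")  # single train split, 6,248 examples

sample = ds[0]
print(sample["sensor_type"])    # sensor modality of the image (string)
print(sample["question_type"])  # question category, e.g. perception vs. reasoning (string)
print(sample["question"])
print(sample["answer"])
print(sample["image"].size)     # the image feature decodes to a PIL.Image
```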

## Source Data

### Data Collection and Processing

These instruction samples are built from five public datasets: MS-COCO, M3FD, Dog&People, the RGB-D Scene dataset, and the UNIFESP X-ray Body Part Classifier Competition dataset.
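
To see how the 6,248 samples are distributed across the sensor modalities contributed by these source datasets, a short sketch (reusing `ds` from the loading example above) tallies the `sensor_type` column:

```python
from collections import Counter

# Column access returns plain Python strings, so no images are decoded here.
sensor_counts = Counter(ds["sensor_type"])
for sensor, count in sensor_counts.most_common():
    print(f"{sensor}: {count}")
```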

## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Contact

SangYun Chung: jelarum@kaist.ac.kr