---
license: apache-2.0
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: id
    dtype: int32
  - name: image
    dtype: image
  - name: sensor_type
    dtype: string
  - name: question_type
    dtype: string
  - name: question
    dtype: string
  - name: question_query
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 1455392605.0
    num_examples: 6248
  download_size: 903353168
  dataset_size: 1455392605.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# ⚡ SPARK (multi-vision Sensor Perception And Reasoning benchmarK)
[**🌐 github**](https://github.com/top-yun/SPARK) | [**🤗 Dataset**](https://huggingface.co/datasets/topyun/SPARK) | [**📃 Paper**](https://arxiv.org/abs/2408.12114)
## Dataset Details
<p align="center">
<img src="https://raw.githubusercontent.com/top-yun/SPARK/main/resources/examples.png" height="400px" width="800px">
</p>
SPARK aims to reduce the fundamental information gap between ordinary images and multi-vision sensor data. We automatically generated 6,248 vision-language test samples that assess multi-vision sensory perception and multi-vision sensory reasoning over physical sensor knowledge, spanning different question formats and different types of sensor-related questions.
## Uses
You can download the dataset as follows:
```python
from datasets import load_dataset
test_dataset = load_dataset("topyun/SPARK", split="train")
```
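Each example exposes the fields listed in the dataset card above (`id`, `image`, `sensor_type`, `question_type`, `question`, `question_query`, `answer`). As a quick sanity check, you can inspect one sample like this (a minimal sketch; the printed values depend on the sample):

```python
# Peek at one sample; the `image` field is decoded to a PIL image by `datasets`.
sample = test_dataset[0]
print(sample["id"], sample["sensor_type"], sample["question_type"])
print(sample["question"])
print(sample["answer"])
```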
Additionally, we provide two example scripts for evaluation: one for open models ([**test.py**](https://github.com/top-yun/SPARK/blob/main/test.py)) and one for closed models ([**test_closed_models.py**](https://github.com/top-yun/SPARK/blob/main/test_closed_models.py)). You can run them as shown below.
If you have 4 GPUs and want to run the experiment with llava-1.5-7b, you can do the following:
```bash
accelerate launch --config_file utils/ddp_accel_fp16.yaml \
--num_processes=4 \
test.py \
--batch_size 1 \
--model llava
```
When running the closed models, make sure to insert your API key into the [**config.py**](https://github.com/top-yun/SPARK/blob/main/config.py) file.
If you have 1 GPU and want to run the experiment with gpt-4o, you can do the following:
```bash
accelerate launch --config_file utils/ddp_accel_fp16.yaml \
--num_processes=1 \
test_closed_models.py \
--batch_size 8 \
--model gpt \
--multiprocess True
```
### Tips
The evaluation script simply checks whether 'A', 'B', 'C', 'D', 'yes', or 'no' appears at the beginning of the model's response.
So, if the model you're evaluating tends to answer in full sentences (e.g., "'B'ased on ..." or "'C'onsidering ..."), the check can misfire; you can resolve this by adding "Do not include any additional text." at the end of the prompt. A sketch of the check is shown below.
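For reference, the check amounts to prefix matching on the model's response. The following is a minimal sketch of that idea, not the exact code in [**test.py**](https://github.com/top-yun/SPARK/blob/main/test.py); the function name and details are illustrative:

```python
# Minimal sketch of the prefix-based answer check described above.
# The real logic lives in test.py; names here are illustrative only.
VALID_PREFIXES = ("A", "B", "C", "D", "yes", "no")

def extract_choice(response: str):
    """Return the option the response starts with, or None if none matches."""
    text = response.strip().lower()
    for prefix in VALID_PREFIXES:
        if text.startswith(prefix.lower()):
            return prefix
    return None

print(extract_choice("B. The image was captured by a thermal sensor."))  # -> "B"
print(extract_choice("Based on the image, ..."))  # -> "B" (the false match the tip warns about)
```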
### Source Data
#### Data Collection and Processing
The test samples are built from five public datasets: [MS-COCO](https://arxiv.org/abs/1405.0312), [M3FD](https://arxiv.org/abs/2203.16220v1), [Dog&People](https://public.roboflow.com/object-detection/thermal-dogs-and-people), [RGB-D scene dataset](https://arxiv.org/abs/2110.11590), and [UNIFESP X-ray Body Part Classifier Competition dataset](https://www.kaggle.com/competitions/unifesp-x-ray-body-part-classifier).
## Citation
**BibTeX:**
```bibtex
@misc{yu2024sparkmultivisionsensorperception,
title={SPARK: Multi-Vision Sensor Perception and Reasoning Benchmark for Large-scale Vision-Language Models},
author={Youngjoon Yu and Sangyun Chung and Byung-Kwan Lee and Yong Man Ro},
year={2024},
eprint={2408.12114},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2408.12114},
}
```
## Contact
[SangYun Chung](https://sites.google.com/view/sang-yun-chung/profile): jelarum@kaist.ac.kr