Modalities: Image, Text
Formats: parquet
Libraries: Datasets, Dask
topyun committed
Commit 86e5e97 (1 parent: c4ed33c)

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -35,11 +35,11 @@ configs:
 
 <!-- Provide a quick summary of the dataset. -->
 <p align="center">
-<img src="resources/problems.png" :height="300px" width="600px">
+<img src="https://raw.githubusercontent.com/top-yun/SPARK/resources/problems.png" :height="300px" width="600px">
 </p>
 
 <p align="center">
-<img src="resources/examples.png" :height="400px" width="800px">
+<img src="https://raw.githubusercontent.com/top-yun/SPARK/resources/examples.png" :height="400px" width="800px">
 </p>
 
 SPARK can reduce the fundamental multi-vision sensor information gap between images and multi-vision sensors. We generated 6,248 vision-language test samples automatically to investigate multi-vision sensory perception and multi-vision sensory reasoning on physical sensor knowledge proficiency across different formats, covering different types of sensor-related questions.
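
The commit swaps the relative image paths for absolute raw.githubusercontent.com URLs so the figures render on the dataset card rather than only inside the GitHub repo. Since the page lists the Datasets library and the parquet format, a minimal loading sketch follows; the repo ID topyun/SPARK and the train split are assumptions inferred from the committer's username and the top-yun/SPARK GitHub URLs, so adjust them to the actual dataset card.

from datasets import load_dataset

# Assumed repo ID and split name; if the card's `configs:` block defines
# multiple configs, pass the config name as the second positional argument.
ds = load_dataset("topyun/SPARK", split="train")

print(ds)     # features and row count (the card reports 6,248 test samples)
print(ds[0])  # inspect one vision-language test sample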