---
license: cdla-permissive-2.0
dataset_info:
  - config_name: maze
    features: 
    - name: id
      dtype: int32
    - name: image
      dtype: image
    - name: prompt
      dtype: string
    - name: ground_truth
      dtype: string
    - name: task
      dtype: string
    - name: question_type
      dtype: string
    - name: target_options
      dtype: string
  - config_name: maze_text_only
    features:
    - name: id
      dtype: int32
    - name: prompt
      dtype: string
    - name: ground_truth
      dtype: string
    - name: task
      dtype: string
    - name: question_type
      dtype: string
    - name: target_options
      dtype: string
  - config_name: spatial_grid
    features:
    - name: id
      dtype: int32
    - name: image
      dtype: image
    - name: prompt
      dtype: string
    - name: ground_truth
      dtype: string
    - name: task
      dtype: string
    - name: question_type
      dtype: string
    - name: target_options
      dtype: string
  - config_name: spatial_grid_text_only
    features:
    - name: id
      dtype: int32
    - name: prompt
      dtype: string
    - name: ground_truth
      dtype: string
    - name: task
      dtype: string
    - name: question_type
      dtype: string
    - name: target_options
      dtype: string
  - config_name: spatial_map
    features:
    - name: id
      dtype: int32
    - name: image
      dtype: image
    - name: prompt
      dtype: string
    - name: ground_truth
      dtype: string
    - name: task
      dtype: string
    - name: question_type
      dtype: string
    - name: target_options
      dtype: string 
  - config_name: spatial_map_text_only
    features:
    - name: id
      dtype: int32
    - name: prompt
      dtype: string
    - name: ground_truth
      dtype: string
    - name: task
      dtype: string
    - name: question_type
      dtype: string
    - name: target_options
      dtype: string      
configs:
  - config_name: maze
    data_files:
    - split: val
      path: maze/maze_val.parquet
  - config_name: maze_text_only
    data_files:
    - split: val
      path: maze/maze_text_only_val.parquet
  - config_name: spatial_grid
    data_files:
    - split: val
      path: spatial_grid/spatial_grid_val.parquet
  - config_name: spatial_grid_text_only
    data_files:
    - split: val
      path: spatial_grid/spatial_grid_text_only_val.parquet
  - config_name: spatial_map
    data_files:
    - split: val
      path: spatial_map/spatial_map_val.parquet
  - config_name: spatial_map_text_only
    data_files:
    - split: val
      path: spatial_map/spatial_map_text_only_val.parquet       
---

A key question for understanding the multimodal vs. language capabilities of models is the relative strength of spatial reasoning and understanding in each modality, since spatial understanding is expected to be a strength of multimodal models. To test this, we created a procedurally generated, synthetic dataset that evaluates spatial reasoning, navigation, and counting. These tasks are challenging, and because the data is procedurally generated, new versions can easily be created to counter the effects of the data appearing in model training sets and results reflecting memorization. For each task, every question has both an image and a text representation, either of which is sufficient to answer the question.


This dataset has three tasks that test Spatial Understanding (Spatial-Map), Navigation (Maze), and Counting (Spatial-Grid). Each task has three conditions with respect to the input modality: 1) text-only, consisting of a text representation of the input and a question; 2) vision-only, the standard visual-question-answering setup consisting of an image and a question; and 3) vision-text, which includes both the text and image representations along with the question. Each condition includes 1,500 image and text pairs, for a total of 4,500 questions.
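
As a usage sketch, each config can be loaded with the `datasets` library. The repo ID below is a placeholder, not this dataset's confirmed Hub path; each config exposes a single `val` split with the fields listed in the metadata above.

```python
# Minimal loading sketch; "<org>/<dataset-name>" is a placeholder
# for this dataset's actual Hugging Face Hub path.
from datasets import load_dataset

maze = load_dataset("<org>/<dataset-name>", "maze", split="val")

sample = maze[0]
print(sample["prompt"])          # question text (multiple choice)
print(sample["target_options"])  # the answer options
print(sample["ground_truth"])    # the correct answer
```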

__Spatial Map__

The dataset consists of spatial relationships for random layouts of symbolic objects with text names on a white background.
Each object is associated with a unique location name, such as "Unicorn Umbrellas" and "Gale Gifts". To study the impact of modality,
the textual representation of each input consists of pairwise relations such as "Brews Brothers Pub
is to the Southeast of Whale's Watches". The questions ask about the spatial
relationship between two locations and about the number of objects that meet specific spatial criteria.

The dataset includes 3 conditions: text only, image only, and text+image. Each condition includes 1,500 image and text pairs, for a total of 4,500 questions.

There are 3 question types: 
	1) In which direction is one object relative to another (answer is a direction)
	2) Which object is in a given direction from another (answer is an object name)
	3) How many objects are in a given direction from another (answer is a number)

Each question is multiple choice. 
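
For example, the `question_type` field distinguishes these three families, so their distribution can be tallied directly (a minimal sketch, again assuming the placeholder repo ID from the loading example above):

```python
from collections import Counter
from datasets import load_dataset

spatial_map = load_dataset("<org>/<dataset-name>", "spatial_map", split="val")

# Column access returns a plain list, so the three question
# families described above can be tallied directly.
print(Counter(spatial_map["question_type"]))
```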

__Maze__

The dataset consists of small mazes with questions asked about each maze. Each sample can be
represented as colored blocks, where different colors signify distinct elements: a green block marks
the starting point (S), a red block indicates the exit (E), black blocks represent impassable walls,
white blocks denote navigable paths, and blue blocks trace the path from S to E. The objective is to
navigate from S to E following the blue path, with movement permitted in the four cardinal directions
(up, down, left, right). Alternatively, each input can be depicted in textual format using ASCII characters.
The questions include counting the number of turns from S to E and determining the spatial relationship
between S and E.
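
To make the textual form and the turn-counting question concrete, here is an illustrative sketch. The miniature maze and its character set (`S`, `E`, `#`, `*`) are assumptions for illustration; the dataset's actual ASCII encoding may differ.

```python
# Hypothetical miniature maze: 'S' start, 'E' exit, '#' walls,
# '*' the marked path (the dataset's actual encoding may differ).
maze = [
    "S*###",
    "#*###",
    "#***#",
    "###*E",
]

path_chars = {"S", "E", "*"}
cells = {(r, c) for r, row in enumerate(maze)
         for c, ch in enumerate(row) if ch in path_chars}
start = next(p for p in cells if maze[p[0]][p[1]] == "S")
end = next(p for p in cells if maze[p[0]][p[1]] == "E")

# Walk the (branch-free) path from S to E, counting direction changes.
pos, prev, direction, turns = start, None, None, 0
while pos != end:
    r, c = pos
    for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if nxt in cells and nxt != prev:
            new_dir = (nxt[0] - r, nxt[1] - c)
            if direction is not None and new_dir != direction:
                turns += 1
            prev, pos, direction = pos, nxt, new_dir
            break

print(turns)  # total turns on the path from S to E (here: 4)
```

The exit's direction relative to the start (the third question type below) follows from comparing the two coordinates.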

The dataset includes 3 conditions: text only, image only, and text+image. Each condition includes 1,500 image and text pairs, for a total of 4,500 questions.

There are 3 question types: 
	1) How many right turns are on the path from start to end (answer is a number)
	2) How many total turns are on the path from start to end (answer is a number)
	3) Where is the exit relative to the start (answer is a direction or yes/no)

Each question is multiple choice. 

__Spatial Grid__

Each input consists of a grid of cells, each containing an image (e.g., a rabbit). Alternatively, the grid
can be represented in a purely textual format; for instance, the first row might be described as:
elephant | cat | giraffe | elephant | cat. The evaluations focus on tasks such as counting specific objects (e.g., rabbits) and
identifying the object located at a specific coordinate in the grid (e.g., first row, second column).
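
As an illustration of how the textual format supports both question families, the sketch below assumes rows are newline-separated and cells are joined with " | " as in the example above; the dataset's exact delimiters may differ.

```python
# Hypothetical two-row grid in the textual format described above.
grid_text = (
    "elephant | cat | giraffe | elephant | cat\n"
    "rabbit | elephant | cat | rabbit | giraffe"
)

rows = [row.split(" | ") for row in grid_text.splitlines()]

# Counting question: how many cells contain a specific animal?
print(sum(cell == "rabbit" for row in rows for cell in row))  # 2

# Coordinate question: which animal is at first row, second column?
print(rows[0][1])  # "cat"
```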

The dataset includes 3 conditions: text only, image only, and text+image. Each condition includes 1,500 image and text pairs, for a total of 4,500 questions.
 
There are 3 question types: 
	1) How many blocks contain a specific animal (answer is a number)
	2) What animal is in one specific block, addressed by top-left, top, right, etc. (answer is an animal name)
	3) What animal is in one specific block, addressed by row and column (answer is an animal name)

Each question is multiple choice. 

---
More details here: https://arxiv.org/pdf/2406.14852