---
language:
- en
- ar
- es
- fr
- ru
- hi
- ms
- sw
- az
- ko
- pt
- hy
- th
- uk
- ur
- sr
- iw
- ja
- hr
- tl
- ky
- vi
- fa
- tg
- mg
- nl
- ne
- uz
- my
- da
- dz
- id
- is
- tr
- lo
- sl
- so
- mn
- bn
- bs
- ht
- el
- it
- to
- ka
- sn
- sq
- zh
pretty_name: BordIRlines
multilinguality:
- multilingual
annotations_creators:
- machine-generated
language_creators:
- found
source_datasets:
- manestay/borderlines
license: mit
task_categories:
- question-answering
arxiv: 2410.01171
---

# BordIRLines Dataset

This is the dataset associated with the paper "BordIRlines: A Dataset for Evaluating Cross-lingual Retrieval-Augmented Generation" ([link](https://arxiv.org/abs/2410.01171)).

## Dataset Summary

The **BordIRLines Dataset** is an information retrieval (IR) dataset constructed from Wikipedia corpora in many languages. It contains queries and, for each query, ranked documents together with their relevance scores. The dataset covers dozens of languages, including English, Arabic, and Spanish, and provides retrieval results from multiple IR systems (see the Systems section below).
Each `doc` is a passage from a Wikipedia article.

### Languages

The dataset includes docs and queries in the following __languages__:

* `en`: English
* `zht`: Traditional Chinese
* `ar`: Arabic
* `zhs`: Simplified Chinese
* `es`: Spanish
* `fr`: French
* `ru`: Russian
* `hi`: Hindi
* `ms`: Malay
* `sw`: Swahili
* `az`: Azerbaijani
* `ko`: Korean
* `pt`: Portuguese
* `hy`: Armenian
* `th`: Thai
* `uk`: Ukrainian
* `ur`: Urdu
* `sr`: Serbian
* `iw`: Hebrew
* `ja`: Japanese
* `hr`: Croatian
* `tl`: Tagalog
* `ky`: Kyrgyz
* `vi`: Vietnamese
* `fa`: Persian
* `tg`: Tajik
* `mg`: Malagasy
* `nl`: Dutch
* `ne`: Nepali
* `uz`: Uzbek
* `my`: Burmese
* `da`: Danish
* `dz`: Dzongkha
* `id`: Indonesian
* `is`: Icelandic
* `tr`: Turkish
* `lo`: Lao
* `sl`: Slovenian
* `so`: Somali
* `mn`: Mongolian
* `bn`: Bengali
* `bs`: Bosnian
* `ht`: Haitian Creole
* `el`: Greek
* `it`: Italian
* `to`: Tongan
* `ka`: Georgian
* `sn`: Shona
* `sq`: Albanian
* `control`: see below

The **control** language is English and contains the queries for all 251 territories. In contrast, **en** covers only the 38 territories that have an English-speaking claimant.

## Systems
We have processed retrieval results for these IR systems:
* `openai`: OpenAI's model `text-embedding-3-large`, cosine similarity
* `m3`: M3-embedding ([link](https://huggingface.co/BAAI/bge-m3)) (Chen et al., 2024)
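
For reference, the `openai` scores are cosine similarities between `text-embedding-3-large` embeddings of the query and each passage. The sketch below shows that computation in isolation; it is an illustration only, and the query and passage strings are placeholders rather than items from the dataset.

```python
# Illustrative sketch of the openai system's scoring: cosine similarity between
# query and passage embeddings from text-embedding-3-large.
# Assumes OPENAI_API_KEY is set; the texts below are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-large", input=texts)
    return np.array([d.embedding for d in resp.data])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec, doc_vec = embed([
    "Who does the disputed territory X belong to?",    # placeholder query
    "A passage from a Wikipedia article about X ...",  # placeholder passage
])
print(cosine(query_vec, doc_vec))  # analogous to the `score` field
```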

## Modes
For a user query in language `l` about a territory `t`, there are four IR modes:
* `qlang`: consider passages in `{l}`. This is monolingual IR (the default).
* `qlang_en`: consider passages in either `{l, en}`.
* `en`: consider passages in `{en}`.
* `rel_langs`: consider passages in all languages relevant to `t`, plus `en`, i.e. `{l1, l2, ..., en}`. This is a set, so `en` is not duplicated if it is already among the relevant languages.
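
A minimal sketch of how the candidate passage languages can be derived for each mode (illustration only; the `relevant_langs` mapping here is a hypothetical stand-in for the claimant languages of each territory):

```python
# Illustration of the four retrieval modes as candidate language sets.
# `relevant_langs` is a hypothetical {territory: [languages of its claimants]} mapping.
def candidate_langs(mode, query_lang, territory, relevant_langs):
    if mode == "qlang":       # monolingual IR (default)
        return {query_lang}
    if mode == "qlang_en":    # query language plus English
        return {query_lang, "en"}
    if mode == "en":          # English only
        return {"en"}
    if mode == "rel_langs":   # all claimant languages plus English (a set, so no duplicates)
        return set(relevant_langs[territory]) | {"en"}
    raise ValueError(f"unknown mode: {mode}")

# e.g. a Russian-language query about a territory with Ukrainian and Russian claimants
print(candidate_langs("rel_langs", "ru", "some territory", {"some territory": ["uk", "ru"]}))
# -> {'uk', 'ru', 'en'}
```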

## Dataset Structure

### Data Fields

The dataset consists of the following fields:
* `query_id (string)`: The id of the query.
* `query (string)`: The query text as provided by the `queries.tsv` file.
* `territory (string)`: The territory that the query concerns.
* `rank (int32)`: The rank of the document for the corresponding query.
* `score (float32)`: The relevance score of the document as provided by the search engine or model.
* `doc_id (string)`: The unique identifier of the article.
* `doc_text (string)`: The full text of the corresponding article or document.
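
For instance, once a split is loaded (split names follow the usage examples below), each row exposes these fields directly:

```python
from datasets import load_dataset

# Load one split and inspect the fields of a single row
ds = load_dataset("borderlines/bordirlines", "en", split="openai.qlang")
row = ds[0]
print(row["query_id"], row["territory"], row["query"])
print(row["rank"], row["score"], row["doc_id"])
print(row["doc_text"][:200])  # first 200 characters of the passage
```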

### Download Structure

The dataset is structured as follows:

```
data/
  {lang}/
    {system}/
      {mode}/
        {lang}_query_hits.tsv
...
  all_docs.json
  queries.tsv
```

* `queries.tsv`: Contains the list of query IDs and their associated query texts.
* `all_docs.json`: A nested JSON dict containing all docs: the outer keys are languages (`lang`), and each maps to a dict from `doc_id` to `doc_text`.
* `{lang}_query_hits.tsv`: A TSV file with relevance scores and hit ranks for the queries in that language.

Currently, there are 50 langs * 1 system * 4 modes = 200 query hit TSV files.
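
The raw files can also be read directly, without the `datasets` loader. A hedged sketch (paths follow the layout above relative to a local copy of this repository, and the TSV column names are assumed to match the data fields listed earlier):

```python
# Sketch: join a query-hits TSV with the passage texts in all_docs.json.
# Paths and column names are assumptions based on the layout and fields above.
import json
import pandas as pd

hits = pd.read_csv("data/en/openai/qlang/en_query_hits.tsv", sep="\t")
queries = pd.read_csv("data/queries.tsv", sep="\t")

with open("data/all_docs.json", encoding="utf-8") as f:
    all_docs = json.load(f)  # {lang: {doc_id: doc_text}}

# Attach the passage text to each hit
hits["doc_text"] = [all_docs["en"][doc_id] for doc_id in hits["doc_id"]]
print(hits.head())
```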

## Example Usage

```python
from datasets import load_dataset

# load DatasetDict with all 4 modes, for control language, 10 hits
dsd_control = load_dataset("borderlines/bordirlines", "control")

# load Dataset for English, with rel_langs mode, 50 hits
ds_oa_en = load_dataset("borderlines/bordirlines", "en", split="openai.rel_langs", n_hits=50)
# load Dataset for Simplified Chinese, en mode
ds_oa_zhs1 = load_dataset("borderlines/bordirlines", "zhs", split="openai.en")
# load Dataset for Simplified Chinese, qlang mode
ds_oa_zhs2 = load_dataset("borderlines/bordirlines", "zhs", split="openai.qlang")


# load Dataset for Simplified Chinese, en mode, m3 embedding
ds_m3_zhs1 = load_dataset("borderlines/bordirlines", "zhs", split="m3.en")
# load Dataset for Simplified Chinese, qlang mode, m3 embedding
ds_m3_zhs2 = load_dataset("borderlines/bordirlines", "zhs", split="m3.qlang")
```

## Citation
```
@misc{li2024bordirlines,
      title={BordIRlines: A Dataset for Evaluating Cross-lingual Retrieval-Augmented Generation},
      author={Bryan Li and Samar Haider and Fiona Luo and Adwait Agashe and Chris Callison-Burch},
      year={2024},
      eprint={2410.01171},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.01171},
}
```