---
license: mit
language:
- en
language_creators:
- crowdsourced
multilinguality:
- monolingual
pretty_name: NevIR
size_categories:
- 1K<n<10K
tags:
- negation
- information_retrieval
- IR
---
# Dataset Card for NevIR: Negation in Neural Information Retrieval
## Dataset Description
- **Repository:** [https://github.com/orionw/NevIR](https://github.com/orionw/NevIR)
- **Paper:** [https://arxiv.org/abs/2305.07614](https://arxiv.org/abs/2305.07614)
- **Point of Contact:** oweller@cs.jhu.edu
## Dataset Summary
Data from the paper: ["NevIR: Negation in Neural Information Retrieval"](https://arxiv.org/abs/2305.07614).
If you use this dataset, we would appreciate you citing our work:
```
@inproceedings{weller-et-al-2023-nevir,
  title={NevIR: Negation in Neural Information Retrieval},
  author={Weller, Orion and Lawrie, Dawn and Van Durme, Benjamin},
  year={2023},
  eprint={2305.07614},
  archivePrefix={arXiv}
}
```
Please also consider citing the work that created the initial documents:
```
@inproceedings{ravichander-et-al-2022-condaqa,
  title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
  author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
  booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
  year={2022}
}
```
From the paper: "Negation is a common everyday phenomena and has been a consistent area of weakness for language models (LMs). Although the Information Retrieval (IR) community has adopted LMs as the backbone of modern IR architectures, there has been little to no research in understanding how negation impacts neural IR. We therefore construct a straightforward benchmark on this theme: asking IR models to rank two documents that differ only by negation. We show that the results vary widely according to the type of IR architecture: cross-encoders perform best, followed by late-interaction models, and in last place are bi-encoder and sparse neural architectures. We find that most current information retrieval models do not consider negation, performing similarly or worse than randomly ranking. We show that although the obvious approach of continued fine-tuning on a dataset of contrastive documents containing negations increases performance (as does model size), there is still a large gap between machine and human performance."
### Supported Tasks and Leaderboards
The task is, for each pair, to rank the two documents for both queries, where each query is relevant to exactly one of the two documents. There is no official leaderboard.
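For concreteness, below is a minimal sketch of pairwise evaluation, under the assumption that a pair counts as correct only when both of its queries rank their relevant document first; `score(query, doc)` is a hypothetical stand-in for any retrieval model's relevance scorer and is not part of this dataset.
```
# Minimal pairwise-evaluation sketch (assumption: a pair is correct only when
# both of its queries rank their relevant document above the other document).
# `score(query, doc)` is a hypothetical relevance scorer from any retrieval model.

def pair_is_correct(example, score):
    q1_right = score(example["q1"], example["doc1"]) > score(example["q1"], example["doc2"])
    q2_right = score(example["q2"], example["doc2"]) > score(example["q2"], example["doc1"])
    return q1_right and q2_right

def pairwise_accuracy(examples, score):
    examples = list(examples)
    return sum(pair_is_correct(ex, score) for ex in examples) / len(examples)
```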
### Language
English
## Dataset Structure
### Data Instances
Here's an example instance:
```
{
"id": "1-2",
"WorkerId": 0,
"q1": "Which mayor did more vetoing than anticipated?",
"q2": "Which mayor did less vetoing than anticipated?",
"doc1": "In his first year as mayor, Medill received very little legislative resistance from the Chicago City Council. While he vetoed what was an unprecedented eleven City Council ordinances that year, most narrowly were involved with specific financial practices considered wasteful and none of the vetoes were overridden. He used his new powers to appoint the members of the newly constituted Chicago Board of Education and the commissioners of its constituted public library. His appointments were approved unanimously by the City Council.",
"doc2": "In his first year as mayor, Medill received very little legislative resistance from the Chicago City Council. While some expected an unprecedented number of vetoes, in actuality he only vetoed eleven City Council ordinances that year, and most of those were narrowly involved with specific financial practices he considered wasteful and none of the vetoes were overridden. He used his new powers to appoint the members of the newly constituted Chicago Board of Education and the commissioners of its constituted public library. His appointments were approved unanimously by the City Council."
}
```
### Data Fields
* `id`: unique ID for the pair; the first number is the document pair number in CondaQA and the second is the PassageEditID in CondaQA.
* `WorkerId`: ID of the Worker who created the queries for the pair.
* `q1`: the query that is relevant only to `doc1`.
* `q2`: the query that is relevant only to `doc2`.
* `doc1`: the original document, from CondaQA.
* `doc2`: the edited document, from CondaQA.
### Data Splits
Data splits can be accessed as:
```
from datasets import load_dataset
train_set = load_dataset("orionweller/nevir", split="train")
dev_set = load_dataset("orionweller/nevir", split="validation")
test_set = load_dataset("orionweller/nevir", split="test")
```
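Each split loads as a standard `datasets.Dataset`, so the fields described above can be accessed directly. A brief illustration (the chosen split and printed slices are only for demonstration):
```
from datasets import load_dataset

# Load one split and inspect a contrastive pair.
test_set = load_dataset("orionweller/nevir", split="test")
print(len(test_set))          # number of query/document pairs in the split
print(test_set.column_names)  # id, WorkerId, q1, q2, doc1, doc2

example = test_set[0]
print(example["q1"], "->", example["doc1"][:80])
print(example["q2"], "->", example["doc2"][:80])
```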
## Dataset Creation
Full details are in the paper: https://arxiv.org/abs/2305.07614