## Overview
The original dataset can be found [here](https://github.com/swarnaHub/ConjNLI). It was
proposed in [ConjNLI: Natural Language Inference Over Conjunctive Sentences](https://aclanthology.org/2020.emnlp-main.661/).

This dataset is a stress test for natural language inference over conjunctive sentences,
where the premise and the hypothesis differ by a conjunct that has been removed, added, or replaced.
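A hypothetical pair (not taken from the dataset) illustrates the idea: removing a conjunct typically preserves truth, while replacing one does not.

```python
# Illustrative example of conjunct manipulation (not an actual dataset instance).
premise = "John bought apples and oranges."

# Removing a conjunct usually yields an entailed hypothesis...
hypothesis_removed = premise.replace(" and oranges", "")

# ...while replacing a conjunct typically yields neutral or contradiction.
hypothesis_replaced = premise.replace("oranges", "pears")

print(hypothesis_removed)   # John bought apples.
print(hypothesis_replaced)  # John bought apples and pears.
```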


## Dataset curation
No curation is performed; the dataset is provided as-is. The label mapping is the usual `{"entailment": 0, "neutral": 1, "contradiction": 2}`
used in NLI datasets. Note that labels for the `test` split are not available.
Also, the `train` split is originally named `adversarial_train_15k`.
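As a quick sketch, the mapping (and its inverse, useful for decoding model predictions) is just a pair of dictionaries:

```python
# The usual NLI label mapping, as used in the creation script below.
label2id = {"entailment": 0, "neutral": 1, "contradiction": 2}
id2label = {v: k for k, v in label2id.items()}

print(label2id["contradiction"])  # 2
print(id2label[0])                # entailment

# Unlabeled test-split rows are assigned the sentinel -1 instead.
```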

Note that there are 2 instances (joining on `premise`, `hypothesis`, and `label`) present in both `train` and `dev`.


## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict

# download data from repo https://github.com/swarnaHub/ConjNLI
paths = {
    "train": "<path_to_folder>/ConjNLI-master/data/NLI/adversarial_train_15k.tsv",
    "dev": "<path_to_folder>/ConjNLI-master/data/NLI/conj_dev.tsv",
    "test": "<path_to_folder>/ConjNLI-master/data/NLI/conj_test.tsv",
}

dataset_splits = {}
for split, path in paths.items():

    # load data
    df = pd.read_csv(path, sep="\t")

    # encode labels using the default mapping used by other NLI datasets,
    # i.e., entailment: 0, neutral: 1, contradiction: 2;
    # the test split has no gold labels, so use the sentinel -1
    df.columns = df.columns.str.lower()
    if "test" not in path:
        df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
    else:
        df["label"] = -1
    # cast to dataset
    features = Features({
        "premise": Value(dtype="string", id=None),
        "hypothesis": Value(dtype="string", id=None),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
    })
    dataset = Dataset.from_pandas(df, features=features)
    dataset_splits[split] = dataset

conj_nli = DatasetDict(dataset_splits)
conj_nli.push_to_hub("pietrolesci/conj_nli", token="<token>")


# check overlap between splits
from itertools import combinations
for i, j in combinations(conj_nli.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            conj_nli[i].to_pandas(), 
            conj_nli[j].to_pandas(), 
            on=["premise", "hypothesis", "label"], how="inner"
        ).shape[0],
    )
#> train - dev:  2
#> train - test:  0
#> dev - test:  0
```