---
dataset_info:
  features:
  - name: Claim
    dtype: string
  - name: Context
    dtype: string
  - name: Source
    dtype: string
  - name: Source Indices
    dtype: string
  - name: Relation
    dtype: string
  - name: Relation Indices
    dtype: string
  - name: Target
    dtype: string
  - name: Target Indices
    dtype: string
  - name: Inconsistent Claim Component
    dtype: string
  - name: Inconsistent Context-Span
    dtype: string
  - name: Inconsistent Context-Span Indices
    dtype: string
  - name: Inconsistency Type
    dtype: string
  - name: Fine-grained Inconsistent Entity-Type
    dtype: string
  - name: Coarse Inconsistent Entity-Type
    dtype: string
  splits:
  - name: train
    num_bytes: 2657091
    num_examples: 6443
  - name: validation
    num_bytes: 333142
    num_examples: 806
  - name: test
    num_bytes: 332484
    num_examples: 806
  download_size: 1784422
  dataset_size: 3322717
task_categories:
- token-classification
- text-classification
- text-generation
language:
- en
pretty_name: FICLE
size_categories:
- 1K<n<10K
license: gpl-3.0
tags:
- span
- explanation
---
# FICLE Dataset

The dataset can be loaded with the Hugging Face `datasets` library as follows:

```python
from datasets import load_dataset
ficle_data = load_dataset("tathagataraha/ficle")
```

# Dataset card for FICLE

## Dataset Description

* **GitHub Repo:** https://github.com/blitzprecision/FICLE
* **Paper:** [Neural models for Factual Inconsistency Classification with Explanations](https://arxiv.org/abs/2306.08872)
* **Point of Contact:** 

### Dataset Summary

The FICLE dataset is derived from the FEVER dataset, a collection of 185,445 claims generated by modifying sentences obtained from Wikipedia and then verified without knowledge of the sentences they were derived from. Each FEVER sample consists of a claim sentence, a context (evidence) sentence extracted from a Wikipedia URL, and a label indicating whether the claim is supported, refuted, or lacks sufficient information. FICLE keeps only the refuted claims and enriches them with linguistic annotations that locate the inconsistency between the claim and its context and classify it, so that models can both detect inconsistencies and explain them.

### Languages

The FICLE Dataset contains only English.

## Dataset Structure

### Data Fields

* `Claim (string)`: The claim sentence whose consistency with the context is being assessed.
* `Context (string)`: The surrounding information or background against which the claim is evaluated. It provides the evidence that can support or challenge the claim.
* `Source (string)`: The linguistic chunk containing the entity to the left of the main verb/relation chunk.
* `Source Indices (string)`: The indices (positions) that locate the source chunk.
* `Relation (string)`: The linguistic chunk containing the verb/relation at the core of the identified inconsistency.
* `Relation Indices (string)`: The indices (positions) that locate the relation chunk.
* `Target (string)`: The linguistic chunk containing the entity to the right of the main verb/relation chunk.
* `Target Indices (string)`: The indices (positions) that locate the target chunk.
* `Inconsistent Claim Component (string)`: The inconsistent claim component refers to a specific linguistic chunk within the claim that is identified as inconsistent with the context. It helps identify which part of the claim triple is problematic in terms of its alignment with the surrounding information.
* `Inconsistent Context-Span (string)`: A span or portion marked within the context sentence that is found to be inconsistent with the claim. It highlights a discrepancy or contradiction between the information in the claim and the corresponding context.
* `Inconsistent Context-Span Indices (string)`: The specific indices or location within the context sentence that indicate the inconsistent span.
* `Inconsistency Type (string)`:  The category or type of inconsistency identified in the claim and context.
* `Fine-grained Inconsistent Entity-Type (string)`: The specific detailed category or type of entity causing the inconsistency within the claim or context. It provides a more granular classification of the entity associated with the inconsistency.
* `Coarse Inconsistent Entity-Type (string)`: The broader or general category or type of entity causing the inconsistency within the claim or context. It provides a higher-level classification of the entity associated with the inconsistency.
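
For orientation, here is a minimal sketch (using the same loading call as above) that prints a subset of these fields for one training example. The field names are taken verbatim from the dataset features listed above; the printed contents will depend on the sample.

```python
from datasets import load_dataset

ficle_data = load_dataset("tathagataraha/ficle")
sample = ficle_data["train"][0]

# All fields, including the *Indices ones, are stored as plain strings.
for field in ["Claim", "Context", "Source", "Relation", "Target",
              "Inconsistent Claim Component", "Inconsistent Context-Span",
              "Inconsistency Type", "Coarse Inconsistent Entity-Type"]:
    print(f"{field}: {sample[field]}")
```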
  

### Data Splits
The FICLE dataset comprises a total of 8,055 samples in the English language, each representing different instances of inconsistencies. 
These inconsistencies are categorized into five types: Taxonomic Relations (4,842 samples), Negation (1,630 samples), Set Based (642 samples), Gradable (526 samples), and Simple (415 samples).

Within the dataset, there are six possible components that contribute to the inconsistencies found in the claim sentences. 
These components are distributed as follows: Target-Head (3,960 samples), Target-Modifier (1,529 samples), Relation-Head (951 samples), Relation-Modifier (1,534 samples), Source-Head (45 samples), and Source-Modifier (36 samples).

The dataset is split into `train`, `validation`, and `test`:
* `train`: 6,443 rows
* `validation`: 806 rows
* `test`: 806 rows
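
The split sizes and the type distribution above can be checked locally with a short sketch like the following (the exact label strings stored in the `Inconsistency Type` column may differ slightly from the names used in the prose above):

```python
from collections import Counter
from datasets import load_dataset

ficle_data = load_dataset("tathagataraha/ficle")

# Expected sizes: 6,443 / 806 / 806.
for split in ("train", "validation", "test"):
    print(split, len(ficle_data[split]))

# Distribution of inconsistency types in the training split.
print(Counter(ficle_data["train"]["Inconsistency Type"]))
```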

## Dataset Creation

### Curation Rationale

We propose a linguistically enriched dataset to help detect inconsistencies and explain them. 
To this end, the broad requirements are to locate where the inconsistency is present between a claim and a context and to have a classification scheme for better explainability.

### Data Collection and Preprocessing

The FICLE dataset is derived from the FEVER dataset using the following processing steps. FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentences they were derived from. Every sample in the FEVER dataset contains a claim sentence, an evidence (or context) sentence from a Wikipedia URL, and a type label (‘supports’, ‘refutes’, or ‘not enough info’). Out of these, we leverage only the samples with the ‘refutes’ label to build our dataset.
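
For illustration only, a hedged sketch of that filtering step, assuming FEVER is loaded from the Hugging Face Hub (dataset id `fever`, config `v1.0`) with a string `label` column; this is not the authors' exact preprocessing code:

```python
from datasets import load_dataset

# Illustrative sketch of the FEVER filtering step, not the authors' actual pipeline.
# Assumes the Hugging Face "fever" dataset (config "v1.0"), whose "label" column
# holds "SUPPORTS", "REFUTES", or "NOT ENOUGH INFO"; adjust if your FEVER copy differs.
fever_train = load_dataset("fever", "v1.0", split="train")
refuted = fever_train.filter(lambda ex: ex["label"] == "REFUTES")
print(f"{len(refuted)} refuted claims out of {len(fever_train)}")
```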


### Annotations

You can see the annotation guidelines [here](https://github.com/blitzprecision/FICLE/blob/main/ficle_annotation_guidelines.pdf).

To provide detailed explanations for inconsistencies, each sample in the FICLE dataset was extensively annotated. The annotation process involved two iterations, each focusing on different aspects of the dataset.
In the first iteration, the annotations were primarily syntactic: identifying the inconsistent claim fact triple (source, relation, target), marking the inconsistent context span, and categorizing the inconsistent claim component into one of the six possible values.
The second iteration concentrated on semantic aspects: annotators labeled the type of inconsistency as well as the coarse and fine-grained inconsistent entity types for each sample.
This stage aimed to capture the semantic nuances and provide a deeper understanding of the inconsistencies present in the dataset.

The annotation process was carried out by a group of four annotators, two of whom are also authors of the dataset. The annotators possess a strong command of the English language and hold Bachelor's degrees in Computer Science, specializing in computational linguistics. 
Their expertise in the field ensured accurate and reliable annotations. The annotators' ages range from 20 to 22 years, indicating their familiarity with contemporary language usage and computational linguistic concepts.


### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset
 
### Discussion of Biases

### Other Known Limitations

## Additional Information

### Citation Information
```
@misc{raha2023neural,
      title={Neural models for Factual Inconsistency Classification with Explanations}, 
      author={Tathagata Raha and Mukund Choudhary and Abhinav Menon and Harshit Gupta and KV Aditya Srivatsa and Manish Gupta and Vasudeva Varma},
      year={2023},
      eprint={2306.08872},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Contact