Datasets: johngiorgi committed
Commit f8e89b4 · 1 Parent(s): 02af0df
Update README.md

README.md CHANGED
@@ -4,309 +4,45 @@ task_categories:
- text-classification
language:
- en
---

# 🧩 Only Connect Wall (OCW) Dataset

The Only Connect Wall (OCW) dataset contains 618 _"Connecting Walls"_ from the [Round 3: Connecting Wall](https://en.wikipedia.org/wiki/Only_Connect#Round_3:_Connecting_Wall) segment of the [Only Connect quiz show](https://en.wikipedia.org/wiki/Only_Connect), collected from 15 seasons' worth of episodes. Each wall contains the ground-truth __groups__ and __connections__ as well as recorded human performance. Please see [our paper](https://arxiv.org/abs/2306.11167) for more details about the dataset and its motivations.

## 📖 Table of Contents

- [🧩 Only Connect Wall (OCW) Dataset](#-only-connect-wall-ocw-dataset)
  - [📖 Table of Contents](#-table-of-contents)
  - [🚀 Usage](#-usage)
    - [Downloading the dataset](#downloading-the-dataset)
    - [Dataset structure](#dataset-structure)
    - [Loading the dataset](#loading-the-dataset)
    - [Evaluating](#evaluating)
    - [Downloading easy datasets for ablation studies](#downloading-easy-datasets-for-ablation-studies)
    - [Running the baselines](#running-the-baselines)
      - [Word Embeddings and Pre-trained Language Models](#word-embeddings-and-pre-trained-language-models)
      - [Large Language Models](#large-language-models)
  - [✍️ Contributing](#️-contributing)
  - [📚 Citing](#-citing)
  - [🙏 Acknowledgements](#-acknowledgements)

## 🚀 Usage

### Downloading the dataset

The dataset can be downloaded from [here](https://www.cs.toronto.edu/~taati/OCW/OCW.tar.gz) or with a bash script:

```bash
bash download_OCW.sh
```

### Dataset structure

The dataset is provided as JSON files, one for each partition: `train.json`, `validation.json` and `test.json`. We also provide an `OCW.json` file that contains all examples across all splits. The splits are sized as follows:

| Split | # Walls |
|:-------|:---------:|
| `train` | 62 |
| `validation` | 62 |
| `test` | 494 |

Here is an example of the dataset's structure:

```json
{
    "season_to_walls_map": {
        "1": {
            "num_walls": 30,
            "start_date": "15/09/2008",
            "end_date": "22/12/2008"
        }
    },
    "dataset": [
        {
            "wall_id": "882c",
            "season": 1,
            "episode": 5,
            "words": [
                "Puzzle", "Manhattan", "B", "Wrench",
                "Smith", "Nuts", "Brooks", "Blanc",
                "Suit", "Screwdriver", "Sidecar", "Margarita",
                "Hammer", "Business", "Gimlet", "Gibson"
            ],
            "gt_connections": [
                "Famous Mels",
                "Household tools",
                "Cocktails",
                "Monkey ___"
            ],
            "groups": {
                "group_1": {
                    "group_id": "882c_01",
                    "gt_words": ["Blanc", "Brooks", "B", "Smith"],
                    "gt_connection": "Famous Mels",
                    "human_performance": {"grouping": 1, "connection": 1}
                },
                "group_2": {
                    "group_id": "882c_02",
                    "gt_words": ["Screwdriver", "Hammer", "Gimlet", "Wrench"],
                    "gt_connection": "Household tools",
                    "human_performance": {"grouping": 1, "connection": 1}
                },
                "group_3": {
                    "group_id": "882c_03",
                    "gt_words": ["Sidecar", "Manhattan", "Gibson", "Margarita"],
                    "gt_connection": "Cocktails",
                    "human_performance": {"grouping": 1, "connection": 1}
                },
                "group_4": {
                    "group_id": "882c_04",
                    "gt_words": ["Puzzle", "Business", "Nuts", "Suit"],
                    "gt_connection": "Monkey ___",
                    "human_performance": {"grouping": 1, "connection": 1}
                }
            },
            "overall_human_performance": {
                "grouping": [1, 1, 1, 1],
                "connections": [1, 1, 1, 1]
            }
        }
    ]
}
```

where

- `"season_to_walls_map"` contains the `"num_walls"` in each season, as well as the `"start_date"` and `"end_date"` between which the season ran
- `"dataset"` is a list of dictionaries, where each dictionary contains all accompanying information about a wall:
  - `"wall_id"`: a unique string identifier for the wall
  - `"season"`: an integer representing the season the wall was collected from
  - `"episode"`: an integer representing the episode the wall was collected from
  - `"words"`: a list of strings representing the words in the wall in random order
  - `"gt_connections"`: a list of strings representing the ground-truth connections of each group
  - `"groups"`: a dictionary of dictionaries containing the four groups in the wall, each of which has the following items:
    - `"group_id"`: a unique string identifier for the group
    - `"gt_words"`: a list of strings representing the ground-truth words in the group
    - `"gt_connection"`: a string representing the ground-truth connection of the group
    - `"human_performance"`: a dictionary containing recorded human performance for the grouping and connections tasks
  - `"overall_human_performance"`: a dictionary containing recorded human performance for the grouping and connections tasks for each group in the wall

A minimal way to sanity-check this structure is sketched below.
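For example, a small sketch added here for illustration (it assumes the archive has been unpacked to `./dataset/`):

```python
import json

# Load one partition and inspect the first wall (paths assume the default layout).
walls = json.load(open("./dataset/train.json", "r"))["dataset"]
wall = walls[0]

# Every wall has 16 clue words arranged into 4 groups of 4.
assert len(wall["words"]) == 16
assert len(wall["groups"]) == 4
for name, group in wall["groups"].items():
    assert len(group["gt_words"]) == 4
    print(name, "->", group["gt_connection"], group["gt_words"])
```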

### Loading the dataset

The three partitions can be loaded the same way as any other JSON file. For example, using Python:

```python
import json

dataset = {
    "train": json.load(open("./dataset/train.json", "r"))["dataset"],
    "validation": json.load(open("./dataset/validation.json", "r"))["dataset"],
    "test": json.load(open("./dataset/test.json", "r"))["dataset"],
}
```

However, it is likely easiest to work with the dataset using the HuggingFace Datasets library:

```python
# pip install datasets
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files={
        "train": "./dataset/train.json",
        "validation": "./dataset/validation.json",
        "test": "./dataset/test.json",
    },
    field="dataset",
)
```
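Each split then behaves like any other HuggingFace dataset; for instance (an illustrative snippet, not from the original README):

```python
# Inspect the first training wall.
print(dataset["train"][0]["wall_id"])
print(dataset["train"][0]["gt_connections"])
```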

### Evaluating

We provide a script for evaluating the performance of a model on the dataset. Before running it, make sure you have installed the requirements and the package:

```bash
pip install -r requirements.txt
pip install -e .
```

Predictions should be saved to a JSON file as a list of dictionaries, one per wall, e.g.:

```json
[{
    "wall_id": "882c",
    "predicted_groups": [
        ["Puzzle", "Manhattan", "B", "Wrench"],
        ["Smith", "Nuts", "Brooks", "Blanc"],
        ["Suit", "Screwdriver", "Sidecar", "Margarita"],
        ["Hammer", "Business", "Gimlet", "Gibson"]
    ],
    "predicted_connections": ["Famous Mels", "Household tools", "Cocktails", "Monkey ___"]
}]
```
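For illustration only (this is not part of the repository), such a file could be produced by a naive baseline that treats each consecutive run of four shuffled words as a group; the `"unknown"` connection placeholders are hypothetical:

```python
import json
import os

# Naive baseline: group the shuffled words four at a time, in order.
walls = json.load(open("./dataset/test.json", "r"))["dataset"]
predictions = []
for wall in walls:
    words = wall["words"]
    predictions.append({
        "wall_id": wall["wall_id"],
        "predicted_groups": [words[i:i + 4] for i in range(0, 16, 4)],
        # Placeholder connections, just to satisfy the file format.
        "predicted_connections": ["unknown"] * 4,
    })

os.makedirs("./predictions", exist_ok=True)
with open("./predictions/task1.json", "w") as f:
    json.dump(predictions, f, indent=2)
```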

Then, to evaluate the predictions:

```bash
python src/ocw/evaluate_only_connect.py \
    --prediction-file "./predictions/task1.json" \
    --dataset-path "./dataset/" \
    --results-path "./results/" \
    --task "task1-grouping"
```

### Downloading easy datasets for ablation studies

We also produced two "easy" versions of the dataset, designed to remove or dramatically reduce the number of red herrings, for ablation:

- A copy of the dataset where each wall in the test set is replaced with a _random_ selection of groups. No group appears twice, and no wall contains two copies of the same clue. The train and validation sets are unmodified. This dataset can be downloaded from [here](https://www.cs.toronto.edu/~taati/OCW/OCW_randomized.tar.gz) or with a bash script:

  ```bash
  bash download_OCW_randomized.sh
  ```

- A copy of the dataset generated from WordNet by selecting equivalent synonyms for each clue in a group. This dataset can be downloaded from [here](https://www.cs.toronto.edu/~taati/OCW/OCW_wordnet.tar.gz) or with a bash script:

  ```bash
  bash download_OCW_wordnet.sh
  ```

### Running the baselines

#### Word Embeddings and Pre-trained Language Models

To run the word embedding and PLM baselines:

```bash
python scripts/prediction.py \
    --model-name "intfloat/e5-base-v2" \
    --dataset-path "./dataset/" \
    --predictions-path "./predictions/" \
    --task "task1-grouping"
```

The `--model-name` should be a model on the HuggingFace model hub or one of `['elmo', 'glove', 'crawl', 'news']`. To run contextualized embeddings in PLMs, use the `--contextual` flag.

To plot the results:

```bash
python scripts/plot.py \
    --wall-id "8cde" \
    --model-name "intfloat/e5-base-v2" \
    --shuffle-seed 9
```

#### Large Language Models

To run the few-shot in-context LLM baseline, see the [`run_openai.ipynb`](./notebooks/run_openai.ipynb) notebook. Note: this will require an OpenAI API key.

## ✍️ Contributing

We welcome contributions to this repository (noticed a typo? a bug?). To propose a change:

```bash
git clone https://github.com/salavina/OCW
cd OCW
git checkout -b my-branch
pip install -r requirements.txt
pip install -e .
```

Once your changes are made, make sure to lint and format the code (addressing any warnings or errors):

```bash
black .
flake8 .
```

## 📚 Citing

- text-classification
language:
- en
tags:
- creative problem solving
- puzzles
- fixation effect
- large language models
pretty_name: Only Connect Wall Dataset
size_categories:
- n<1K
---

# Only Connect Wall (OCW) Dataset

The Only Connect Wall (OCW) dataset contains 618 _"Connecting Walls"_ from the [Round 3: Connecting Wall](https://en.wikipedia.org/wiki/Only_Connect#Round_3:_Connecting_Wall) segment of the [Only Connect quiz show](https://en.wikipedia.org/wiki/Only_Connect), collected from 15 seasons' worth of episodes. Each wall contains the ground-truth __groups__ and __connections__ as well as recorded human performance. Please see [our paper](https://arxiv.org/abs/2306.11167) and [GitHub repo](https://github.com/TaatiTeam/OCW) for more details about the dataset and its motivations.

## Usage

```python
# pip install datasets
from datasets import load_dataset

dataset = load_dataset("TaatiTeam/OCW")

# The dataset can be used like any other HuggingFace dataset
# E.g. get the wall_id of the first example in the train set
dataset["train"]["wall_id"][0]
# or get the words of the first 10 examples in the test set
dataset["test"]["words"][0:10]
```

We also provide two versions of the dataset in which the red herrings in each wall have been significantly reduced (`ocw_randomized`) or removed altogether (`ocw_wordnet`), which can be loaded like:

```python
# pip install datasets
from datasets import load_dataset

ocw_randomized = load_dataset("TaatiTeam/OCW", "ocw_randomized")
ocw_wordnet = load_dataset("TaatiTeam/OCW", "ocw_wordnet")
```
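To get a feel for how the variants differ, one rough check is to compare a wall's clues across configurations (a sketch added here for illustration; it assumes the test splits of the configurations are aligned in the same order, which is worth verifying):

```python
# Compare the first test wall's clues in the default and WordNet configurations.
default = load_dataset("TaatiTeam/OCW")

print(default["test"]["words"][0])
print(ocw_wordnet["test"]["words"][0])
```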

See [our paper](https://arxiv.org/abs/2306.11167) for more details.

## 📚 Citing