Datasets: skt /

Modalities: Text
Formats: json
Languages: Korean
ArXiv: 2204.04541
Libraries: Datasets, pandas
License: cc-by-sa-4.0

parquet-converter committed
Commit 5fbd409
1 parent: 46d3e24

Update parquet files
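This commit comes from the Hub's automatic parquet conversion: the loading script and metadata below are deleted, and each KoBEST config (boolq, copa, wic, hellaswag, sentineg) is now stored directly as Parquet files, one directory per config. The sketch below shows one way to load the converted data; it assumes the repository ID is `skt/kobest_v1` (the name is truncated in the header above) and that the `datasets` and `pandas` libraries are installed.

```python
# Minimal sketch, assuming the Hub ID "skt/kobest_v1" and network access.
from datasets import load_dataset

# Each card config maps to one directory of Parquet files in this commit.
boolq = load_dataset("skt/kobest_v1", "boolq")
print(boolq)  # DatasetDict with train / validation / test splits

# The Arrow-backed splits convert cleanly to pandas for quick inspection.
df = boolq["train"].to_pandas()
print(df[["question", "label"]].head())
```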
.gitattributes DELETED
@@ -1,38 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,246 +0,0 @@
- ---
- pretty_name: KoBEST
- annotations_creators:
- - expert-generated
- language_creators:
- - expert-generated
- language:
- - ko
- license:
- - cc-by-sa-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- ---
-
- # Dataset Card for KoBEST
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Repository:** https://github.com/SKT-LSL/KoBEST_datarepo
- - **Paper:** https://arxiv.org/abs/2204.04541
- - **Point of Contact:** https://github.com/SKT-LSL/KoBEST_datarepo/issues
-
- ### Dataset Summary
-
- KoBEST is a Korean benchmark suite consisting of 5 natural language understanding tasks that require advanced knowledge of Korean.
-
- ### Supported Tasks and Leaderboards
-
- Boolean Question Answering, Choice of Plausible Alternatives, Words-in-Context, HellaSwag, Sentiment Negation Recognition
-
- ### Languages
-
- `ko-KR`
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### KB-BoolQ
- An example of a data point looks as follows.
- ```
- {'paragraph': '두아 리파(Dua Lipa, 1995년 8월 22일 ~ )는 잉글랜드의 싱어송라이터, 모델이다. BBC 사운드 오브 2016 명단에 노미닛되었다. 싱글 "Be the One"가 영국 싱글 차트 9위까지 오르는 등 성과를 보여주었다.',
-  'question': '두아 리파는 영국인인가?',
-  'label': 1}
- ```
-
- #### KB-COPA
- An example of a data point looks as follows.
- ```
- {'premise': '물을 오래 끓였다.',
-  'question': '결과',
-  'alternative_1': '물의 양이 늘어났다.',
-  'alternative_2': '물의 양이 줄어들었다.',
-  'label': 1}
- ```
-
- #### KB-WiC
- An example of a data point looks as follows.
- ```
- {'word': '양분',
-  'context_1': '토양에 [양분]이 풍부하여 나무가 잘 자란다. ',
-  'context_2': '태아는 모체로부터 [양분]과 산소를 공급받게 된다.',
-  'label': 1}
- ```
-
- #### KB-HellaSwag
- An example of a data point looks as follows.
- ```
- {'context': '모자를 쓴 투수가 타자에게 온 힘을 다해 공을 던진다. 공이 타자에게 빠른 속도로 다가온다. 타자가 공을 배트로 친다. 배트에서 깡 소리가 난다. 공이 하늘 위로 날아간다.',
-  'ending_1': '외야수가 떨어지는 공을 글러브로 잡는다.',
-  'ending_2': '외야수가 공이 떨어질 위치에 자리를 잡는다.',
-  'ending_3': '심판이 아웃을 외친다.',
-  'ending_4': '외야수가 공을 따라 뛰기 시작한다.',
-  'label': 3}
- ```
-
- #### KB-SentiNeg
- An example of a data point looks as follows.
- ```
- {'sentence': '택배사 정말 마음에 듬',
-  'label': 1}
- ```
-
- ### Data Fields
-
- #### KB-BoolQ
- + `paragraph`: a `string` feature
- + `question`: a `string` feature
- + `label`: a classification label, with possible values `False`(0) and `True`(1)
-
- #### KB-COPA
- + `premise`: a `string` feature
- + `question`: a `string` feature
- + `alternative_1`: a `string` feature
- + `alternative_2`: a `string` feature
- + `label`: an answer candidate label, with possible values `alternative_1`(0) and `alternative_2`(1)
-
- #### KB-WiC
- + `word`: a `string` feature
- + `context_1`: a `string` feature
- + `context_2`: a `string` feature
- + `label`: a classification label, with possible values `False`(0) and `True`(1)
-
- #### KB-HellaSwag
- + `context`: a `string` feature
- + `ending_1`: a `string` feature
- + `ending_2`: a `string` feature
- + `ending_3`: a `string` feature
- + `ending_4`: a `string` feature
- + `label`: an answer candidate label, with possible values `ending_1`(0), `ending_2`(1), `ending_3`(2), and `ending_4`(3)
-
- #### KB-SentiNeg
- + `sentence`: a `string` feature
- + `label`: a classification label, with possible values `Negative`(0) and `Positive`(1)
-
- ### Data Splits
-
- #### KB-BoolQ
-
- + train: 3,665
- + dev: 700
- + test: 1,404
-
- #### KB-COPA
-
- + train: 3,076
- + dev: 1,000
- + test: 1,000
-
- #### KB-WiC
-
- + train: 3,318
- + dev: 1,260
- + test: 1,260
-
- #### KB-HellaSwag
-
- + train: 3,665
- + dev: 700
- + test: 1,404
-
- #### KB-SentiNeg
-
- + train: 3,649
- + dev: 400
- + test: 397
- + test_originated: 397 (the original sentences from which the test set was derived)
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- CC BY-SA 4.0
-
- ### Citation Information
-
- ```
- @misc{https://doi.org/10.48550/arxiv.2204.04541,
-   doi = {10.48550/ARXIV.2204.04541},
-   url = {https://arxiv.org/abs/2204.04541},
-   author = {Kim, Dohyeong and Jang, Myeongjun and Kwon, Deuk Sin and Davis, Eric},
-   title = {KOBEST: Korean Balanced Evaluation of Significant Tasks},
-   publisher = {arXiv},
-   year = {2022},
- }
- ```
-
- ### Contributions
-
- Thanks to [@MJ-Jang](https://github.com/MJ-Jang) for adding this dataset.
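The Data Instances and Data Fields sections above correspond one-to-one to the five configs that the rest of this commit ships as Parquet. A short sketch that loads each config and decodes its `label` ClassLabel, assuming the repository ID `skt/kobest_v1`:

```python
# Minimal sketch: print the fields and decoded label of one validation example
# per config. Assumes the Hub ID "skt/kobest_v1" and network access.
from datasets import load_dataset

for name in ["boolq", "copa", "wic", "hellaswag", "sentineg"]:
    ds = load_dataset("skt/kobest_v1", name, split="validation")
    example = ds[0]
    label = ds.features["label"]  # ClassLabel described under Data Fields
    print(name, sorted(example.keys()))
    print("  label:", example["label"], "->", label.int2str(example["label"]))
```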
 
boolq/kobest_v1-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b65527c2733e465ee0f90b90e5a1d5ede31830b1d72765eceacbe52e69ec3315
+ size 477214
boolq/kobest_v1-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe899aa989197bfd27fbb29bf24a2f37aeb971053a5c18051405a39ef5079c33
+ size 1274217
boolq/kobest_v1-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5941226162548d1066812f9f9fdeac5f5ef69b8abe36f561c0803c3c48455259
+ size 241025
copa/kobest_v1-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2778c6504e7dfc7a24dfdf7b36fa2eb4f5c4a8e01575b24bb9f6d47ea79ea31f
+ size 74395
copa/kobest_v1-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9cefb04918a09bbdc4187ce46521d7e3fc8fac2de1a0049c9c0784dde1b66d12
+ size 223103
copa/kobest_v1-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c2e76702a6b665b37710d52e170841fb98d3c3ad54e8c3ee491e43a7e98fc8c
+ size 39447
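Each `ADDED` entry above is a Git LFS pointer (a `version` line, a sha256 `oid`, and a `size` in bytes) rather than the Parquet bytes themselves, which are stored in Git LFS. A hedged sketch of fetching one of the listed files and reading it with pandas, assuming the repository ID `skt/kobest_v1` and that `huggingface_hub` plus a Parquet engine such as pyarrow are installed:

```python
# Sketch: resolve one LFS-tracked Parquet file listed above and read it locally.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="skt/kobest_v1",  # assumption: the repo name is truncated in the page header
    repo_type="dataset",
    filename="boolq/kobest_v1-validation.parquet",
)
df = pd.read_parquet(path)
print(df.shape)             # the card lists 700 examples for the BoolQ dev split
print(df.columns.tolist())  # expected: paragraph, question, label
```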
dataset_infos.json DELETED
@@ -1,224 +0,0 @@
- {
-     "boolq": {
-         "description": " Korean Balanced Evaluation of Significant Tasks Benchmark\n",
-         "citation": " TBD\n",
-         "homepage": "https://github.com/SKT-LSL/KoBEST_datarepo",
-         "license": "",
-         "features": {
-             "paragraph": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "question": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "label": {
-                 "num_classes": 2,
-                 "names": [
-                     "False",
-                     "True"
-                 ],
-                 "names_file": null,
-                 "id": null,
-                 "_type": "ClassLabel"
-             }
-         },
-         "post_processed": null,
-         "supervised_keys": null,
-         "builder_name": "kobest_v1",
-         "config_name": "boolq",
-         "version": {
-             "version_str": "1.0.0",
-             "description": "",
-             "major": 1,
-             "minor": 0,
-             "patch": 0
-         }
-     },
-     "copa": {
-         "description": " Korean Balanced Evaluation of Significant Tasks Benchmark\n",
-         "citation": " TBD\n",
-         "homepage": "https://github.com/SKT-LSL/KoBEST_datarepo",
-         "license": "",
-         "features": {
-             "premise": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "question": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "alternative_1": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "alternative_2": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "label": {
-                 "num_classes": 2,
-                 "names": [
-                     "alternative_1",
-                     "alternative_2"
-                 ],
-                 "names_file": null,
-                 "id": null,
-                 "_type": "ClassLabel"
-             }
-         },
-         "post_processed": null,
-         "supervised_keys": null,
-         "builder_name": "kobest_v1",
-         "config_name": "copa",
-         "version": {
-             "version_str": "1.0.0",
-             "description": "",
-             "major": 1,
-             "minor": 0,
-             "patch": 0
-         }
-     },
-     "wic": {
-         "description": " Korean Balanced Evaluation of Significant Tasks Benchmark\n",
-         "citation": " TBD\n",
-         "homepage": "https://github.com/SKT-LSL/KoBEST_datarepo",
-         "license": "",
-         "features": {
-             "word": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "context_1": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "context_2": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "label": {
-                 "num_classes": 2,
-                 "names": [
-                     "False",
-                     "True"
-                 ],
-                 "names_file": null,
-                 "id": null,
-                 "_type": "ClassLabel"
-             }
-         },
-         "post_processed": null,
-         "supervised_keys": null,
-         "builder_name": "kobest_v1",
-         "config_name": "wic",
-         "version": {
-             "version_str": "1.0.0",
-             "description": "",
-             "major": 1,
-             "minor": 0,
-             "patch": 0
-         }
-     },
-     "hellaswag": {
-         "description": " Korean Balanced Evaluation of Significant Tasks Benchmark\n",
-         "citation": " TBD\n",
-         "homepage": "https://github.com/SKT-LSL/KoBEST_datarepo",
-         "license": "",
-         "features": {
-             "context": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "ending_1": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "ending_2": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "ending_3": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "ending_4": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "label": {
-                 "num_classes": 4,
-                 "names": [
-                     "ending_1",
-                     "ending_2",
-                     "ending_3",
-                     "ending_4"
-                 ],
-                 "names_file": null,
-                 "id": null,
-                 "_type": "ClassLabel"
-             }
-         },
-         "post_processed": null,
-         "supervised_keys": null,
-         "builder_name": "kobest_v1",
-         "config_name": "hellaswag",
-         "version": {
-             "version_str": "1.0.0",
-             "description": "",
-             "major": 1,
-             "minor": 0,
-             "patch": 0
-         }
-     },
-     "sentineg": {
-         "description": " Korean Balanced Evaluation of Significant Tasks Benchmark\n",
-         "citation": " TBD\n",
-         "homepage": "https://github.com/SKT-LSL/KoBEST_datarepo",
-         "license": "",
-         "features": {
-             "sentence": {
-                 "dtype": "string",
-                 "id": null,
-                 "_type": "Value"
-             },
-             "label": {
-                 "num_classes": 2,
-                 "names": [
-                     "negative",
-                     "positive"
-                 ],
-                 "names_file": null,
-                 "id": null,
-                 "_type": "ClassLabel"
-             }
-         },
-         "post_processed": null,
-         "supervised_keys": null,
-         "builder_name": "kobest_v1",
-         "config_name": "sentineg",
-         "version": {
-             "version_str": "1.0.0",
-             "description": "",
-             "major": 1,
-             "minor": 0,
-             "patch": 0
-         }
-     }
- }
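The deleted `dataset_infos.json` records the same schema that `kobest_v1.py` (shown below) builds programmatically: string `Value` columns plus a `ClassLabel` whose `names` list fixes the integer-to-string mapping for `label`. A standalone sketch reconstructing the `copa` entry with the `datasets` feature types; nothing is downloaded, and the field names are taken from the JSON above:

```python
# Sketch: rebuild the "copa" feature schema recorded in dataset_infos.json.
from datasets import ClassLabel, Features, Value

copa_features = Features(
    {
        "premise": Value("string"),
        "question": Value("string"),
        "alternative_1": Value("string"),
        "alternative_2": Value("string"),
        # num_classes=2 with names ["alternative_1", "alternative_2"]:
        # label 0 means alternative_1, label 1 means alternative_2.
        "label": ClassLabel(names=["alternative_1", "alternative_2"]),
    }
)

print(copa_features["label"].int2str(1))                # -> "alternative_2"
print(copa_features["label"].str2int("alternative_1"))  # -> 0
```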
 
hellaswag/kobest_v1-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:454a4b2fa2b0e883bc3c5c24b461f9a170951f27caac01bb5b6a3c5bbacf3090
+ size 187769
hellaswag/kobest_v1-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db0716093214001c5cb3a60e9446bb8e6a5f33294ca4a49f367143da696a1213
+ size 702109
hellaswag/kobest_v1-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7de29f4d77ca87a5c8b39a2a49f76d98d977e93f75cb0346821ed60255e0f9d7
+ size 183618
kobest_v1.py DELETED
@@ -1,241 +0,0 @@
- """Korean Balanced Evaluation of Significant Tasks"""
-
-
- import csv
-
- import pandas as pd
-
- import datasets
-
-
- _CITATION = """\
- @misc{https://doi.org/10.48550/arxiv.2204.04541,
-   doi = {10.48550/ARXIV.2204.04541},
-   url = {https://arxiv.org/abs/2204.04541},
-   author = {Kim, Dohyeong and Jang, Myeongjun and Kwon, Deuk Sin and Davis, Eric},
-   title = {KOBEST: Korean Balanced Evaluation of Significant Tasks},
-   publisher = {arXiv},
-   year = {2022},
- }
- """
-
- _DESCRIPTION = """\
- The dataset contains data for the KoBEST benchmark.
- """
-
- _URL = "https://github.com/SKT-LSL/KoBEST_datarepo/raw/main"
-
-
- _DATA_URLS = {
-     "boolq": {
-         "train": _URL + "/v1.0/BoolQ/train.tsv",
-         "dev": _URL + "/v1.0/BoolQ/dev.tsv",
-         "test": _URL + "/v1.0/BoolQ/test.tsv",
-     },
-     "copa": {
-         "train": _URL + "/v1.0/COPA/train.tsv",
-         "dev": _URL + "/v1.0/COPA/dev.tsv",
-         "test": _URL + "/v1.0/COPA/test.tsv",
-     },
-     "sentineg": {
-         "train": _URL + "/v1.0/SentiNeg/train.tsv",
-         "dev": _URL + "/v1.0/SentiNeg/dev.tsv",
-         "test": _URL + "/v1.0/SentiNeg/test.tsv",
-         "test_originated": _URL + "/v1.0/SentiNeg/test.tsv",
-     },
-     "hellaswag": {
-         "train": _URL + "/v1.0/HellaSwag/train.tsv",
-         "dev": _URL + "/v1.0/HellaSwag/dev.tsv",
-         "test": _URL + "/v1.0/HellaSwag/test.tsv",
-     },
-     "wic": {
-         "train": _URL + "/v1.0/WiC/train.tsv",
-         "dev": _URL + "/v1.0/WiC/dev.tsv",
-         "test": _URL + "/v1.0/WiC/test.tsv",
-     },
- }
-
- _LICENSE = "CC-BY-SA-4.0"
-
-
- class KoBESTConfig(datasets.BuilderConfig):
-     """Config for building KoBEST"""
-
-     def __init__(self, description, data_url, citation, url, **kwargs):
-         """
-         Args:
-             description: `string`, brief description of the dataset
-             data_url: `dictionary`, dict with url for each split of data.
-             citation: `string`, citation for the dataset.
-             url: `string`, url for information about the dataset.
-             **kwargs: keyword arguments forwarded to super
-         """
-         super(KoBESTConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
-         self.description = description
-         self.data_url = data_url
-         self.citation = citation
-         self.url = url
-
-
- class KoBEST(datasets.GeneratorBasedBuilder):
-     BUILDER_CONFIGS = [
-         KoBESTConfig(name=name, description=_DESCRIPTION, data_url=_DATA_URLS[name], citation=_CITATION, url=_URL)
-         for name in ["boolq", "copa", "sentineg", "hellaswag", "wic"]
-     ]
-     BUILDER_CONFIG_CLASS = KoBESTConfig
-
-     def _info(self):
-         features = {}
-         if self.config.name == "boolq":
-             labels = ["False", "True"]
-             features["paragraph"] = datasets.Value("string")
-             features["question"] = datasets.Value("string")
-             features["label"] = datasets.features.ClassLabel(names=labels)
-
-         if self.config.name == "copa":
-             labels = ["alternative_1", "alternative_2"]
-             features["premise"] = datasets.Value("string")
-             features["question"] = datasets.Value("string")
-             features["alternative_1"] = datasets.Value("string")
-             features["alternative_2"] = datasets.Value("string")
-             features["label"] = datasets.features.ClassLabel(names=labels)
-
-         if self.config.name == "wic":
-             labels = ["False", "True"]
-             features["word"] = datasets.Value("string")
-             features["context_1"] = datasets.Value("string")
-             features["context_2"] = datasets.Value("string")
-             features["label"] = datasets.features.ClassLabel(names=labels)
-
-         if self.config.name == "hellaswag":
-             labels = ["ending_1", "ending_2", "ending_3", "ending_4"]
-             features["context"] = datasets.Value("string")
-             features["ending_1"] = datasets.Value("string")
-             features["ending_2"] = datasets.Value("string")
-             features["ending_3"] = datasets.Value("string")
-             features["ending_4"] = datasets.Value("string")
-             features["label"] = datasets.features.ClassLabel(names=labels)
-
-         if self.config.name == "sentineg":
-             labels = ["negative", "positive"]
-             features["sentence"] = datasets.Value("string")
-             features["label"] = datasets.features.ClassLabel(names=labels)
-
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION, features=datasets.Features(features), homepage=_URL, citation=_CITATION
-         )
-
-     def _split_generators(self, dl_manager):
-         train = dl_manager.download_and_extract(self.config.data_url["train"])
-         dev = dl_manager.download_and_extract(self.config.data_url["dev"])
-         test = dl_manager.download_and_extract(self.config.data_url["test"])
-
-         if self.config.data_url.get("test_originated"):
-             test_originated = dl_manager.download_and_extract(self.config.data_url["test_originated"])
-
-             return [
-                 datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train, "split": "train"}),
-                 datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": dev, "split": "dev"}),
-                 datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test, "split": "test"}),
-                 datasets.SplitGenerator(name="test_originated", gen_kwargs={"filepath": test_originated, "split": "test_originated"}),
-             ]
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train, "split": "train"}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": dev, "split": "dev"}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test, "split": "test"}),
-         ]
-
-     def _generate_examples(self, filepath, split):
-         if self.config.name == "boolq":
-             df = pd.read_csv(filepath, sep="\t")
-             df = df.dropna()
-             df = df[['Text', 'Question', 'Answer']]
-
-             df = df.rename(columns={
-                 'Text': 'paragraph',
-                 'Question': 'question',
-                 'Answer': 'label',
-             })
-             df['label'] = [0 if str(s) == 'False' else 1 for s in df['label'].tolist()]
-
-         elif self.config.name == "copa":
-             df = pd.read_csv(filepath, sep="\t")
-             df = df.dropna()
-             df = df[['sentence', 'question', '1', '2', 'Answer']]
-
-             df = df.rename(columns={
-                 'sentence': 'premise',
-                 'question': 'question',
-                 '1': 'alternative_1',
-                 '2': 'alternative_2',
-                 'Answer': 'label',
-             })
-             df['label'] = [i - 1 for i in df['label'].tolist()]
-
-         elif self.config.name == "wic":
-             df = pd.read_csv(filepath, sep="\t")
-             df = df.dropna()
-             df = df[['Target', 'SENTENCE1', 'SENTENCE2', 'ANSWER']]
-
-             df = df.rename(columns={
-                 'Target': 'word',
-                 'SENTENCE1': 'context_1',
-                 'SENTENCE2': 'context_2',
-                 'ANSWER': 'label',
-             })
-             df['label'] = [0 if str(s) == 'False' else 1 for s in df['label'].tolist()]
-
-         elif self.config.name == "hellaswag":
-             df = pd.read_csv(filepath, sep="\t")
-             df = df.dropna()
-             df = df[['context', 'choice1', 'choice2', 'choice3', 'choice4', 'label']]
-
-             df = df.rename(columns={
-                 'context': 'context',
-                 'choice1': 'ending_1',
-                 'choice2': 'ending_2',
-                 'choice3': 'ending_3',
-                 'choice4': 'ending_4',
-                 'label': 'label',
-             })
-
-         elif self.config.name == "sentineg":
-             df = pd.read_csv(filepath, sep="\t")
-             df = df.dropna()
-
-             if split == "test_originated":
-                 df = df[['Text_origin', 'Label_origin']]
-
-                 df = df.rename(columns={
-                     'Text_origin': 'sentence',
-                     'Label_origin': 'label',
-                 })
-             else:
-                 df = df[['Text', 'Label']]
-
-                 df = df.rename(columns={
-                     'Text': 'sentence',
-                     'Label': 'label',
-                 })
-
-         else:
-             raise NotImplementedError
-
-         for id_, row in df.iterrows():
-             features = {key: row[key] for key in row.keys()}
-             yield id_, features
-
-
- if __name__ == "__main__":
-     dataset = datasets.load_dataset("kobest_v1.py", "sentineg", ignore_verifications=True)
-     ds = dataset["test_originated"]
-     print(ds)
-
-     # for task in ["boolq", "copa", "wic", "hellaswag", "sentineg"]:
-     #     dataset = datasets.load_dataset("kobest_v1.py", task, ignore_verifications=True)
-     #     print(dataset)
-     #     print(dataset["train"]["label"])
-
-
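The only non-trivial preprocessing in `_generate_examples` above is label normalization: BoolQ and WiC answers arrive as the strings `'False'`/`'True'` and are mapped to 0/1, while COPA answers arrive as 1/2 and are shifted to the 0-based `ClassLabel` indices. A toy illustration of the same idioms on made-up column values (no download involved):

```python
# Sketch of the label mapping used in _generate_examples, on hypothetical rows.
import pandas as pd

# BoolQ / WiC: string answers "False"/"True" become 0/1.
boolq_like = pd.DataFrame({"label": ["False", "True", "True"]})
boolq_like["label"] = [0 if str(s) == "False" else 1 for s in boolq_like["label"]]
print(boolq_like["label"].tolist())   # [0, 1, 1]

# COPA: answers 1/2 in the source TSV become the 0-based indices 0/1.
copa_like = pd.DataFrame({"label": [1, 2, 2]})
copa_like["label"] = [i - 1 for i in copa_like["label"]]
print(copa_like["label"].tolist())    # [0, 1, 1]
```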
 
sentineg/kobest_v1-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:06a5c4340b781eb4c90cc572a718ce5b1716cc9cdcc56286feb0f4bffa697027
+ size 12631
sentineg/kobest_v1-test_originated.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb71fa250670ca3fbc0bd48bc66df54f936ab98008d0281d3886562057d37a8b
+ size 12645
sentineg/kobest_v1-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:53ea7b58cd258e0a7e85bf6460f352a13210f8e05146932e8a8a22349a581f84
+ size 117419
sentineg/kobest_v1-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e680c186b8225d75569c6ddb68275aec33c7748150f8711f50830db5ec74c4c5
+ size 14504
wic/kobest_v1-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c791e162147b3dc842d86ead4f70880b7ce51e1e6ac8e0a2c0536f93bf20f64
+ size 137665
wic/kobest_v1-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2ee4eda73dd19b252362f130feeaab9c2937da9c6ee0968713d8c0cf9c55144
+ size 369509
wic/kobest_v1-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab596a6fc6382b8329a8ba3093fb15bbe950bd4c17af7b746f2535597456059e
+ size 70894