Oleg Somov committed on
Commit
1dff0e6
0 Parent(s):

first version of pauq

Files changed (5)
  1. .gitattributes +1 -0
  2. .gitignore +4 -0
  3. README.md +172 -0
  4. formatted_pauq.zip +3 -0
  5. pauq.py +176 -0
.gitattributes ADDED
@@ -0,0 +1 @@
+ formatted_pauq.zip filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,4 @@
+ *.ipynb
+ .idea/
+ .ipynb_checkpoints/
+ .DS_Store
README.md ADDED
@@ -0,0 +1,172 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ - machine-generated
+ language:
+ - en
+ - ru
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ - multilingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - spider
+ task_categories:
+ - text2text-generation
+ task_ids: []
+ pretty_name: Pauq
+ tags:
+ - text-to-sql
+ dataset_info:
+   features:
+   - name: db_id
+     dtype: string
+   - name: query
+     dtype: string
+   - name: question
+     dtype: string
+   - name: query_toks
+     sequence: string
+   - name: query_toks_no_value
+     sequence: string
+   - name: question_toks
+     sequence: string
+   config_name: pauq
+   splits:
+   - name: train
+     num_bytes:
+     num_examples:
+   - name: validation
+     num_bytes:
+     num_examples:
+   download_size:
+   dataset_size:
+ ---
+
+ # Dataset Card for PAUQ
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ PAUQ is the first Russian text-to-SQL dataset, translated from the original Spider dataset with corrections and refinements of the questions, queries and databases.
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports text-to-SQL semantic parsing, framed as text2text generation: given a natural language question over a database, produce the SQL query that answers it.
+
+ ### Languages
+
+ Questions are provided in Russian and English (`ru`, `en`); the target queries are SQL.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ **What do the instances that comprise the dataset represent?**
+
+ Each instance is a natural language question paired with its equivalent SQL query.
+
+ **How many instances are there in total?**
+
+ **What data does each instance consist of?**
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ * **db_id**: Database name
+ * **question**: Natural language question to interpret into SQL
+ * **query**: Target SQL query
+ * **query_toks**: List of tokens for the query
+ * **query_toks_no_value**: List of tokens for the query with literal values replaced by placeholders
+ * **question_toks**: List of tokens for the question
+
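+ A minimal loading sketch (illustrative, not an official recipe): it assumes the `pauq.py` loading script in this repository and uses `ru_pauq_iid`, one of the four configs the script defines (`ru_pauq_iid`, `ru_pauq_tl`, `en_pauq_iid`, `en_pauq_tl`):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the Russian IID train/test pair via the local loading script.
+ dataset = load_dataset("pauq.py", "ru_pauq_iid")
+
+ sample = dataset["train"][0]
+ print(sample["question"])             # natural language question
+ print(sample["query"])                # target SQL query
+ print(sample["query_toks_no_value"])  # query tokens with literal values masked
+ ```
+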
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ #### Who are the annotators?
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ ## Additional Information
+
+ The authors listed on the homepage maintain and support the dataset.
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The original Spider dataset is licensed under
+ the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode).
+
+ [More Information Needed]
+
+ ### Citation Information
+
+
+ ### Contributions
formatted_pauq.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10a3c6adf4f65df322c855607fb1f10c5a94f3e5b441d4928d8f57288742efaf
+ size 308248709
pauq.py ADDED
@@ -0,0 +1,176 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PAUQ: Text-to-SQL in Russian"""
+
+
+ import json
+ import os
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """\
+ @inproceedings{bakshandaeva-etal-2022-pauq,
+     title = "{PAUQ}: Text-to-{SQL} in {R}ussian",
+     author = "Bakshandaeva, Daria and
+       Somov, Oleg and
+       Dmitrieva, Ekaterina and
+       Davydova, Vera and
+       Tutubalina, Elena",
+     booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
+     month = dec,
+     year = "2022",
+     address = "Abu Dhabi, United Arab Emirates",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2022.findings-emnlp.175",
+ }
+ """
+
+ _DESCRIPTION = """\
+ PAUQ is the first Russian text-to-SQL dataset, translated from the original Spider dataset
+ with corrections and refinements of the questions, queries and databases.
+ """
+
+ _HOMEPAGE = ""
+
+ _LICENSE = "CC BY-SA 4.0"
+
+ _URL = "https://huggingface.co/datasets/pauq/formatted_pauq.zip"
+
+
+ class Pauq(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="ru_pauq_tl",
+             version=VERSION,
+             description=_DESCRIPTION,
+         ),
+         datasets.BuilderConfig(
+             name="en_pauq_tl",
+             version=VERSION,
+             description=_DESCRIPTION,
+         ),
+         datasets.BuilderConfig(
+             name="ru_pauq_iid",
+             version=VERSION,
+             description=_DESCRIPTION,
+         ),
+         datasets.BuilderConfig(
+             name="en_pauq_iid",
+             version=VERSION,
+             description=_DESCRIPTION,
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "id": datasets.Value("string"),
+                 "db_id": datasets.Value("string"),
+                 "source": datasets.Value("string"),
+                 "type": datasets.Value("string"),
+                 "question": datasets.Value("string"),
+                 "query": datasets.Value("string"),
+                 "sql": datasets.features.Sequence(datasets.Value("string")),
+                 "question_toks": datasets.features.Sequence(datasets.Value("string")),
+                 "query_toks": datasets.features.Sequence(datasets.Value("string")),
+                 "query_toks_no_value": datasets.features.Sequence(datasets.Value("string")),
+                 "masked_query": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         downloaded_filepath = dl_manager.download_and_extract(_URL)
+
+         # Each builder config maps to one pair of split files inside the archive,
+         # e.g. "ru_pauq_iid" -> splits/ru_iid_train.json and splits/ru_iid_test.json.
+         split_prefixes = {
+             "ru_pauq_iid": "ru_iid",
+             "ru_pauq_tl": "ru_tl",
+             "en_pauq_iid": "en_iid",
+             "en_pauq_tl": "en_tl",
+         }
+         prefix = split_prefixes[self.config.name]
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "data_filepath": os.path.join(downloaded_filepath, f"splits/{prefix}_train.json"),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "data_filepath": os.path.join(downloaded_filepath, f"splits/{prefix}_test.json"),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, data_filepath):
+         """This function returns the examples in the raw (text) form."""
+         logger.info("generating examples from = %s", data_filepath)
+         with open(data_filepath, encoding="utf-8") as f:
+             pauq = json.load(f)
+             for idx, sample in enumerate(pauq):
+                 yield idx, {
+                     "id": sample["id"],
+                     "db_id": sample["db_id"],
+                     "source": sample["source"],
+                     "type": sample["type"],
+                     "query": sample["query"],
+                     "sql": sample["sql"],
+                     "question": sample["question"],
+                     "question_toks": sample["question_toks"],
+                     "query_toks": sample["query_toks"],
+                     "query_toks_no_value": sample["query_toks_no_value"],
+                     "masked_query": sample["masked_query"],
+                 }
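`_generate_examples` assumes every record in a split file carries the full set of keys declared in `_info`. A small, hypothetical sanity check over one extracted split file (the path is illustrative and follows the `splits/<prefix>_train.json` / `splits/<prefix>_test.json` layout used in `_split_generators`):

```python
import json

# Keys that _generate_examples reads from every record.
EXPECTED_KEYS = {
    "id", "db_id", "source", "type", "question", "query", "sql",
    "question_toks", "query_toks", "query_toks_no_value", "masked_query",
}

# Illustrative path; point it at wherever the archive was extracted.
with open("splits/ru_iid_train.json", encoding="utf-8") as f:
    samples = json.load(f)

incomplete = [s.get("id") for s in samples if not EXPECTED_KEYS.issubset(s)]
print(f"{len(samples)} samples, {len(incomplete)} missing at least one expected key")
```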