Oleg Somov committed on
Commit
3a298f7
2 Parent(s): 41cafeb 1dff0e6

merge with main

Files changed (5)
  1. .gitattributes +1 -0
  2. .gitignore +4 -0
  3. README.md +174 -0
  4. formatted_pauq.zip +3 -0
  5. pauq.py +176 -0
.gitattributes CHANGED
@@ -53,3 +53,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ formatted_pauq.zip filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,4 @@
+ *.ipynb
+ .idea/
+ .ipynb_checkpoints/
+ .DS_Store
README.md CHANGED
@@ -1,3 +1,177 @@
  ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ - machine-generated
+ language:
+ - en
+ - ru
- license: cc-by-4.0
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ - multilingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - spider
+ task_categories:
+ - text2text-generation
+ task_ids: []
+ pretty_name: Pauq
+ tags:
+ - text-to-sql
+ dataset_info:
+   features:
+   - name: db_id
+     dtype: string
+   - name: query
+     dtype: string
+   - name: question
+     dtype: string
+   - name: query_toks
+     sequence: string
+   - name: query_toks_no_value
+     sequence: string
+   - name: question_toks
+     sequence: string
+   config_name: pauq
+   splits:
+   - name: train
+     num_bytes:
+     num_examples:
+   - name: validation
+     num_bytes:
+     num_examples:
+   download_size:
+   dataset_size:
  ---
+
+ # Dataset Card for PAUQ
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ ### Supported Tasks and Leaderboards
+
+ ### Languages
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ **What do the instances that comprise the dataset represent?**
+
+ Each instance is a natural language question paired with the equivalent SQL query.
+
+ **How many instances are there in total?**
+
+ **What data does each instance consist of?**
+
+ [More Information Needed]
+
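+ A hypothetical instance is sketched below. The field names are those produced
+ by the `pauq.py` loading script in this repo; every value is invented for
+ illustration and is not drawn from the data:
+
+ ```python
+ # Illustrative only: field names come from pauq.py, values are invented.
+ example = {
+     "id": "TS_0001",                        # hypothetical sample id
+     "db_id": "concert_singer",              # hypothetical database name
+     "source": "spider-train",               # hypothetical provenance tag
+     "type": "train",                        # hypothetical sample type
+     "question": "How many singers do we have?",
+     "query": "SELECT count(*) FROM singer",
+     "question_toks": ["How", "many", "singers", "do", "we", "have", "?"],
+     "query_toks": ["SELECT", "count", "(", "*", ")", "FROM", "singer"],
+     "query_toks_no_value": ["select", "count", "(", "*", ")", "from", "singer"],
+     "masked_query": "SELECT count(*) FROM singer",
+     # plus a "sql" field (a tokenized form of the query), omitted here
+ }
+ ```
+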
+ ### Data Fields
+
+ * **db_id**: Database name
+ * **question**: Natural language question to be interpreted into SQL
+ * **query**: Target SQL query
+ * **query_toks**: List of tokens for the query
+ * **query_toks_no_value**: List of tokens for the query with literal values masked
+ * **question_toks**: List of tokens for the question
+
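+ A minimal loading sketch, assuming a `datasets` version that still supports
+ dataset loading scripts (the config name comes from `pauq.py`; the local
+ script path is used here rather than a hub id):
+
+ ```python
+ import datasets
+
+ # "ru_pauq_iid" is one of the four configs declared in pauq.py
+ # (ru/en crossed with iid/tl).
+ pauq = datasets.load_dataset("pauq.py", "ru_pauq_iid")
+
+ sample = pauq["train"][0]
+ print(sample["question"])  # natural language question
+ print(sample["query"])     # target SQL query
+ ```
+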
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ #### Who are the annotators?
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ ## Additional Information
+
+ The authors listed on the homepage maintain and support the dataset.
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The source Spider dataset is licensed under
+ the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
+
+ [More Information Needed]
+
+ ### Citation Information
+
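+ The PAUQ paper (BibTeX reproduced from the `_CITATION` string in `pauq.py`):
+
+ ```
+ @inproceedings{bakshandaeva-etal-2022-pauq,
+     title = "{PAUQ}: Text-to-{SQL} in {R}ussian",
+     author = "Bakshandaeva, Daria and
+       Somov, Oleg and
+       Dmitrieva, Ekaterina and
+       Davydova, Vera and
+       Tutubalina, Elena",
+     booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
+     month = dec,
+     year = "2022",
+     address = "Abu Dhabi, United Arab Emirates",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2022.findings-emnlp.175",
+ }
+ ```
+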
+ ### Contributions
formatted_pauq.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10a3c6adf4f65df322c855607fb1f10c5a94f3e5b441d4928d8f57288742efaf
+ size 308248709
pauq.py ADDED
@@ -0,0 +1,176 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PAUQ: Text-to-SQL in Russian"""
+
+ import json
+ import os
+
+ import datasets
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _CITATION = """\
+ @inproceedings{bakshandaeva-etal-2022-pauq,
+     title = "{PAUQ}: Text-to-{SQL} in {R}ussian",
+     author = "Bakshandaeva, Daria and
+       Somov, Oleg and
+       Dmitrieva, Ekaterina and
+       Davydova, Vera and
+       Tutubalina, Elena",
+     booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
+     month = dec,
+     year = "2022",
+     address = "Abu Dhabi, United Arab Emirates",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2022.findings-emnlp.175",
+ }
+ """
+
+ _DESCRIPTION = """\
+ PAUQ is the first Russian text-to-SQL dataset, translated from the original Spider
+ dataset with corrections and refinements of the questions, queries and databases.
+ """
+
+ _LICENSE = "CC BY-SA 4.0"
+
+ # No dedicated homepage is defined in this script; the paper page is used
+ # so that _info() below does not reference an undefined name.
+ _HOMEPAGE = "https://aclanthology.org/2022.findings-emnlp.175"
+
+ _URL = "https://huggingface.co/datasets/pauq/formatted_pauq.zip"
+
+
+ class Pauq(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="ru_pauq_tl",
+             version=VERSION,
+             description=_DESCRIPTION,
+         ),
+         datasets.BuilderConfig(
+             name="en_pauq_tl",
+             version=VERSION,
+             description=_DESCRIPTION,
+         ),
+         datasets.BuilderConfig(
+             name="ru_pauq_iid",
+             version=VERSION,
+             description=_DESCRIPTION,
+         ),
+         datasets.BuilderConfig(
+             name="en_pauq_iid",
+             version=VERSION,
+             description=_DESCRIPTION,
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "id": datasets.Value("string"),
+                 "db_id": datasets.Value("string"),
+                 "source": datasets.Value("string"),
+                 "type": datasets.Value("string"),
+                 "question": datasets.Value("string"),
+                 "query": datasets.Value("string"),
+                 "sql": datasets.features.Sequence(datasets.Value("string")),
+                 "question_toks": datasets.features.Sequence(datasets.Value("string")),
+                 "query_toks": datasets.features.Sequence(datasets.Value("string")),
+                 "query_toks_no_value": datasets.features.Sequence(datasets.Value("string")),
+                 "masked_query": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         downloaded_filepath = dl_manager.download_and_extract(_URL)
+
+         # The archive unpacks to a "splits/" directory holding
+         # {ru,en}_{iid,tl}_{train,test}.json. Each config selects one
+         # train/test pair; registering all eight files at once would
+         # create duplicate TRAIN/TEST split names.
+         lang, _, split_kind = self.config.name.split("_")  # e.g. "ru_pauq_iid"
+         prefix = f"{lang}_{split_kind}"  # e.g. "ru_iid"
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "data_filepath": os.path.join(downloaded_filepath, f"splits/{prefix}_train.json"),
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "data_filepath": os.path.join(downloaded_filepath, f"splits/{prefix}_test.json"),
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, data_filepath):
+         """This function returns the examples in the raw (text) form."""
+         logger.info("generating examples from = %s", data_filepath)
+         with open(data_filepath, encoding="utf-8") as f:
+             pauq = json.load(f)
+             for idx, sample in enumerate(pauq):
+                 yield idx, {
+                     "id": sample["id"],
+                     "db_id": sample["db_id"],
+                     "source": sample["source"],
+                     "type": sample["type"],
+                     "query": sample["query"],
+                     "sql": sample["sql"],
+                     "question": sample["question"],
+                     "question_toks": sample["question_toks"],
+                     "query_toks": sample["query_toks"],
+                     "query_toks_no_value": sample["query_toks_no_value"],
+                     "masked_query": sample["masked_query"],
+                 }
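+
+ # Quick smoke test for the loader (a sketch; run from the repo root with a
+ # datasets version that still supports loading scripts):
+ #
+ #   import datasets
+ #   builder = datasets.load_dataset_builder("pauq.py", "en_pauq_iid")
+ #   print(builder.info.features)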