---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: query
    dtype: string
  - name: question
    dtype: string
  - name: table_names
    sequence: string
  - name: tables
    sequence: string
  - name: answer
    dtype: string
  - name: source
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: train
    num_bytes: 2203191673
    num_examples: 6715
  - name: validation
    num_bytes: 434370435
    num_examples: 985
  download_size: 535322409
  dataset_size: 2637562108
task_categories:
- table-question-answering
---
# Dataset Card for "spider-tableQA"

spider-tableQA is a Spider-based multi-table question answering dataset used to train MultiTabQA (Pal et al., ACL 2023). Each sample pairs a natural-language question and its SQL query with the named input tables it operates over and the tabular answer, plus flattened `source`/`target` strings for sequence-to-sequence training.

# Usage
```python
import io

import pandas as pd
from datasets import load_dataset

spider_tableQA = load_dataset("vaishali/spider-tableQA")

for sample in spider_tableQA["train"]:
    question = sample["question"]            # natural-language question
    sql_query = sample["query"]              # SQL query corresponding to the question
    input_table_names = sample["table_names"]

    # Tables and the answer are stored as JSON strings (orient="split").
    # Wrapping them in StringIO avoids the literal-string deprecation
    # warning in pandas >= 2.1.
    input_tables = [
        pd.read_json(io.StringIO(table), orient="split")
        for table in sample["tables"]
    ]
    answer = pd.read_json(io.StringIO(sample["answer"]), orient="split")

    # Flattened input/output strings for seq2seq models
    input_to_model = sample["source"]
    target = sample["target"]
```
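The `tables` and `answer` fields use pandas' `orient="split"` JSON layout (separate `columns`, `index`, and `data` keys). A minimal sketch of round-tripping a table through that format, using a toy table rather than actual dataset contents:

```python
import io

import pandas as pd

# A toy table standing in for one of the dataset's input tables
df = pd.DataFrame({"id": [1, 2], "name": ["alice", "bob"]})

# Serialize to the same layout the dataset uses for `tables` and `answer`
serialized = df.to_json(orient="split")

# Deserialize back; StringIO avoids the literal-string deprecation in pandas >= 2.1
restored = pd.read_json(io.StringIO(serialized), orient="split")

print(restored.equals(df))
```

This is the same serialization you would use to package new tables in the dataset's format.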
# BibTeX entry and citation info
```
@inproceedings{pal-etal-2023-multitabqa,
    title = "{M}ulti{T}ab{QA}: Generating Tabular Answers for Multi-Table Question Answering",
    author = "Pal, Vaishali  and
      Yates, Andrew  and
      Kanoulas, Evangelos  and
      de Rijke, Maarten",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.348",
    doi = "10.18653/v1/2023.acl-long.348",
    pages = "6322--6334",
    abstract = "Recent advances in tabular question answering (QA) with large language models are constrained in their coverage and only answer questions over a single table. However, real-world queries are complex in nature, often over multiple tables in a relational database or web page. Single table questions do not involve common table operations such as set operations, Cartesian products (joins), or nested queries. Furthermore, multi-table operations often result in a tabular output, which necessitates table generation capabilities of tabular QA models. To fill this gap, we propose a new task of answering questions over multiple tables. Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers. To enable effective training, we build a pre-training dataset comprising of 132,645 SQL queries and tabular answers. Further, we evaluate the generated tables by introducing table-specific metrics of varying strictness assessing various levels of granularity of the table structure. MultiTabQA outperforms state-of-the-art single table QA models adapted to a multi-table QA setting by finetuning on three datasets: Spider, Atis and GeoQuery.",
}
```

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)