Commit d3f7c56 (parent 1f8776b) by entslscheia: Update README.md
size_categories:
- n<1K
---

**Introduction**

In traditional knowledge base question answering (KBQA) methods, semantic parsing plays a crucial role. It requires a semantic parser to be extensively trained on a vast dataset of labeled examples, typically consisting of question-answer or question-program pairs. This necessity arises primarily because smaller models before the advent of large language models (LLMs) were data-hungry, needing extensive data to master tasks. Additionally, these methods often relied on the assumption that data is independent and identically distributed (i.i.d.), meaning the questions a model could answer needed to match the distribution of the training data. This demanded training data that covers a broad spectrum of the KB's entities and relations for the model to understand the KB adequately.

However, the rise of LLMs has shifted this paradigm. LLMs excel at learning from few (or even zero) in-context examples. They use natural language as a general vehicle of thought, enabling them to actively navigate and interact with KBs using auxiliary tools, without training on comprehensive datasets. This advance suggests that LLMs can sidestep the earlier limitations and eliminate the dependency on extensive, high-coverage training data.

Such a paradigm is usually encapsulated in the term "language agent" or "LLM agent". Existing KBQA datasets may not be ideal for evaluating this new paradigm, for two reasons: 1) many questions are single-hop queries over the KB, which fail to sufficiently challenge the capabilities of LLMs, and 2) established KBQA benchmarks contain tens of thousands of test questions, so evaluating the most capable models, such as GPT-4, on so many questions would be extremely costly and often unnecessary.

As a result, we curate KBQA-Agent to offer a more targeted KBQA evaluation for language agents. KBQA-Agent contains 500 complex questions over Freebase, drawn from three existing KBQA datasets: GrailQA, ComplexWebQuestions, and GraphQuestions. To further support future research, we also provide the ground-truth action sequence (i.e., tool invocations) that the language agent should take to answer each question.
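
Because each question comes with gold answer entities, one natural way to score an agent is set-level F1 between its predicted answers and the gold answers. This is a common KBQA metric rather than one prescribed by the dataset card, and the function below is only a minimal sketch; the entity identifiers in the example are made up:

```python
def answer_f1(predicted, gold):
    """Set-level F1 between predicted and gold answer entities."""
    pred, gold = set(predicted), set(gold)
    if not pred and not gold:
        return 1.0          # both empty: perfect agreement by convention
    if not pred or not gold:
        return 0.0          # one side empty: no overlap possible
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# One of two predictions is correct, and one of two gold answers is found.
print(answer_f1(["m.01", "m.02"], ["m.02", "m.03"]))  # → 0.5
```

Averaging this score over the 500 questions gives a single benchmark number; exact-match accuracy can be recovered by checking `pred == gold` instead.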

**Split**

KBQA-Agent targets a training-free setting (we used a one-shot demo in our original experiments), so there is only a single test split.

**Dataset Structure**

- **qid:** The unique ID of a question
- **s-expression:** The ground-truth logical form, from which we derive the ground-truth actions
- **answer:** The list of answer entities
- **question:** The input question
- **actions:** The ground-truth sequence of actions, derived from the s-expression
- **entities:** The topic entities mentioned in the question
- **source:** The source dataset of the question (e.g., GrailQA)
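
Based on the fields above, a single record might look like the sketch below. Only the field names come from this card; every value (and the `validate_record` helper) is illustrative, not taken from the dataset:

```python
# Schema sketch for one KBQA-Agent record. Field names follow the
# dataset card; the concrete values are invented placeholders.

REQUIRED_FIELDS = {"qid", "s-expression", "answer", "question",
                   "actions", "entities", "source"}

def validate_record(record: dict) -> bool:
    """Return True if the record carries every documented field."""
    return REQUIRED_FIELDS.issubset(record)

example = {
    "qid": "grailqa_0001",                 # unique question id (made up)
    "s-expression": "(AND ...)",           # gold logical form (placeholder)
    "answer": ["m.0abc12"],                # Freebase answer entity ids
    "question": "which team does the player play for?",
    "actions": [                           # gold tool-invocation sequence
        "get_relations(m.0xyz)",
        "get_neighbors(m.0xyz, r)",
    ],
    "entities": ["m.0xyz"],                # topic entities in the question
    "source": "GrailQA",                   # originating dataset
}

print(validate_record(example))  # → True
```

A loader would typically fetch the single test split with `datasets.load_dataset(..., split="test")` and run a check like this per record.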

**Citation**

If our paper or related resources prove valuable to your research, we kindly ask that you cite us. Please feel free to contact us with any inquiries.

```
@article{Gu2024Middleware,
  author  = {Yu Gu and Yiheng Shu and Hao Yu and Xiao Liu and Yuxiao Dong and Jie Tang and Jayanth Srinivasa and Hugo Latapie and Yu Su},
  title   = {Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments},
  journal = {arXiv preprint arXiv:2402.14672},
  year    = {2024}
}
```

Please also cite the original sources of KBQA-Agent:

**GrailQA:**
```
@inproceedings{grailqa,
  author    = {Yu Gu and Sue Kase and Michelle Vanni and Brian M. Sadler and Percy Liang and Xifeng Yan and Yu Su},
  title     = {Beyond {I.I.D.:} Three Levels of Generalization for Question Answering on Knowledge Bases},
  booktitle = {WWW '21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021},
  year      = {2021}
}
```

**ComplexWebQ:**
```
@inproceedings{cwq,
  author    = {Alon Talmor and Jonathan Berant},
  title     = {The Web as a Knowledge-Base for Answering Complex Questions},
  booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers)},
  year      = {2018}
}
```

**GraphQuestions:**
```
@inproceedings{graphq,
  author    = {Yu Su and Huan Sun and Brian M. Sadler and Mudhakar Srivatsa and Izzeddin Gur and Zenghui Yan and Xifeng Yan},
  title     = {On Generating Characteristic-rich Question Sets for QA Evaluation},
  booktitle = {Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016},
  year      = {2016}
}
```