AMSR / conferences_raw /akbc20 /AKBC.ws_2020_Conference_5c_ZmAdVfI.json
mfromm's picture
Upload 3539 files
fad35ef
raw
history blame contribute delete
No virus
15.3 kB
{"forum": "5c_ZmAdVfI", "submission_url": "https://openreview.net/forum?id=5c_ZmAdVfI", "submission_content": {"keywords": ["Semantic Parsing", "NLIDB", "WikiSQL", "Question Answering", "SQL", "Information Retrieval"], "TL;DR": "Syntactic Question Abstraction and Retrieval for Data-Scarce Semantic Parsing", "authorids": ["AKBC.ws/2020/Conference/Paper30/Authors"], "title": "Syntactic Question Abstraction and Retrieval for Data-Scarce Semantic Parsing", "authors": ["Anonymous"], "pdf": "/pdf/51c7819a764baf76d9c7cac220c7defd01757209.pdf", "subject_areas": ["QuestionAnswering and Reasoning", "Machine Learning"], "abstract": "Deep learning approaches to semantic parsing require a large amount of labeled data, but annotating complex logical forms is costly. Here, we propose SYNTACTIC QUESTION ABSTRACTION & RETRIEVAL (SQAR), a method to build a neural semantic parser that translates a natural language (NL) query to a SQL logical form (LF) with less than 1,000 annotated examples. SQAR first retrieves a logical pattern from the train data by computing the similarity between NL queries and then grounds a lexical information on the retrieved pattern in order to generate the final LF. We validate SQAR by training models using various small subsets of WikiSQL train data achieving up to 4.9% higher LF accuracy compared to the previous state-of-the-art models on WikiSQL test set. We also show that by using query-similarity to retrieve logical pattern, SQAR can leverage a paraphrasing dataset achieving up to 5.9% higher LF accuracy compared to the case where SQAR is trained by using only WikiSQL data. In contrast to a simple pattern classification approach, SQAR can generate unseen logical patterns upon the addition of new examples without re-training the model. We also discuss an ideal way to create cost efficient and robust train datasets when the data distribution can be approximated under a data-hungry setting.", "paperhash": "anonymous|syntactic_question_abstraction_and_retrieval_for_datascarce_semantic_parsing", "archival_status": "Archival"}, "submission_cdate": 1581705796172, "submission_tcdate": 1581705796172, "submission_tmdate": 1588628486147, "submission_ddate": null, "review_id": ["onpm7KLjqnc", "9BbNFGD-Pz", "-26jVErmWR7"], "review_url": ["https://openreview.net/forum?id=5c_ZmAdVfI&noteId=onpm7KLjqnc", "https://openreview.net/forum?id=5c_ZmAdVfI&noteId=9BbNFGD-Pz", "https://openreview.net/forum?id=5c_ZmAdVfI&noteId=-26jVErmWR7"], "review_cdate": [1585315558631, 1585334273694, 1585531559465], "review_tcdate": [1585315558631, 1585334273694, 1585531559465], "review_tmdate": [1585695539491, 1585695539230, 1585695538953], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper30/AnonReviewer2"], ["AKBC.ws/2020/Conference/Paper30/AnonReviewer1"], ["AKBC.ws/2020/Conference/Paper30/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["5c_ZmAdVfI", "5c_ZmAdVfI", "5c_ZmAdVfI"], "review_content": [{"title": "Retrieval-Based Data-Scarce Semantic Parsing", "review": "The paper presents a retrieval-based semantic parsing method for the data-scarce setting. The model retrieves a logical pattern from the train data by computing the similarity between NL queries. Then lexicons are added to the retrieved pattern in order to generate the final LF. The motivation and the proposed method make sense to me. The experimental results also show that the approach improves the performance over several baseline methods. The only concern is that the experiments are only conducted on wikisql, which is not very data-hungry. The paper reduces the number of training set to simulate the setup. The paper could be improved by conducting experiments on more small-scale datasets. ", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "Solid Paper on low data semantic parsing with deep analysis", "review": "Summary: This paper introduces a modeling technique for Text-to-SQL semantic parsing designed to work well in data sparse regimes. The model works by first retrieving the most similar questions from the training set. The SQL logical form of retrieved questions are then \"ungrounded\" and the most common retrieved logical form pattern is then\n fed into a grounding network which insert entities from the question into the pattern to form the final parsed logical form.\n\nStrengths:\n- The authors demonstrate improved performance over a SQLova baseline in the small data regime and comparable performance when more data is available\n- The authors perform thoughtful \"generalization tests\" to investigate potential weaknesses or behaviors of their model (the dependence on logical pattern distribution, dependence on the dataset size and generalizing to unseen forms)\n- The separation of syntactic and lexical parts of the parsing process is interesting and sensible.\n- Their method is able to leverage additional question similarity resources which allow their model's performance to improve without requiring extra expensive parsing annotation.\n\nWeaknesses:\n- The parsing method is not compositional, harming its generalizability. The model can never generalize to patterns not seen in its database of patterns, but there may well be training signals in the dataset that would allow for this kind of behavior. \n- Performance gains, whilst certainly present, are relatively modest over SQLova in most settings.\n- The authors demonstrate that the model can generalize to patterns not seen at training time by adding extra data to the model's database at test time. Whilst this boosts performance, it seems the performance is worse than simply training the model again with the extra data. \n", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Good focused contribution showing the efficacy of retrieval based models for low resource semantic parsing", "review": "This paper describes a retrieval-based model which uses query-to-query similarity for the WikiSQL semantic parsing task. The method especially does well when labeled data is scarce.\n\nThe approach is simple yet effective. The paper is very well written, the experiments are incisive and clearly demonstrate the usefulness of the approach.\n\nCan we also compare this model with other supervised approaches such as Berant and Liang, and especially Finegan-Dollak et al.. This comparison will help the readers understand the value of the query similarity based non-parametric approach.\n\nIt would also be interesting to see how well this model does/fails when the queries become more and more complex and compositional. Current approach seems to work only for specific kinds of queries.\n\nThe syntactic information of the queries $q$ being used in the retriever module and the lexical representation $g$ being used in the grounder are obtained by slicing the encoding of the CLS token in the table aware BERT encoder. This is strange. What guarantees that the syntactic and semantic representation can be disentangled in this way. Please see this paper for a better way to do this: https://www.aclweb.org/anthology/N19-1254/\n\nA small description of SQLOVA would help.\n\ntypo: 141: environments", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["XnmNXDgGjz", "MeCQwq-tk4H", "ZAJhae45OOs", "SUmW0orkXYs"], "comment_cdate": [1586520904724, 1586517839593, 1586517430757, 1586515535888], "comment_tcdate": [1586520904724, 1586517839593, 1586517430757, 1586515535888], "comment_tmdate": [1586520904724, 1586517839593, 1586517430757, 1586515535888], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2020/Conference/Paper30/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper30/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper30/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper30/Authors", "AKBC.ws/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "We thank all the reviewers for acknowledging the value of our approach under a data scarce environment and the solid generalization test. ", "comment": "We thank all the reviewers for acknowledging the value of our approach under a data scarce environment and the solid generalization test. We also deeply appreciate productive comments from all reviewers giving insights about how SQAR can be improved further. In the future work, we will extend our work over compositional queries using various text-to-SQL tasks and improve SQAR to have better performance and generalization ability."}, {"title": "We thank the reviewer for critical reading of our manuscript indicating the strength and weakness keenly.", "comment": "We thank the reviewer for critical reading of our manuscript indicating the strength and weakness keenly. SQAR focuses on generating SQL queries with logical patterns \"observed during training\" with high accuracy under data scarce condition via retrieval approach that may potentially sacrifice generalization performance for unseen logical patterns. However, we found SQAR can also handle the query with unseen logical pattern during training at the inference step by including new examples to the dataset without re-training the model. Making SQAR use newly included examples more efficiently and be compositional boosting its performance further remains as our future work."}, {"title": "We thank the reviewer for acknowledging the value of our work and giving productive comments.", "comment": "We thank the reviewer for acknowledging the value of our work and giving productive comments. We reply to individual comments as below.\n\n[About comparison with other models]\nFinegan-Dollak et al. develop neural semantic parser that generates SQL query for given input query by selecting a logical pattern and corresponding entities from the utterance via LSTM classifier. Although the approach is similar to SQAR, retrieving logical pattern and grounding, the use of query similarity during retrieval process in SQAR has following merits: (1) by retrieving logical pattern via similarity in natural language space, paraphrasing datasets can be employed which is relatively easy to construct compared to semantic parsing datasets. Also, other non-SQL semantic parsing datasets can be employed to train SQAR. (2) SQAR can parse unseen logical patterns during training by adding new examples without re-training.\n\nBerant and Liang developed the model that first generates candidate logical forms and corresponding canonical utterances for given input utterance. The most similar utterance with respect to the input utterance among the generated ones is selected and corresponding logical form is used as the model output. Although SQAR also uses the similarity in natural language space, it circumvents the burden of generating candidate logical form and canonical utterance by directly retrieving them from examples.\nWe will update the \"Related works\" section of our manuscript accordingly.\n\n\n[More complex queries]\nWikiSQL was selected as our first test bed as it provides the large volume of examples facilitating the measurement of scale-dependent behaviour of SQAR and it consists of relatively simple queries. We will extend the task to parse more complex queries in the future.\n\n\n[About separation of syntactic (q-vector) and semantic (g-vector) information]\nWe thank the reviewer for providing valuable reference. During training, SQAR employs two losses: (1) the loss from the retrieval process estimated by Euclidean distance between q-vectors, and (2) the loss from the grounder. q-vectors should remove semantic information to minimize the loss as the questions with same syntactic information should map to the identical vector. On the other hand, g-vector should include semantic information to properly ground logical patterns although there is no guarantee that syntactic information is completely removed from it. The additional loss like word ordering suggested in Chen et al., may be employed in the future study to improve the separation process.\n\n\n[A brief description of SQLova]\nSQLova is a neural semantic parser that generates SQL queries. First, SQLova encodes question and table headers using table-aware BERT encoder. Next, it generates SQL queries via slot-filling approach by classifying individual components: aggregation operator and corresponding columns in select clause, and the number of conditions, column and corresponding operator and value in where clause.\nWe will update the \"Related works\" section of our manuscript accordingly.\n"}, {"title": "We appreciate the reviewer for acknowledging the value of our work and providing constructive comments.", "comment": "We appreciate the reviewer for acknowledging the value of our work and providing constructive comments. WikiSQL was selected as our first test bed as it provides the large volume of examples allowing us to analyze the model performance in different data scales. We will also employ other (small size) semantic parsing datasets to further solidify and develop retrieval based parsing approach under data-scarce environments in the future."}], "comment_replyto": ["5c_ZmAdVfI", "9BbNFGD-Pz", "-26jVErmWR7", "onpm7KLjqnc"], "comment_url": ["https://openreview.net/forum?id=5c_ZmAdVfI&noteId=XnmNXDgGjz", "https://openreview.net/forum?id=5c_ZmAdVfI&noteId=MeCQwq-tk4H", "https://openreview.net/forum?id=5c_ZmAdVfI&noteId=ZAJhae45OOs", "https://openreview.net/forum?id=5c_ZmAdVfI&noteId=SUmW0orkXYs"], "meta_review_cdate": 1588300564502, "meta_review_tcdate": 1588300564502, "meta_review_tmdate": 1588341534878, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper proposed a simple and effective retrieval-based approach for text-to-SQL semantic parsing for the data-scarce setting. The approach has been evaluated on the WikiSQL dataset and demonstrates gains over the previous best model SQLOVA when a small number of training examples were used. It also demonstrates a zero-shot ability to handle unseen logical patterns.\n\nAll the reviewers agreed that this paper is well-written and the approach is effective and well-justified in the experiments. Therefore, we recommend the acceptance of this paper.\n\nA major concern raised among the reviewers is whether this approach can be extended to other truly small semantic parsing datasets and more compositional logic forms. This is worth exploring and can leave it to future work. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=5c_ZmAdVfI&noteId=dY2igVWsGYv"], "decision": "Accept"}