{"forum": "yuD2q50HWv", "submission_url": "https://openreview.net/forum?id=yuD2q50HWv", "submission_content": {"keywords": ["commonsense reasoning", "natural language generation", "dataset", "generate commonsense reasoning", "compositional generalization"], "TL;DR": "A new NLG dataset for generative commonsense reasoning. The input is a set of concepts, and the output is a natural sentence that describes an everyday scenario.", "authorids": ["AKBC.ws/2020/Conference/Paper75/Authors"], "title": "CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning", "authors": ["Anonymous"], "pdf": "/pdf/fc1d19379fa40cab88149fbbdfd1ae0d1773efa3.pdf", "subject_areas": ["Relational AI"], "abstract": "Given a set of common concepts like {apple (noun), pick (verb), tree (noun)}, humans find it easy to write a sentence describing a grammatical and logically coherent scenario that covers these concepts, for example, {a boy picks an apple from a tree''}. The process of generating these sentences requires humans to use commonsense knowledge. We denote this ability as generative commonsense reasoning. Recent work in commonsense reasoning has focused mainly on discriminating the most plausible scenes from distractors via natural language understanding (NLU) settings such as multi-choice question answering. However, generative commonsense reasoning is a relatively unexplored research area, primarily due to the lack of a specialized benchmark dataset.\n\nIn this paper, we present a constrained natural language generation (NLG) dataset, named CommonGen, to explicitly challenge machines in generative commonsense reasoning. It consists of 30k concept-sets with human-written sentences as references. Crowd-workers were also asked to write the rationales (i.e. the commonsense facts) used for generating the sentences in the development and test sets. We conduct experiments on a variety of generation models with both automatic and human evaluation. 
Experimental results show that there is still a large gap between the current state-of-the-art pre-trained model, UniLM, and human performance.", "paperhash": "anonymous|commongen_a_constrained_text_generation_challenge_for_generative_commonsense_reasoning", "archival_status": "Non-Archival"}, "submission_cdate": 1581705814104, "submission_tcdate": 1581705814104, "submission_tmdate": 1588629592581, "submission_ddate": null, "review_id": ["wPVOdgi5XEB", "0J1JMfbVK-S", "QOvjL7fN-ZC"], "review_url": ["https://openreview.net/forum?id=yuD2q50HWv&noteId=wPVOdgi5XEB", "https://openreview.net/forum?id=yuD2q50HWv&noteId=0J1JMfbVK-S", "https://openreview.net/forum?id=yuD2q50HWv&noteId=QOvjL7fN-ZC"], "review_cdate": [1585320509886, 1585360296942, 1585521037023], "review_tcdate": [1585320509886, 1585360296942, 1585521037023], "review_tmdate": [1585695505474, 1585695505154, 1585695504879], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper75/AnonReviewer2"], ["AKBC.ws/2020/Conference/Paper75/AnonReviewer1"], ["AKBC.ws/2020/Conference/Paper75/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["yuD2q50HWv", "yuD2q50HWv", "yuD2q50HWv"], "review_content": [{"title": "Interesting New Task/Dataset with minor (addressable) issues", "review": "Using the notion of a concept (word or words and usage), the authors create a dataset of concept sets (collections of concepts), rationales (free-text descriptions of \u201ccommonsense\u201d links between concepts), and scenes (sentence(s) using these concepts together).\n\nThey select concept sets by sampling from existing image/video captions (attempting to control for certain distributional characteristics), generating human-written sentences, and finally generating rationales or justifications of the concepts in context (both via AMT).\n\nThey propose a final task of constrained text generation from the concept set, of both the \u201cscene\u201d and the associated rationales, and several metrics.\n\nThe authors experiment with several neural models, ranging from RNN-based approaches to transformer-based methods (pretrained and otherwise) for language generation.\n\nWhile I have some small concerns about the quality of the dataset construction and the presentation in the paper, I think this will be an asset to the community.\n\nQuality:\nThe paper itself is relatively well written, misses some related work, and has some issues mentioned in the clarity section.\n\nMy biggest concern is a lack of detail about explicit quality control in the annotation process. Any training, instructions, or other measures taken to ensure dataset quality should be documented, if not in the main paper, then at least in the appendix.\n\nIn terms of missing related work, recently the T5 model (Raffel et al., 2019) and the CTRL model (Keskar et al., 2019) have been used for controlled text generation.\n\nThe paper speculates but never shows that certain concepts would be over-represented if it sampled from candidate data.\n\nClarity:\nOverall the paper is relatively clear. 
I think it would be clearer if the paper did not use the words \u201cscene\u201d and \u201csentence\u201d interchangeably.\nTable 1 with dataset statistics is also missing rationale counts, which may be of interest to the community at large.\n\nThe notation used in Section 3.1 for the sampling weights should be revisited - it appears that the \u201csecond\u201d and \u201cthird\u201d terms were reversed. The discussion of the word-weighted, inverse-concept-connectedness-weighted sampling scheme would probably be clearer if phrased explicitly in the context of hypergraphs, where each hyper-edge is a concept set.\n\nSignificance:\nI believe that this dataset is likely to be a large benefit to the community. I have some reservations about the quality of the AMT annotations, particularly when no quality control measures are described.\n\nPros:\nFormalize an interesting and challenging task.\nCollect what appears to be a large dataset for this task.\nAttempt to account for distributional characteristics of the data.\nUse a variety of methods, including well-performing models.\nThere are unseen elements in the training/dev/test sets.\nThe paper measures some distributional characteristics of the dataset, such as how close the concepts in it are.\nSeveral reasonable metrics are proposed. As this is a new task, a variety can be important until the community settles on the best available measures.\n\nCons:\nNo documentation of annotator training or attempts at Quality Assurance/Control is provided. Some sort of material to this effect would strengthen the contribution, as the major contribution is the dataset.\nClarity issues when presenting methods.\nI am not convinced that the distributional issues the authors believe exist when sampling from real data would actually manifest.\nNo measurement of dataset distribution vs. real-world distribution.\n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Official Blind Review #1", "review": "The paper proposes a new task of commonsense generation: given a set of concepts, e.g., (cat, eat, outdoors, apple), write a sentence with commonsense phenomena. While the evaluation set is collected through crowdsourcing, the training set comes from existing captioning datasets. The paper proposes sequence-to-sequence models based on transformers as baselines. The paper is clearly written and the experiments are reasonable.\n\nI'm a bit uncomfortable calling image captions sentences that describe common sense. Don't sentences from news articles or Wikipedia also contain common sense? I wish the task were motivated better -- what can we say about models that do this task well? Can we use this dataset to show improvements on tasks that are more realistic, such as MT, semantic parsing, classification, or QA?\n\nI don't see much value in adding this new \"PivotBert\" score. Does this correlate with human scores better? Is it more interpretable or intuitive? It seems to rank systems similarly to other measures such as BLEU in Table 2.\n\nQuestions/Comments:\n\nIn Section 3.1, why do you run a part-of-speech tagger? Do you limit the concepts to be only nouns and verbs?\n\nIn Section 4, being more specific about UniLM would be helpful. On a related note, how does it compare to T5?\n\nIn 5.3, how many pairs did each annotator annotate? The setup should be more clearly described. 
\n\nIn the introduction, it would be better to clearly mention how many human references are collected through crowdsourcing and how many come from the existing captioning datasets.\n\nIs the performance on unseen concept sets worse than on concept sets that also exist in the training set?\n\nI didn't find Figure 2 particularly useful.\n", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Interesting new task, thorough baseline experiments and evaluations", "review": "# Summary\n\nThis paper introduces a new generative commonsense benchmark, CommonGen, in which a system is given a set of noun or verb words and is expected to generate a simple sentence that describes a commonsense scenario in our daily life. One unique challenge in this task is that it requires relational reasoning with commonsense. Spatial knowledge, object properties, human behavior or social conventions, temporal knowledge, and general commonsense are the dominant relationship types in CommonGen. The dataset is created by carefully collecting concepts, captions, and human annotations via Amazon Mechanical Turk. They experiment with several baselines, including the state-of-the-art UniLM, and evaluate the models' performance using a variety of evaluation metrics as well as human evaluation. The experimental results show that even state-of-the-art methods lag far behind human performance.\n\n# Pros\n- Introduces a new large-scale generative commonsense benchmark.\n- Thorough baseline experiments using state-of-the-art generation models, and evaluation using a variety of automatic evaluations and human evaluations.\n\n# Cons\nI don't have major concerns. Adding more qualitative analysis or showing a pair of input concepts and output (human-annotated) sentences in Section 3.3 or somewhere would help readers get a better sense of the task. Also, Table 5 should be moved to the main text, rather than kept in the Appendix, as you discuss its results in a whole subsection of the main text.\n", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1588299713910, "meta_review_tcdate": 1588299713910, "meta_review_tmdate": 1588341537561, "meta_review_ddate": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper introduces a constrained text generation challenge dataset called \"CommonGen\", in which the idea is to build models that accept concepts (nouns and verbs) and then generate plausible sentences conditioned on these. The idea is that doing this successfully requires some sort of \"common sense\" facts and reasoning. While there are some concerns about just how much \"common sense\" is necessarily required for the task, and also about the quality assurance processes put in place during data collection, this corpus nonetheless seems like an interesting new resource for the community.", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=yuD2q50HWv&noteId=WJo-k4wfWjd"], "decision": "Accept"}