AMSR / conferences_raw / akbc20 / AKBC.ws_2020_Conference_025X0zPfn.json
{"forum": "025X0zPfn", "submission_url": "https://openreview.net/forum?id=025X0zPfn", "submission_content": {"keywords": [], "authorids": ["AKBC.ws/2020/Conference/Paper21/Authors"], "title": "How Context Affects Language Models' Factual Predictions", "authors": ["Anonymous"], "pdf": "/pdf/fb392d660e14778e46732d4a9e11832b2c2235db.pdf", "subject_areas": ["QuestionAnswering and Reasoning"], "abstract": "When pre-trained on large unsupervised textual corpora, language models are able to store and retrieve factual knowledge to some extent, making it possible to use them directly for zero-shot cloze-style question answering. However, storing factual knowledge in a fixed number of weights of a language model clearly has limitations. Previous approaches have successfully provided access to information outside the model weights using supervised architectures that combine an information retrieval system with a machine reading component. In this paper, we go one step further and integrate information from a retrieval system with a pre-trained language model in a purely unsupervised way. We report that augmenting pre-trained language models in this way dramatically improves performance and that it is competitive with a supervised machine reading baseline without requiring any supervised training. Furthermore, processing query and context with different segment tokens allows BERT to utilize its Next Sentence Prediction pre-trained classifier to determine whether the context is relevant or not, substantially improving BERT's zero-shot cloze-style question-answering performance and making its predictions robust to noisy contexts.", "paperhash": "anonymous|how_context_affects_language_models_factual_predictions", "archival_status": "Archival"}, "submission_cdate": 1581705792339, "submission_tcdate": 1581705792339, "submission_tmdate": 1588686583264, "submission_ddate": null, "review_id": ["8M0VEr58c", "ZdCIZyh4C9u", "uPkVYkBQqjs"], "review_url": ["https://openreview.net/forum?id=025X0zPfn&noteId=8M0VEr58c", "https://openreview.net/forum?id=025X0zPfn&noteId=ZdCIZyh4C9u", "https://openreview.net/forum?id=025X0zPfn&noteId=uPkVYkBQqjs"], "review_cdate": [1583776994578, 1585254711408, 1585335930155], "review_tcdate": [1583776994578, 1585254711408, 1585335930155], "review_tmdate": [1585695548961, 1585695548689, 1585695548404], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper21/AnonReviewer3"], ["AKBC.ws/2020/Conference/Paper21/AnonReviewer1"], ["AKBC.ws/2020/Conference/Paper21/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["025X0zPfn", "025X0zPfn", "025X0zPfn"], "review_content": [{"title": "Official Blind Review #3", "review": "This work analyses how factual predictions of a Masked Language Model (MLM) such as BERT and RoBERTa are influenced by adding extra context to a query. The paper examines a variety of ways of constructing this context, spanning over settings such as adversarially constructed, generated by a language model, retrieved by supervisedly trained systems, a TF-IDF retrieved baseline and an oracle. The paper finds that enriching a query with a good context can substantially improve performance in the LAMA probe, that analyses factual predictions. Additionally, the results demonstrate that there is considerable headroom for improvement in the retrieval side, evidenced by the results using an oracle retriever. 
Moreover, the paper shows the importance of BERT's Next Sentence Prediciton task, showing that it makes the model robust to adversarial appended contexts.\n\nOverall, the paper is well written and the results are relevant to the community. As argued, completely relying on model's parameters for storing factual knowledge has a series of disadvantages compared to models able to retrieve relevant factual information from a corpus. This is especially relevant when this is done in an unsupervised manner, as it allows proper scaling. The experiments show clear evidence to support the claim that augmenting a query with a proper context greatly enhances performance on a factual knowledge probe. One strong point of this paper is the comparison with multiple strategies for generating contexts.\n\nThe paper claims to differ from previous work by considering a fully unsupervised setting. While it is true that no extra supervision is needed for the B-RET experiments, the exact same point holds for other work such as REALM (Guu et al, 2020), which the paper mentions. REALM is unsupervisedly pre-trained (including the retrieval portion). It would also be nice to see quantitative comparisons with the contexts retrieved by this model, though it's understandable that the authors don't report this, given how recent this work is and that it is not open-source at the time of writing. \n\n\n\nTypos & other minor comments:\nSection 2, Language Models and Probes: It's a bit of a stretch to call modells like T5 a \"variant\" of BERT.\nSection 2, Open-Domain QA: \"areas\" - > area\n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Review", "review": "This paper shows unsupervised performance on LAMA when using various methods to obtain contexts. It is very related to the recent REALM work (which was posted a few days before this submission); both show that transformers perform quite well when given related, retrieved context. This paper does it in a fully unsupervised way, however, and includes some really interesting analysis. I really liked all of the ways the models were probed, including using a generative model to provide context. This at first seemed odd to me, but the authors provide a good justification for why this is an interesting probe in section 4.2.\n\nThe authors themselves noted the limitations of the work in the paper (e.g., single tokens vs. longer answers, mentioned on page 10), so there is little for me to mention as problematic. My one minor quibble is with the \"unsupervised question answering\" section on page 10. In the first sentence of section 6, the authors are careful to state that they are talking about \"factual unsupervised cloze QA\", but there is no such hedging in the unsupervised QA section just above. There really is only evidence here for simple, factual, predicate-argument structure style questions, and using blanket, unqualified terms like \"question answering\" feels like over-claiming.\n\nThis review seems very short to me; mostly I write notes about things that aren't clear, or that could be improved, or aren't true. I didn't really have anything to write about this paper. 
The review is short because the paper is excellent, and I learned a lot from it.", "rating": "9: Top 15% of accepted papers, strong accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Great insights!", "review": "The paper explores how the performance of BERT and DrQA changes as a result of being applied to different text snippets. The paper compares retrieved snippets, generated snippets (NLG), adversarial snippets (answers to different questions), as well as an oracle (using the correct snippet of the extraction from Wikipedia).\n\nThis is a great paper that provides a lot of insights into how the quality of the underlying content affects the prediction quality. \n\nI have very little to complain. I would have appreciated but some significance analysis on the results. I want to point out that TF-IDF is a very weak retrieval model, but I understand that this is not the focus of this paper.", "rating": "9: Top 15% of accepted papers, strong accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["0d84AHrbwkE", "QK8VU8TIVd_", "9VkTi-sncvJ"], "comment_cdate": [1586450761809, 1586450723056, 1586450672263], "comment_tcdate": [1586450761809, 1586450723056, 1586450672263], "comment_tmdate": [1586450761809, 1586450723056, 1586450672263], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2020/Conference/Paper21/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper21/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper21/Authors", "AKBC.ws/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Thanks!", "comment": "We thank the reviewer for their feedback. REALM is indeed pre-trained in an unsupervised/self-supervised way, and we are looking forward to compare to it directly once we have access to the code. The reviewer mentions \u201cother work such as REALM\u201d--we would also love to compare against other work in this list, and if you do have references to more such work we love to learn about them. "}, {"title": "Thanks!", "comment": "We thank the reviewer for their feedback. We do agree our unsupervised QA section also only focuses on simple, factual, predicate-argument structure style questions. We will revise this in the final version of the paper. "}, {"title": "Thanks!", "comment": "We thank the reviewer for their feedback. Indeed, we agree that a significance analysis will help and aim to add it to the final version. We also agree that TF-IDF isn\u2019t the strongest IR model (but surprisingly hard to beat for more complex ones in many cases) and are interested in improving our work along this dimension. "}], "comment_replyto": ["8M0VEr58c", "ZdCIZyh4C9u", "uPkVYkBQqjs"], "comment_url": ["https://openreview.net/forum?id=025X0zPfn&noteId=0d84AHrbwkE", "https://openreview.net/forum?id=025X0zPfn&noteId=QK8VU8TIVd_", "https://openreview.net/forum?id=025X0zPfn&noteId=9VkTi-sncvJ"], "meta_review_cdate": 1588281042090, "meta_review_tcdate": 1588281042090, "meta_review_tmdate": 1588341536486, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper studies how factual predictions of a Masked Language Model (MLM) are influenced by appending additional context via various context construction methods. 
The work presents a set of interesting probes for the analysis, with good justification on the probe design. The paper is well written, clear, and provides good insights on understanding and improving MLM.", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=025X0zPfn&noteId=7lqLAYxGXiP"], "decision": "Accept"}