AMSR / conferences_raw / akbc20 / AKBC.ws_2020_Conference_iHXV8UGYyL.json
{"forum": "iHXV8UGYyL", "submission_url": "https://openreview.net/forum?id=iHXV8UGYyL", "submission_content": {"keywords": ["Entity linking", "Pre-training", "Wikification"], "TL;DR": "We achieve state of the art on CoNLL and TAC-KBP 2010 with a four layer transformer", "authorids": ["AKBC.ws/2020/Conference/Paper83/Authors"], "title": "Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking", "authors": ["Anonymous"], "pdf": "/pdf/5b3c66a2a64a2cebdf77bc867a4f5c31674406d3.pdf", "subject_areas": ["Information Extraction", "Machine Learning"], "abstract": "In this work, we present an entity linking model which combines a Transformer architecture with large scale pretraining from Wikipedia links. Our model achieves the state of the art on two commonly used entity linking datasets: 96.7% on CoNLL and 94.9% on TAC-KBP. We present detailed analyses to understand what design choices are important for entity linking, including choices of negative entity candidates, Transformer architecture, and input perturbations. Lastly, we present promising results on more challenging settings such as end-to-end entity linking and entity linking without in-domain training data.", "paperhash": "anonymous|empirical_evaluation_of_pretraining_strategies_for_supervised_entity_linking"}, "submission_cdate": 1581705817867, "submission_tcdate": 1581705817867, "submission_tmdate": 1581709232804, "submission_ddate": null, "review_id": ["qg7mXNz7rh4", "R_p7J_x-ID", "AArX9k7PJGx"], "review_url": ["https://openreview.net/forum?id=iHXV8UGYyL&noteId=qg7mXNz7rh4", "https://openreview.net/forum?id=iHXV8UGYyL&noteId=R_p7J_x-ID", "https://openreview.net/forum?id=iHXV8UGYyL&noteId=AArX9k7PJGx"], "review_cdate": [1585321107141, 1585370361637, 1585496668875], "review_tcdate": [1585321107141, 1585370361637, 1585496668875], "review_tmdate": [1585695495768, 1585695495510, 1585695495242], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper83/AnonReviewer2"], ["AKBC.ws/2020/Conference/Paper83/AnonReviewer3"], ["AKBC.ws/2020/Conference/Paper83/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["iHXV8UGYyL", "iHXV8UGYyL", "iHXV8UGYyL"], "review_content": [{"title": "thorough analysis of pre-training strategies for transformer-based entity linking", "review": "The paper describes an evaluation of several pre-training strategies for the task of entity linking, using the AIDA and TAC-KBP benchmarks. In particular, the authors look at the impact of entity candidate selection strategies, adding noise during pre-training, and context selection methods. The model employed for entity disambiguation is a 4-layer transformer for the language representation, with an MLP final layer to perform disambiguation. The analysis of the pre-training strategies is detailed, and could be interesting for others using the transformer architecture to perform entity linking. Minor issue, but the paper is missing a conclusion section - this could be used to discuss how these results can generalize to other methods for entity linking.", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Solidly done piece of empirical work", "review": "This paper investigates the use of a simple architecture for entity disambiguation: encode the mention and its context with BERT, use an MLP over the mention's fenceposts to compute an embedding, then compare that embedding with embeddings of entity candidates and take the one with the highest dot product. Notably, it uses a transformer pre-trained on Wikipedia to do entity resolution, but does *not* use the BERT model or its pre-trained parameters directly. The paper deals with several design decisions along the way: how to pre-train this model on Wikipedia, how to generate candidates at train and test time, whether or not to mask the input as in BERT, and other hyperparameters. Results show state-of-the-art performance on CoNLL (with a good candidate set) and TAC-KBP, as well as good performance on end-to-end entity linking (detecting and linking mentions).\n\nThis paper isn't exceptionally creative. However, it's a solidly done piece of empirical work that in my opinion should exist in the literature. While a lot of work has moved onto zero-shot settings (Ling et al./Wu et al./Logeswaran et al. that the authors cite, plus Onoe and Durrett \"Fine-Grained Entity Typing for Domain Independent Entity Linking\") or other embedding-based formulations (Mingda Chen et al. \"EntEval: A Holistic Evaluation Benchmark for Entity Representations\"), a strong, conventional, up-to-date supervised baseline should exist in the literature and currently doesn't.\n\nThe one idea here that seems unconventional is foregoing BERT-based pre-training and only pre-training on the entity linking task itself. This is an interesting choice but I'm not too surprised it works well: Wikipedia is already pretty big, and this approach lets you learn good entity embeddings in the same space as the transformer encoder.\n\nThe experiments in this paper are quite well-done and touch on a lot of issues surrounding how the system is trained. I'm glad to see the authors use the TAC-KBP data and start to make this more standard -- it would've been nice to see other datasets like WikilinksNED or some of the older/smaller datasets from Ratinov et al. (2011), \"Local and Global Algorithms for Disambiguation to Wikipedia\". The CoNLL data is weird and limited in scope. Nevertheless, achieving state-of-the-art on this well-worn dataset is impressive.\n\nTable 4 was probably the most surprising part of the paper to me. It's a little strange that the OURS candidate selection method works poorly on TAC-KBP. It basically seems like a union of phrase table and page, right? I understand the paper's high-level point that random is somehow closer to the true TAC-KBP task, but this argument seems handwavy and doesn't seem like it should make such a large difference.\n\nBERT-style noise is also surprisingly effective during pre-training. The paper's interpretation of this makes sense.\n\nOverall, I feel like this paper deserves to be published: the results will be a good benchmark for future efforts and I can imagine other researchers using this as a starting point.", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "useful analysis of pretraining strategies for supervised entity linking", "review": "This paper presents an empirical study of pretraining strategies for supervised entity linking. Previous works either focus on constructing general-purpose entity representations or zero-shot entity linking and do not fully explore pretraining. The paper is well written and is easy to follow. I think the findings in the paper should be of interest to the AKBC community.\n\nThe proposed model achieves competitive performance even without domain-specific tuning. A detailed empirical analysis of negative candidate selection, noise addition, and context selection is presented. The proposed model is able to perform end-to-end entity linking with simple modeling and low inference cost.\n\nMissing comparison with related work on end-to-end entity linking with BERT:\nInvestigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking, Samuel Broscheit, CoNLL'19\n\nA figure / running example to illustrate where the demonstrated benefits of pretraining come from over prior art will strengthen the paper.\n\nA discussion on potential limitations of pretraining will also be informative.\n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["3E3xZSRru_8", "vUuoo5axB1h", "CoARZ7meq0", "bWYrz2DJcPY"], "comment_cdate": [1586383404735, 1586383383596, 1586383359868, 1586383332767], "comment_tcdate": [1586383404735, 1586383383596, 1586383359868, 1586383332767], "comment_tmdate": [1586383404735, 1586383383596, 1586383359868, 1586383332767], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2020/Conference/Paper83/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper83/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper83/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper83/Authors", "AKBC.ws/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response", "comment": "Thank you for your review. We will incorporate your suggestion about an expanded discussion and conclusion in our final version."}, {"title": "Response", "comment": "Thank you for your comments. With regard to Table 4, since no alias table is used for inference in the TAC-KBP dataset, we believe a random sample of negatives is closer to the full softmax used at inference time, whereas the biased sample introduced by candidates is better suited to situations where an alias table is present. However, we agree that further investigation would be required to confirm this intuition."}, {"title": "Response", "comment": "Thank you for your comments and for pointing us to this related work. We will add an explicit comparison to Broscheit\u2019s work and incorporate your suggestions for improved clarity in our final version."}, {"title": "Response", "comment": "Thank you for pointing us to this work; we will be sure to include a comparison and discussion in our final paper. We do finetune our model on each of the final datasets -- this will be made clearer."}], "comment_replyto": ["qg7mXNz7rh4", "R_p7J_x-ID", "AArX9k7PJGx", "0cuwD50xl9q"], "comment_url": ["https://openreview.net/forum?id=iHXV8UGYyL&noteId=3E3xZSRru_8", "https://openreview.net/forum?id=iHXV8UGYyL&noteId=vUuoo5axB1h", "https://openreview.net/forum?id=iHXV8UGYyL&noteId=CoARZ7meq0", "https://openreview.net/forum?id=iHXV8UGYyL&noteId=bWYrz2DJcPY"], "meta_review_cdate": 1588282172316, "meta_review_tcdate": 1588282172316, "meta_review_tmdate": 1588341537996, "meta_review_ddate": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "All reviewers agreed that the paper has some strengths, with merits outweighing (a few) flaws.\n\nThis paper investigates the use of a simple architecture for entity disambiguation, while exploring several design decisions along the way. Results show state-of-the-art performance on CoNLL (with a good candidate set) and TAC-KBP, as well as good performance on end-to-end entity linking (detecting and linking mentions).\n\nThe strengths of this paper are: (1) competitive performance without domain-specific tuning, (2) extremely well done experiments touching on many related issues (negative candidate selection, noise addition, and context selection). One of the reviewers describes it as a \"solidly done piece of empirical work\", which \"will be a good benchmark for future efforts\".\n\nThere are two drawbacks of the paper. (1) The techniques in this paper by themselves aren't novel; in fact, one cannot attribute a strong technical contribution to this paper. So, if one has to accept the paper, it has to be for the experiments and analysis and not for novelty. (2) There is a related paper from CoNLL'19. The reviewers liked the experiments in this paper better than the CoNLL paper's, as they are much more thorough across a wider range of experimental settings.\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=iHXV8UGYyL&noteId=1516yudxgI"], "decision": "Accept"}