{"forum": "Sye0lZqp6Q", "submission_url": "https://openreview.net/forum?id=Sye0lZqp6Q", "submission_content": {"title": "Combining Long Short Term Memory and Convolutional Neural Network for Cross-Sentence n-ary Relation Extraction", "authors": ["Angrosh Mandya", "Danushka Bollegala", "Frans Coenen", "Katie Atkinson"], "authorids": ["angrosh@liverpool.ac.uk", "danushka.bollegala@liverpool.ac.uk", "coenen@liverpool.ac.uk", "katie@liverpool.ac.uk"], "keywords": ["n-ary relation extraction", "information extraction"], "abstract": "We propose in this paper a combined model of Long Short Term Memory and Convolutional Neural Networks (LSTM_CNN) model that exploits word embeddings and positional embeddings for cross-sentence n-ary relation extraction. The proposed model brings together the properties of both LSTMs and CNNs, to simultaneously exploit long-range sequential information and capture most informative features, essential for cross-sentence n-ary relation extraction. The LSTM_CNN model is evaluated on standard datasets on cross-sentence n-ary relation extraction, where it significantly outperforms baselines such as CNNs, LSTMs and also a combined CNN_LSTM model. The paper also shows that the proposed LSTM_CNN model outperforms the current state-of-the-art methods on cross-sentence n-ary relation extraction.", "pdf": "/pdf/c817c475891868c7c1a3d36e53623f42300a5785.pdf", "archival status": "Archival", "subject areas": ["Information Extraction", "Applications: Biomedicine"], "paperhash": "mandya|combining_long_short_term_memory_and_convolutional_neural_network_for_crosssentence_nary_relation_extraction", "_bibtex": "@inproceedings{\nmandya2019combining,\ntitle={Combining Long Short Term Memory and Convolutional Neural Network for Cross-Sentence n-ary Relation Extraction},\nauthor={Angrosh Mandya and Danushka Bollegala and Frans Coenen and Katie Atkinson},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=Sye0lZqp6Q}\n}"}, "submission_cdate": 1542459638461, "submission_tcdate": 1542459638461, "submission_tmdate": 1580939649247, "submission_ddate": null, "review_id": ["SJxnuy1TZV", "B1eBHYjFgV", "ByxYGXmVME"], "review_url": ["https://openreview.net/forum?id=Sye0lZqp6Q¬eId=SJxnuy1TZV", "https://openreview.net/forum?id=Sye0lZqp6Q¬eId=B1eBHYjFgV", "https://openreview.net/forum?id=Sye0lZqp6Q¬eId=ByxYGXmVME"], "review_cdate": [1546608499915, 1545349436935, 1547084560971], "review_tcdate": [1546608499915, 1545349436935, 1547084560971], "review_tmdate": [1550269647297, 1550269647077, 1550269646857], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Sye0lZqp6Q", "Sye0lZqp6Q", "Sye0lZqp6Q"], "review_content": [{"title": "Nice experimental study", "review": "The paper addresses cross-sentence n-ary relation extraction. The authors propose a model consisting of an LSTM layer followed by an CNN layer and show that it outperforms other model choices. The experiments are sound and complete and the presented results look convincing. The paper is well written and easy to follow. \nIn total, it presents a nice experimental study.\n\nSome unclear issues / questions for the authors:\n- \"The use of multiple filters facilitates selection of the most important feature for each feature map\": What do you mean with this sentence? 
Don't you get another feature map for each filter? Isn't the use of multiple filters rather to capture different semantics within the sentence?\n- \"The task of predicting n-ary relations is modeled both as a binary and multi-class classification problem\": How do you do that? Are there different softmax layers? And if yes, how do you decide which one to use?\n- Table 1/2: How can you draw conclusions about the performance on binary and ternary relations from these tables? I can only see the distinction of single sentence and cross sentence there.\n- Table 3: The numbers for short distance spans (mostly 20.0) look suspicious to me. What is the frequency of short/medium/long distance spans in the datasets? Are they big enough to be able to draw any conclusions from them?\n- You say that CNN_LSTM does not work because after applying the CNN all sequential information is lost. But how can you apply an LSTM afterwards then? Is there any recurrence at all? (The sequential information would not be lost after the CNN if you didn't apply pooling. Have you tried that?)\n- Your observation that more than two positional embeddings decrease the performance is interesting (and unexpected). Do you have any insights on this? Does the model pay attention at all to the second of three entities? What would happen if you simply deleted this entity or even some context around this entity (i.e., perform an adversarial attack on your model)?\n\nOther things that should be improved:\n- sometimes words are in the margin\n- there are some typos, e.g., \"mpdel\", \"the dimensions ... was set... and were initialised\", \"it interesting\"", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Good paper with small modeling improvements but thorough evaluations", "review": "The paper presents a method for n-ary cross-sentence relation extraction.\nGiven a list of entities and a list of sentences, the task is to identify which relation (from a predefined list) is described between the entities in the given sentences.\nThe proposed model stacks a CNN on an LSTM to capture long-range dependencies in the text, and is shown to be effective, either beating or equalling the state-of-the-art on two datasets for the task.\n\nOverall, I enjoyed reading the paper, and would like to see it appear in the conference.\nWhile the proposed model is not very novel, and was shown effective on other tasks such as text classification or sentiment analysis, this is the first time it was applied for this specific task.\nIn addition, I appreciate the additional evaluations, which ablate the different parts of the model, analyze its performance by length between entities, compare it with many variations as baselines and against state-of-the-art for the task.\n\nMy main comments are mostly in terms of presentation - see below.\n\n\nDetailed comments:\n\nIn general, I think that wording can be tighter, and some repetitive information can be omitted. For example, Section 4 could be condensed to highlight the main findings, instead of splitting them across subsections.\nI think that Section 3.1.2 (\u201cPosition Features\u201d) would benefit from an example showing an input encoding.\nTable 5 shows up in the references. \n\nMinor comments and typos:\n\nText on P. 9 overflows the page margins.\nI think that Table 3 would be a little easier to read if the best performance in each column were highlighted in some manner.\nSection 2, p. 
3: \u201cmpdel\u201d -> model.\nPerhaps using \u201cFigure 1\u201d instead of \u201cListing 1\u201d is more consistent with *ACL-like papers?\n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Review of Combining Long Short Term Memory and Convolutional Neural Network for Cross-Sentence n-ary Relation Extraction", "review": "The paper presents an approach to cross-sentence relation extraction that combines LSTMs and convolutional neural network layers with word and position features. Overall the choices made seem reasonable, and the paper includes some interesting analysis / variations (e.g., showing that an LSTM layer followed by a CNN is a better choice than the other way around).\n\nEvaluation is performed on two datasets, Quirk and Poon (2016) and a chemical-induced disease dataset. The paper compares a number of model variations, but there don't appear to be any comparisons to state-of-the-art results on these datasets. The paper could benefit from comparisons to SOTA on these or other datasets.", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["HJxPMcj-4E", "SyeSjCqW4E", "Hyg9iQXvmN"], "comment_cdate": [1549019663120, 1549016732657, 1548329889696], "comment_tcdate": [1549019663120, 1549016732657, 1548329889696], "comment_tmdate": [1549019663120, 1549016732657, 1548329928252], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper25/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper25/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper25/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Answers to comments", "comment": "We thank the reviewer for the comments. The following is our response.\n\nQ1: In general, I think that wording can be tighter, and some repetitive information can be omitted. For example, Section 4 could be condensed to highlight the main findings, instead of splitting them across subsections.\nAns: Since we wanted to discuss various aspects of the model, we have used different sub-sections. The title of each sub-section indicates the key aspect of the model that we wanted to highlight and discuss. We feel that condensing this material into one section would make the paper more difficult to read.\n\nQ2: I think that Section 3.1.2 (\u201cPosition Features\u201d) would benefit from an example showing an input encoding.\nAns: We have updated Section 3.1.2 to provide an example of position embedding.\n\nQ3: Table 5 shows up in the references. \nAns: Corrected. Table 5 has been moved to the next page.\n\nQ4: Typos and margins\nAns: The typos and margins have been corrected.\n\n"}, {"title": "Clarification for some unclear issues", "comment": "We thank the reviewer for the comments.\n\nQ1. - \"The use of multiple filters facilitates selection of the most important feature for each feature map\": What do you mean with this sentence? Don't you get another feature map for each filter? Isn't the use of multiple filters rather to capture different semantics within the sentence?\nAns: It is true that the use of multiple filters facilitates capturing different semantics within the sentence. We have corrected this section on max-pooling in the paper.\n\nQ2. 
\"The task of predicting n-ary relations is modeled both as a binary and multi-class classification problem\": How do you do that? Are there different softmax layers? And if yes, how do you decide which one to use?\nAns: The task of predicting n-ary relations as a binary and multi-classification problem is specific to the datasets that are used in the paper. While using the Peng et al. (2016) dataset, we have a multi-class classification problem. However, when we are working with Chemcial-induced dataset, we have a binary classification problem, to predict whether their exists a binary relation between the entities both in single sentence and across sentences. Therefore, when working with Peng et al dataset, we use softmax function with categorical cross entropy loss to output probability over the five output classes (resistance; resistance or no-response; response; sensitivity; and none) and employ softmax layer with binary cross-entropy loss to predict probability over two classes.\n\nQ3: Table 1/2: How can you draw conclusions about the performance on binary and ternary relations from these tables? I can only see the distinction of single sentence and cross sentence there.\nAns: As indicated in the caption for Tables 1 and 2, while Table 1 specifically deals with ternary relations involving ternary relations between drug-gene-mutation, Table 2 deals with binary relations involving drug-gene entities. However, these binary and ternary relations exists both in single sentences and across sentences. Thus the distinction between single and across sentences is made. Given this aspect, the conclusions are drawn for the performance of binary and ternary relations in single sentences and across sentences.\n\nQ4: - Table 3: The numbers for short distance spans (mostly 20.0) look suspicious to me. What is the frequency of short/medium/long distance spans in the datasets? Are they big enough to be able to draw any conclusions from them?\nAns: The reviewer is right in noting that there are very few instances for short distance spans. This is also the reason why the performance of different models are lower for short distance spans, compared to medium and long distance spans.\n\nQ5: - You say that CNN_LSTM does not work because after applying the CNN all sequential information is lost. But how can you apply an LSTM afterwards then? Is there any recurrence at all? (The sequential information would not be lost after the CNN if you didn't apply pooling. Have you tried that?)\nAns: We had conducted experiments removing the max-pooling layer and passing the features to an LSTM layer. However, this did not help in improving the performance.\n\nQ6: Your observation that more than two positional embeddings decrease the performance is interesting (and unexpected). Do you have any insights on this? Does the model pay attention at all to the second of three entities? What would happen if you simply deleted this entity or even some context around this entity (i.e., perform an adversarial attack on your model)?\nAns: Given the experimental results, we observe that adding positional embeddings for more than two entities results in a decrease in the performance. We have not conducted further experiments such as removing the second entity or context around the second entity. Removing the second entity or the context around it would certainly result in a poor performance as we would be disturbing the sequential information. 
However, it is worthwhile to conduct such experiments, and we intend to do so in the future.\n\nQ7: Typos\nAns: Thanks for identifying the typos. The typos have been corrected."}, {"title": "Comparison with state-of-the-art results", "comment": "We thank the reviewer for the comments.\n\nThe paper, in addition to evaluating a number of model variations, provides a comprehensive comparison of the proposed model against state-of-the-art results on both datasets. The comparison of results for the Quirk and Poon dataset is provided in Table 4, with the explanation provided in Section 4.4.6. Similarly, the comparison of results for the Chemical-induced disease dataset is provided in Table 5, and the explanation is provided in Section 4.4.6. As explained in Section 4.4.6, the proposed model clearly outperforms the state-of-the-art results on both datasets.\n\nFurther, since this research specifically considers n-ary relation extraction, these two datasets, which involve binary and ternary relation instances, were chosen. This is the reason we do not perform evaluation on standard intra-sentence relation extraction datasets such as the SemEval 2010 Task 8 dataset. Further, this is in line with previous work (Peng et al., 2016), which likewise does not evaluate on the SemEval 2010 Task 8 relation extraction dataset."}], "comment_replyto": ["B1eBHYjFgV", "SJxnuy1TZV", "ByxYGXmVME"], "comment_url": ["https://openreview.net/forum?id=Sye0lZqp6Q&noteId=HJxPMcj-4E", "https://openreview.net/forum?id=Sye0lZqp6Q&noteId=SyeSjCqW4E", "https://openreview.net/forum?id=Sye0lZqp6Q&noteId=Hyg9iQXvmN"], "meta_review_cdate": 1549923402896, "meta_review_tcdate": 1549923402896, "meta_review_tmdate": 1551128386472, "meta_review_ddate ": null, "meta_review_title": "Simple method that works well on interesting problem", "meta_review_metareview": "The presented cross-sentence relation extraction method is simple but well motivated. The experiments show that it works well, setting a new SOTA on the two relevant benchmarks. The ablations and extended analyses are also well done and extensive. Overall, this is a clear paper with a solid contribution that we would all learn something by reading.", "meta_review_readers": ["everyone"], "meta_review_writers": [], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Sye0lZqp6Q&noteId=S1l7UNOyBV"], "decision": "Accept (Poster)"}