{"forum": "SkxE1b56TQ", "submission_url": "https://openreview.net/forum?id=SkxE1b56TQ", "submission_content": {"title": "Learning Relational Representations by Analogy using Hierarchical Siamese Networks", "authors": ["Gaetano Rossiello", "Alfio Gliozzo", "Robert Farrell", "Nicolas Fauceglia", "Michael Glass"], "authorids": ["gaetano.rossiello@uniba.it", "gliozzo@us.ibm.com", "robfarr@us.ibm.com", "nicolas.fauceglia@gmail.com", "mrglass@us.ibm.com"], "keywords": ["relation extraction", "textual representation", "siamese network", "one-shot learning", "transfer learning"], "TL;DR": "", "abstract": "We address relation extraction as an analogy problem by proposing a novel approach to learn representations of relations expressed by their textual mentions. Our assumption is that if two pairs of entities belong to the same relation, then those two pairs are analogous. Following this idea, we collect a large set of analogous pairs by matching triples in knowledge bases with web-scale corpora through distant supervision. We leverage this dataset to train a hierarchical siamese network in order to learn entity-entity embeddings which encode relational information through the different linguistic paraphrases expressing the same relation. We evaluate our model on a one-shot learning task, showing a promising capability to generalize to unseen relation types, which makes this approach suitable for automatic knowledge base population with minimal supervision.
Moreover, the model can be used to generate pre-trained embeddings which provide a valuable signal when integrated into an existing neural-based model, outperforming state-of-the-art methods on a downstream relation extraction task.", "pdf": "", "archival status": "", "subject areas": [], "paperhash": "rossiello|learning_relational_representations_by_analogy_using_hierarchical_siamese_networks", "_bibtex": "@inproceedings{\nrossiello2019learning,\ntitle={Learning Relational Representations by Analogy using Hierarchical Siamese Networks},\nauthor={Gaetano Rossiello and Alfio Gliozzo and Robert Farrell and Nicolas Fauceglia and Michael Glass},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=SkxE1b56TQ}\n}"}, "submission_cdate": 1542459612213, "submission_tcdate": 1542459612213, "submission_tmdate": 1581102397732, "submission_ddate": null, "review_id": [], "review_url": [], "review_cdate": [], "review_tcdate": [], "review_tmdate": [], "review_readers": [], "review_writers": [], "review_reply_count": [], "review_replyto": [], "review_content": [], "comment_id": ["BJlXhHSVVE", "SklUaO--4E", "rylZkDZZ4N", "H1x31SIyNN", "BkgZbq9CXV", "Hkey-SjC7V", "S1e6-OcA7N"], "comment_cdate": [1549190570713, 1548978365992, 1548977880679, 1548866787692, 1548818937483, 1548821751132, 1548818436533], "comment_tcdate": [1549190570713, 1548978365992, 1548977880679, 1548866787692, 1548818937483, 1548821751132, 1548818436533], "comment_tmdate": [1549190570713, 1548978365992, 1548977949936, 1548866787692, 1548821875848, 1548821751132, 1548818436533], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper14/Area_Chair1", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper14/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper14/Authors", "AKBC.ws/2019/Conference"],
["AKBC.ws/2019/Conference/Paper14/AnonReviewer1", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper14/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper14/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper14/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Justify high rating", "comment": "Dear reviewer,\n\nCould you please expand on why you believe this paper is among the top 15% of papers?\n\nThanks,\nAC"}, {"title": "New version", "comment": "We thank all reviewers for their insightful comments. A new version of the paper has been uploaded following their suggestions."}, {"title": "RE: Response to rebuttal", "comment": "We understand your decision. Thank you again for your valuable feedback. We uploaded a new version that includes your suggestion. However, we want to point out that we currently cannot analyze possible overlaps in the transfer learning experiment, because we did not log the entity pairs randomly selected during the training phase. We will re-train the model taking your concern into account."}, {"title": "Response to rebuttal", "comment": "Thanks for the detailed response! I think it would be good to add to Section 5.3 that there is no overlap between the three test sets and the training set. The reasons pointed out for the transfer experiment all make sense, but without an exact analysis of whether there is an overlap, and what effect it might have, I will stick to my original score."}, {"title": "Rebuttal", "comment": "We appreciate your review.
Thank you.\n\nWe are planning to pre-initialize our model with contextual word embeddings (BERT/ELMo), and compare them with the same model but using GloVe vectors.\n\nYes, the placeholders and the position embeddings mentioned in 'Conclusion and Future Work' (Section 6) refer to the explicit information about the relational arguments. We are running experiments and will share the results soon."}, {"title": "Rebuttal", "comment": "Thank you for the detailed review. We appreciate the opportunity to respond to your remarks.\n\nRegarding the difference between points (1) and (2): in the first point, we refer to the distribution of the instances for each relation type in KGs such as Wikidata and DBpedia. In fact, the distantly supervised datasets built using those KGs are usually unbalanced. The second point refers to domains where the training sets for RE have to be manually curated. In this case, collecting enough relation examples to train a standard neural-based RE classifier requires considerable effort. These points motivate our work.\n\nThe comparison of our one-shot technique with the zero-shot RE through querification by (Levy et al., 2017), as well as with USchema, is a very interesting idea which deserves further investigation. We agree on this point.\n\nCurrently, the HSN does not differentiate between entity and non-entity tokens. As we pointed out in Section 6, we are trying to differentiate them using placeholders for the entities.\n\nWe added the work by (Liu, 2017) as related work in the new version. Thank you for this suggestion.\n\nWe think of the context vector as a way to help the attention mechanism give higher weights to those words in the mentions which better express a specific relation. For this reason, we use it at the word level. We already tried it at the relation level as you suggested, but without substantial improvements.
Nevertheless, we understand your remark and would like to investigate this aspect in more depth.\n\nThank you for raising the interesting question about the possible overlaps of the relation instances across the three datasets. For the one-shot experiment, as we point out in the 'One-shot trials' paragraph of Section 5.3, we randomly choose 20 entity-pairs from the long-tailed relation types for each dataset in order to prevent overlap. Our inspection did not reveal any overlaps. We did not perform this analysis when we used the pre-trained analogy embeddings for the standard RE task. However, there are three reasons why we believe this might not impact our results: (1) During training of the analogy model, we choose only a few entity-pairs for each relation type in T-REX, and then we compute all combinations among them. Thus, the probability of overlapping with the test sets of NYT-FB and CC-DBP is very low. (2) Even if some entity-pairs seen during training occur in the two test sets, they are highly likely to have different textual mentions, since the corpora differ among the datasets. So, the inputs would not overlap in this case. (3) The analogy embeddings are trained differently than standard neural-based RE embeddings, such as PCNN-KI. Despite these points, we plan to perform a detailed analysis of this issue and, if necessary, re-run the evaluation after filtering any duplicate entity-pairs to address your concern."}, {"title": "Rebuttal", "comment": "Thank you for your feedback. Your comments are well received.\n\nThe related work section has been updated following your suggestion regarding USchema. We also added the references for the column-less and row-less variants. Thank you for underlining this important point.\n\nThe model was trained solely on T-REX (Wikidata/Wikipedia) in order to evaluate its transfer capability across different textual corpora. We discuss this aspect in the 'Results and Discussion' paragraph of Section 5.3.
However, we would like to repeat the experiments, re-training the model on each dataset independently, to see how it performs. This is a good direction for future analysis.\n\nThe reference (Jameel et al., 2017) has been added to the new version."}], "comment_replyto": ["B1lmldphM4", "SkxE1b56TQ", "H1x31SIyNN", "Hkey-SjC7V", "HkxLTQxrMN", "BkxdvJNmzE", "r1g1OqZ5fV"], "comment_url": ["https://openreview.net/forum?id=SkxE1b56TQ&noteId=BJlXhHSVVE", "https://openreview.net/forum?id=SkxE1b56TQ&noteId=SklUaO--4E", "https://openreview.net/forum?id=SkxE1b56TQ&noteId=rylZkDZZ4N", "https://openreview.net/forum?id=SkxE1b56TQ&noteId=H1x31SIyNN", "https://openreview.net/forum?id=SkxE1b56TQ&noteId=BkgZbq9CXV", "https://openreview.net/forum?id=SkxE1b56TQ&noteId=Hkey-SjC7V", "https://openreview.net/forum?id=SkxE1b56TQ&noteId=S1e6-OcA7N"], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": null}