{"forum": "shkmWLRBXH", "submission_url": "https://openreview.net/forum?id=shkmWLRBXH", "submission_content": {"keywords": ["Knowledge Graph Embedding", "Isomorphism", "Rules", "Common Sense", "Implication Rules", "Knowledge Graph", "Isomorphic Embedding", "Semantics Mining", "Rule Mining"], "TL;DR": "We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space in an explainable, robust, and geometrically coherent way.", "authorids": ["AKBC.ws/2020/Conference/Paper87/Authors"], "title": "TransINT: Embedding Implication Rules in Knowledge Graphs with Isomorphic Intersections of Linear Subspaces", "authors": ["Anonymous"], "pdf": "/pdf/d64925c5300bfb06bd322151792fce8d79df3d89.pdf", "subject_areas": ["Knowledge Representation, Semantic Web and Search", "Applications", "Relational AI"], "abstract": "Knowledge Graphs (KG), composed of entities and relations, provide a structured representation of knowledge. For easy access to statistical approaches on relational data, multiple methods to embed a KG into f(KG) \u2208 R^d have been introduced. We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space. Given implication rules, TransINT maps set of entities (tied by a relation) to continuous sets of vectors that are inclusion-ordered isomorphically to relation implications. With a novel parameter sharing scheme, TransINT enables automatic training on missing but implied facts without rule grounding. On a benchmark dataset, we outperform the best existing state-of-the-art rule integration embedding methods with significant margins in link Prediction and triple Classification. The angles between the continuous sets embedded by TransINT provide an interpretable way to mine semantic relatedness and implication rules among relations.", "paperhash": "anonymous|transint_embedding_implication_rules_in_knowledge_graphs_with_isomorphic_intersections_of_linear_subspaces"}, "submission_cdate": 1581705819730, "submission_tcdate": 1581705819730, "submission_tmdate": 1587149271323, "submission_ddate": null, "review_id": ["rJDzMd54fIu", "3caY-2ftqJ1", "LDNrKRLwD0t"], "review_url": ["https://openreview.net/forum?id=shkmWLRBXH¬eId=rJDzMd54fIu", "https://openreview.net/forum?id=shkmWLRBXH¬eId=3caY-2ftqJ1", "https://openreview.net/forum?id=shkmWLRBXH¬eId=LDNrKRLwD0t"], "review_cdate": [1585346920498, 1585364468829, 1585522166020], "review_tcdate": [1585346920498, 1585364468829, 1585522166020], "review_tmdate": [1585695491621, 1585695491359, 1585695491091], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper87/AnonReviewer1"], ["AKBC.ws/2020/Conference/Paper87/AnonReviewer2"], ["AKBC.ws/2020/Conference/Paper87/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["shkmWLRBXH", "shkmWLRBXH", "shkmWLRBXH"], "review_content": [{"title": "Good motivation but evaluation on more datasets would be more convincing", "review": "This work proposes a new knowledge graph embedding method in the Trans- family, that ensures the implication ordering of relations in the embedding space. The evaluation of the proposed model is done on a single dataset - FB122, in which it outperforms previous Trans models.\n\nPros:\n- The proposed method is well motivated and described in detail. 
\n- In the current evaluation, the proposed method outperforms the previous model in the same family by a large margin. \n- The resulting embeddings seem to encode some semantic relatedness, which can be considered interpretable. Here the claim of verifying the hypothesis could be backed with more examples than the few in Table 3 - by placing them in the appendix rather than \u201cour code repository\u201d.\n- It seems that the code will be published, although this statement is not explicitly made. \n\nCons:\n- The evaluation is done only on one dataset, while related work evaluates their methods on other datasets such as WN18 and NELL. Why aren\u2019t other standard datasets considered - such as WN18? Are there limitations of the model that are not discussed, or what? \n- More details on the implementation/evaluation would be nice:\n-- In terms of optimization, only the standard loss is discussed. In the section \u201cAutomatic Grounding of Positive Triples\u201d - how exactly are the implication constraints applied during training? \n-- What does \u201cwe create 100 mini-batches of the training set\u201d mean? What is the size of FB122, and why exactly 100 mini-batches?\n-- What framework is used for the implementation?\n-- What strategy and how many configurations are used for finding the optimal hyper-parameters?\n\nMinor: \nThe distance between Tables 2 and 3 does not seem natural and makes reading hard. The column in Table 3 is called \u201cimb\u201d, but there is enough space to write the whole name. \n\n\n", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Interesting angle but not very clear and not enough evidence", "review": "The paper proposes a new knowledge graph embedding technique that builds on top of TransH and incorporates implication ordering. Results show that it outperforms the previous state-of-the-art method on link prediction and triple classification tasks on FB122. The idea and the new angle of looking at TransH are interesting, but the paper needs a lot of revision in terms of clarity and formatting. More experiments could also be included to better demonstrate its strengths.\n\nPros:\n- The use of the idea that relations can be viewed as sets of pairs of entities is intriguing and different from most previous KG embedding approaches\n- The new angle provided for TransH embedding is also worth learning\n\nCons:\n- Readability of this paper is low, due to intentionally adding lots of \"-vspace\" (or equivalent) to fit in the page limit. While I understand the limit is mandatory, many of the sentences could be rephrased and figures could be rearranged instead of removing white space between lines/formulas/figures/tables, which makes it very hard for readers to follow. \n- The paper is not very clearly written. Some sentences, such as the last sentence in Intro paragraph 1 and the second sentence of Intro paragraph 2, are not clear to me even after finishing the paper. Also, some figures are not very clearly illustrated and the same goes for the captions, especially Figures 2 and 3. Typos and unclosed parentheses also exist.\n- The experiments, although showing promising results, may not be comprehensive enough to show the strength of the proposed model. 
They only cover a subset of the KG embedding tasks and compare TransINT with only a few models.\n\nIn summary, this paper needs to be improved in terms of clarity, readability, and the strength of its experiments, and is currently not ready for publication at AKBC. ", "rating": "4: Ok but not good enough - rejection", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Good focused contribution, writing can be significantly improved", "review": "This work describes a new approach to learning KG embeddings by preserving the \"implication ordering\" among relations in the embedding space. The paper provides a cute new interpretation of TransH and then uses it to extend TransH to learn entity and relation representations. The crucial novelty of the approach is to map relations into linear subspaces whose parameters are tied using the implication ordering of relations.\n\nThe paper has clear value and the experiments are sound (although only on one dataset). I find it hard to evaluate its novelty at the moment because there is a clear lack of discussion of prior work in this paper, but I like it overall.\n\nQuestions for the authors: Is this the first method that uses the implication ordering of relations for KG representation learning?\nWhy is the method compared only to Trans-based rule integration methods? Why are the other methods not discussed and compared against?\n\nI think the related work section of this paper needs to be significantly expanded and the method should be contrasted with more works, both in the experiments as well as in the related work discussion.\n\nAlso, why are all the evaluations only on the small FB122 dataset? Why are larger datasets not considered?\n\nWhere does the ordering of relations come from? I could not find this in the paper.\n\nThe paper is very compressed - the spacing between many lines is very low. I would recommend cutting down on sections 1, 2.1 and 2.2 and expanding on the rest.\n\nWhat is the number of parameters in this model? How does it compare to the original TransH method? How does the performance change with less/more data?\n\npage 6: what is c_i? why is this needed?\n\nThe paper describes a hard parameter sharing scheme. Can we compare this to softer constraints or other variations of parameter sharing as an ablation study?\n\nI find the claim of the \"angles between the continuous sets as interpretable\" suspicious. How is this more interpretable than other methods which use vector operations or distances for the same? 
Particularly, how would we quantify this claim?\n\nTypos:\npage 5:\n r_i instead of r_1\nscalars instead of scalar\npage 7:\nignores instead of ignore\npage 9:\nthe tables should have TransINT^G and TransINT^NG\npage 10:\nsimilarity", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["bQnZleO8cwo", "FwwazwiNqME", "FL-nW1_MgxS", "dLQLB2s9guW", "klZQVsxKL5s"], "comment_cdate": [1588187994906, 1587149757167, 1587149690267, 1587149360403, 1587149645952], "comment_tcdate": [1588187994906, 1587149757167, 1587149690267, 1587149360403, 1587149645952], "comment_tmdate": [1588187994906, 1587149757167, 1587149690267, 1587149654694, 1587149645952], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2020/Conference/Paper87/AnonReviewer1", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper87/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper87/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper87/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper87/Authors", "AKBC.ws/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Updates highlighting", "comment": "Thank you for the updates and for addressing my questions and those of the other reviewers. However, it would have been nice to highlight the changes using a blue font or some other way, because it is hard to see them. "}, {"title": "Thank you for your feedback. Here are our responses.", "comment": "Thank you very much for giving insightful feedback on our work. We have updated the paper with new experiments (significantly outperforming SimplE+ on NELL Sport/Location) and better readability - please check the common response in the comment above and our updated paper! We have also fixed the \\vspace\u2019s and the unnatural spacing between tables. Here are our responses to your questions. \n\n1. In terms of optimization, only the standard loss is discussed. In the section \u201cAutomatic Grounding of Positive Triples\u201d - how exactly are the implication constraints applied during training? \n\nIn Section 4.2, we had meant that our parameter sharing initialization automatically applies the implication constraints throughout all stages of training, which enables the use of the standard loss with no special treatment during training. We agree with you that this point was not explained well enough, and we have improved the clarity of Section 4.2. \n\n2. What does \u201cwe create 100 mini-batches of the training set\u201d mean? What is the size of FB122, and why exactly 100 mini-batches?\n\nFB122 has 9738 entities and 122 relations. Its training, validation, and test sets respectively have 91638, 9595, and 11243 (head, relation, tail) entries. In creating 100 mini-batches, we followed the protocol of KALE [1], which experimented on the same dataset.\n\n3. What framework is used for the implementation?\n\nWe used PyTorch. We now mention this at the beginning of the \u201cExperiments\u201d section. \n\n4. What strategy and how many configurations are used for finding the optimal hyper-parameters?\n\nWe made this information clearer in Section 5.1. \n\n[1]: Guo, S.; Wang, Q.; Wang, L.; Wang, B.; and Guo, L. 2016. Jointly embedding knowledge graphs and logical rules. In EMNLP, 192\u2013202.\n"}, {"title": "Thank you for your feedback. 
We updated the paper with more experiments.", "comment": "Thank you very much for giving insightful feedback on our work. We have updated the paper with new experiments (significantly outperforming SimplE+ on NELL Sport/Location) and better readability - please check the common response in the comment above and our updated paper! \n"}, {"title": "Thank you for your feedback. We revised our draft. ", "comment": "Dear reviewers, \n\nThank you very much for providing detailed and insightful feedback on our work. We have updated the paper with new results on the NELL Sport/Location dataset [1], with a comparison to SimplE+ [2], a state-of-the-art knowledge graph embedding method that integrates implication rules. Our method TransINT again outperformed SimplE+ by a large margin on all metrics. Furthermore, we have also made the paper more readable by improving the clarity of sentences and removing the \u201c-vspace\u201d\u2019s. \n\nWe would be very thankful if you could check the new version of the paper and see whether the points above are appropriately addressed. Again, we thank you for your detailed feedback. \n\n[1]: Quan Wang, Bin Wang, and Li Guo. 2015. Knowledge base completion using embeddings and rules. In Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI\u201915). AAAI Press, 1859\u20131865.\n[2]: Fatemi, Bahare et al. \u201cImproved Knowledge Graph Embedding using Background Taxonomic Information.\u201d AAAI (2019).\n"}, {"title": "Thank you for your feedback. Here are our responses.", "comment": "Thank you very much for giving insightful feedback on our work. Here are our responses to your questions. We have updated the paper with new experiments (significantly outperforming SimplE+ on NELL Sport/Location) and better readability - please check the common response in the comment above and our updated paper! We have also fixed the typos/mistakes that you (thankfully) caught and mentioned. \n\n1. Is this the first method that uses the implication ordering of relations for KG representation learning?\n\nAnswer: No; works such as [1,2,3,4,5] have previously used the implication ordering of relations for KG representation learning. We improved the \u201cRelated Work\u201d section by including more previous works that have used implication ordering, stating our method\u2019s advantages over them, and enhancing the readability. \n\n2. Where does the ordering of relations come from? I could not find this in the paper. \n\nTransINT, like other rule-integrating KG embedding methods [1, 2], assumes that pre-defined rules (such as is_father_of -> is_parent_of) are given together with the knowledge graph training data. We have made this clearer by adding \u201cgiven implication rules\u201d before explaining TransINT\u2019s mechanism in the abstract and the introduction.\n\n3. What is the number of parameters in this model? How does it compare to the original TransH method? How does the performance change with less/more data?\n\nTransINT always has the same number of parameters as TransH, because both TransINT and TransH assign two d-dimensional vectors to every relation and one d-dimensional vector to every entity. We explained this with better readability in Section 4.1. \n\nBecause we tested our models against benchmark datasets, we did not do an ablation study by changing the amount of training data. 
However, NELL sports/locations (a dataset on which we newly included results) is a much smaller dataset than FB122; TransINT had no problem learning from this smaller dataset. \n\n4. The paper describes a hard parameter sharing scheme. Can we compare this to softer constraints or other variations of parameter sharing as an ablation study?\n\nWhile the question is valid and insightful, we do not think an ablation study with softer constraints is possible for our method. However, the methods we compare with (KALE [1] and SimplE+ [2]) are both methods with softer constraints, and we outperform both by significant gaps in our experiments. In particular, SimplE+ is based on SimplE, a very recent and powerful knowledge graph embedding method, but it still underperforms compared to our method. We think this is one piece of evidence that hard constraints can be very powerful. \n\n5. I find the claim of the \"angles between the continuous sets as interpretable\" suspicious. How is this more interpretable than other methods which use vector operations or distances for the same? Particularly, how would we quantify this claim?\n\nWe agree that we cannot quantify or claim that the angles between our linear subspaces are a more powerful/interpretable measure than other kinds of operations used for semantic-relatedness mining in other methods. However, we still think that our angle-based approach has merits in two senses. First, the angle between any two relations can be efficiently calculated with the inner product of their orthogonal subspaces. Furthermore, the angle is a geometrically intuitive measure. \n\n[1]: Guo, S.; Wang, Q.; Wang, L.; Wang, B.; and Guo, L. 2016. Jointly embedding knowledge graphs and logical rules. In EMNLP, 192\u2013202.\n[2]: Fatemi, Bahare et al. \u201cImproved Knowledge Graph Embedding using Background Taxonomic Information.\u201d AAAI (2019).\n[3]: Ivan Vendrov, Jamie Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. CoRR, abs/1511.06361, 2015.\n[4]: Luke Vilnis, Xiang Li, Shikhar Murty, and Andrew McCallum. Probabilistic embedding of knowledge graphs with box lattice measures. arXiv preprint arXiv:1805.06627, 2018.\n[5]: Tim Rockt\u00e4schel, Sameer Singh, and Sebastian Riedel. Injecting logical background knowledge into embeddings for relation extraction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1119\u20131129, Denver, Colorado, May\u2013June 2015. Association for Computational Linguistics. doi: 10.3115/v1/N15-1118.\n"}], "comment_replyto": ["dLQLB2s9guW", "rJDzMd54fIu", "3caY-2ftqJ1", "shkmWLRBXH", "LDNrKRLwD0t"], "comment_url": ["https://openreview.net/forum?id=shkmWLRBXH&noteId=bQnZleO8cwo", "https://openreview.net/forum?id=shkmWLRBXH&noteId=FwwazwiNqME", "https://openreview.net/forum?id=shkmWLRBXH&noteId=FL-nW1_MgxS", "https://openreview.net/forum?id=shkmWLRBXH&noteId=dLQLB2s9guW", "https://openreview.net/forum?id=shkmWLRBXH&noteId=klZQVsxKL5s"], "meta_review_cdate": 1588281111971, "meta_review_tcdate": 1588281111971, "meta_review_tmdate": 1588341533007, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This work proposes a new knowledge graph embedding method in the Trans family that ensures the implication ordering of relations in the embedding space. 
The proposed idea of viewing relations as sets of pairs of entities is interesting and provides a new perspective compared to previous KG embedding approaches. The technical content is well explained and justified. There are concerns from the reviewers about the experiments and the writing. The authors have revised the draft to incorporate the review comments. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=shkmWLRBXH&noteId=b2MArLsQS7e"], "decision": "Accept"}
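A side note on the angle computation mentioned in the authors' response to Reviewer 3: the sketch below is not the authors' code or TransINT's actual parameterization. It is a minimal illustration, assuming only that each relation corresponds to a linear subspace of R^d (represented here by an arbitrary basis matrix) and that relatedness between two relations is read off the principal angles between their subspaces. All names, dimensions, and the random toy bases are hypothetical.

```python
import numpy as np

def subspace_angles_deg(basis_a: np.ndarray, basis_b: np.ndarray) -> np.ndarray:
    """Principal angles (in degrees) between the column spaces of two basis matrices."""
    q_a, _ = np.linalg.qr(basis_a)  # orthonormal basis of span(basis_a)
    q_b, _ = np.linalg.qr(basis_b)  # orthonormal basis of span(basis_b)
    # The singular values of Q_a^T Q_b are the cosines of the principal angles.
    cosines = np.linalg.svd(q_a.T @ q_b, compute_uv=False)
    cosines = np.clip(cosines, -1.0, 1.0)  # guard against numerical drift
    return np.degrees(np.arccos(cosines))

if __name__ == "__main__":
    d = 50  # embedding dimension (illustrative)
    rng = np.random.default_rng(0)
    rel_parent_of = rng.standard_normal((d, 3))  # toy basis for one relation's subspace
    rel_father_of = rng.standard_normal((d, 2))  # toy basis for an implied relation
    print(subspace_angles_deg(rel_parent_of, rel_father_of))
```

For a cross-check, scipy.linalg.subspace_angles computes the same quantity (in radians); how TransINT actually defines and ties its relation subspaces is described in the paper, not assumed here.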