{"forum": "HkyI-5667", "submission_url": "https://openreview.net/forum?id=HkyI-5667", "submission_content": {"title": "Scalable Rule Learning in Probabilistic Knowledge Bases", "authors": ["Arcchit Jain", "Tal Friedman", "Ondrej Kuzelka", "Guy Van den Broeck", "Luc De Raedt"], "authorids": ["arcchit.jain@cs.kuleuven.be", "tal@cs.ucla.edu", "kuzelo1@gmail.com", "guyvdb@cs.ucla.edu", "luc.deraedt@cs.kuleuven.be"], "keywords": ["Database", "KB", "Probabilistic Rule Learning"], "TL;DR": "Probabilistic Rule Learning system using Lifted Inference", "abstract": "Knowledge Bases (KBs) are becoming increasingly large, sparse and probabilistic. These KBs are typically used to perform query inferences and rule mining. But their efficacy is only as high as their completeness. Efficiently utilizing incomplete KBs remains a major challenge as the current KB completion techniques either do not take into account the inherent uncertainty associated with each KB tuple or do not scale to large KBs.\n\nProbabilistic rule learning not only considers the probability of every KB tuple but also tackles the problem of KB completion in an explainable way. For any given probabilistic KB, it learns probabilistic first-order rules from its relations to identify interesting patterns. However, the current probabilistic rule learning techniques perform grounding to do probabilistic inference when evaluating candidate rules. They do not scale well to large KBs as the time complexity of inference using grounding is exponential in the size of the KB. In this paper, we present SafeLearner -- a scalable solution to probabilistic KB completion that performs probabilistic rule learning using lifted probabilistic inference -- a faster approach than grounding. \n\nWe compared SafeLearner to the state-of-the-art probabilistic rule learner ProbFOIL+ and to its deterministic contemporary AMIE+ on standard probabilistic KBs of NELL (Never-Ending Language Learner) and Yago. 
Our results demonstrate that SafeLearner scales as well as AMIE+ when learning simple rules and is also significantly faster than ProbFOIL+. ", "pdf": "/pdf/2e50b6e8f515de211d3eb6e0c6e75f99bc4a97c3.pdf", "archival status": "Archival", "subject areas": ["Machine Learning", "Databases"], "paperhash": "jain|scalable_rule_learning_in_probabilistic_knowledge_bases", "html": "https://github.com/arcchitjain/SafeLearner/tree/AKBC19", "_bibtex": "@inproceedings{\njain2019scalable,\ntitle={Scalable Rule Learning in Probabilistic Knowledge Bases},\nauthor={Arcchit Jain and Tal Friedman and Ondrej Kuzelka and Guy Van den Broeck and Luc De Raedt},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=HkyI-5667}\n}"}, "submission_cdate": 1542459718516, "submission_tcdate": 1542459718516, "submission_tmdate": 1580939649734, "submission_ddate": null, "review_id": ["r1eeNpZbzV", "HJxMMAfMGV", "HyxYWizrfV"], "review_url": ["https://openreview.net/forum?id=HkyI-5667&noteId=r1eeNpZbzV", "https://openreview.net/forum?id=HkyI-5667&noteId=HJxMMAfMGV", "https://openreview.net/forum?id=HkyI-5667&noteId=HyxYWizrfV"], "review_cdate": [1546882344207, 1546952201901, 1547148033281], "review_tcdate": [1546882344207, 1546952201901, 1547148033281], "review_tmdate": [1550269661260, 1550269661043, 1550269633287], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HkyI-5667", "HkyI-5667", "HkyI-5667"], "review_content": [{"title": "review", "review": "As someone who didn't have much background in rule learning, I found this paper interesting and rather clearly written.\n\nSection 5.1.1.1 : \"(ii) tuples contained in the answer of Q' where Q' is the same as Q but without the rule with empty body, but not in the training set\"  is 
unclear.\n\nIn the algorithm, line 22, aren't rules removed until H leads to a \"safe\" UCQ ?\n\nSection 6.1 : \"In line 7 we formulate a UCQ Q from all the candidate rules in H (explained in 5.2 with an example)\". I was unable to find the example in section 5.2 \n\nIt would be interesting to have an idea of the maximum scale that ProbFoil+ can handle, since it seems to be the only competitor to the suggested method. \n\nIn section 7.2 does the \"learning time\" include the call to AMIE+ ? if not, it would be interesting to break down the time into its deterministic and learning components, since the former is only necessary to retrieve the correct probabilities.\n\nBeing new to this subject, I found the paper to be somewhat clear. However, I found that it was sometimes hard to understand what was a part of the proposed system, and what was done in Amie+ or Slimshot.\n\nFor example, \"But, before calling Slimshot\nfor lifted inference on the whole query, we first break down the query to independent subqueries such that no variable is common in more than one sub-query. Then, we perform\ninference separately over it and later unify the sub-queries to get the desired result.\"\n\nThis is described as important to the speedup over ProbFoil+ in the conclusion, yet doesn't appear in Algorithm 1. \n\nSimilarly, \"it caches the structure of queries before doing inference \" is mentioned in the conclusion but I couldn't map it to anything in Algorithm 1 or in the paper.\n\n\nI lean toward an accept because the work seems solid, but I feel like I don't have the background required to judge on the contributions of this paper, which seems to me like a good use of Amie+/Slimshot with a reasonable addition of SGD to learn rule weights. Some of the components which are sold as important for the speed-up in the conclusion aren't clear enough in the main text. Some numbers to experimentally back-up how important these additions are to the algorithms would be welcome. 
\n", "rating": "6: Marginally above acceptance threshold", "confidence": "1: The reviewer's evaluation is an educated guess"}, {"title": "Good but not surprising", "review": "In general, the paper presents a routine practice, that is, applying lifted probabilistic inference to rule learning over probabilistic KBs, such that the scalability of the system is enhanced, though it is applicable to a limited scope of rules only. I would not vote for reject if other reviewers agree to acceptance.\n\nSpecifically, the proposed algorithm SafeLearner extends ProbFOIL+ by using lifted probabilistic inference (instead of using grounding): it first applies AMIE+ to find candidate deterministic rules, and then jointly learns probabilities of the rules using lifted inference.\n\nThe paper is structured well, and most parts of the paper are easy to follow.\n\nI have two major concerns with the motivation. It reads that there are two challenges associated with rule learning from probabilistic KBs, i.e., their sparse and probabilistic nature. \n1) While two challenges are identified by the authors, the paper deals with the latter issue only? How does sparsity affect the algorithm design?\n\n2) The paper can be better motivated, although there is one piece of existing work for learning probabilistic rules from KBs (De Raedt et al. [2015]). Somehow, I am not convinced by the potential application of the methods; that is, after generating the probabilistic rules, how can I apply the probabilistic rules? It will be appreciated if the authors can present some examples of the use of probabilistic rules. 
Moreover, if it is mainly to complete probabilistic KBs, how does this probabilistic logics based approach compare against embedding based approach?\n", "rating": "6: Marginally above acceptance threshold", "confidence": "1: The reviewer's evaluation is an educated guess"}, {"title": "Sound approach for rule learning but heavy dependence on black-box algorithm to propose candidate rules", "review": "The paper proposes a model for probabilistic rule learning to automate the completion of probabilistic databases. The proposed method uses lifted inference which helps in computational efficiency given that non-lifted inference in rules containing ungrounded variables can be extremely computationally expensive.\nThe databases used contain binary relations and the probabilistic rules that are learned, are also learned for discovering new binary relations. The use of lifted inference restricts the proposed model to only discover rules that are a union of conjunctive queries.\nThe proposed approach uses AMIE+, a method to generate deterministic rules, to generate a set of candidate rules for which probabilistic weights are then learned. The model initializes the rule probabilities as the confidence scores estimated from the conditional probability of the head being true given that the body is true, and then uses a maximum likelihood estimate of the training data to learn the rule probabilities. \n\nThe paper presents empirical comparison to deterministic rules and ProbFOIL+ on the NELL knowledge base. The proposed approach marginally performs better than deterministic rule learning.\n\nThe approach proposed is straightforward and depends heavily on the candidate rules produced by the AMIE+ algorithm. The paper does not provide insights into the drawbacks of using AMIE+, the kind of rules that will be hard for AMIE+ to propose, how can the proposed method be improved to learn rules beyond the candidate rules. 
\n\n\u2018End-to-end differentiable proving\u2019 from NeurIPS (NIPS) 2017 also tackles the same problem and it would be nice to see comparison to that work. \n", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["rJgHsQcWNN", "BkgjQm5Z44", "HJeYib5WEE", "SkgHhsFZNE"], "comment_cdate": [1549013917038, 1549013795105, 1549013409404, 1549011885311], "comment_tcdate": [1549013917038, 1549013795105, 1549013409404, 1549011885311], "comment_tmdate": [1549013917038, 1549013795105, 1549013409404, 1549011906890], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper54/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper54/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper54/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper54/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Details on AMIE+ explained", "comment": "We would like to thank the reviewer for the comments.\n\n1) The proposed approach marginally performs better than deterministic rule learning.\n\nThis is true only for PR curves but not for cross entropy. In the section of Parameter Learning, we explain how maximizing expected log likelihood is equivalent to minimizing cross entropy (Equation 3). As our proposed approach optimizes on cross entropy, it induces an average reduction in the cross entropy of 82% and 85% as compared to ProbFOIL+ and AMIE+ respectively. 
The insignificant differences in precision-recall curves only suggest that obtaining a ranking of tuples based on predicted probabilities (but not the actual probabilities) can be quite reliably done already by models that are not very precise when it comes to predicting the actual probabilities.\n\n\n2) The approach is straightforward and depends heavily on the candidate rules produced by AMIE+.\n\nOur approach is straightforward on purpose as we want to see how much the proper treatment of probabilities in the KB completion task using a rule-based approach helps. At first glance, SafeLearner does seem to be heavily dependent on the rules generated by AMIE+. But AMIE+ is not a black-box as we exactly know the kind of rules we require as candidates. Had AMIE+ not been developed, we could have coded the function ourselves to generate candidate rules. Since AMIE+ is a multi-threaded package in Java, it does the job well by scaling well to large KBs. Furthermore, as SafeLearner is not specific to any particular candidate generation method, it can be used with any other relational rule learner instead of AMIE+.\n\n\n3) The paper does not provide insights into the drawbacks of using AMIE+, the kind of rules that will be hard for AMIE+ to produce.\n\nAs mentioned in the cited paper \u2018Fast rule mining in ontological knowledge bases with AMIE+\u2019 (Galarraga et al.,2015), AMIE+ uses 3 types of language biases to restrict the size of the search space:\n\t1) The rules learned by AMIE+ omit reflexive atoms of the form \u2018x(A, A)\u2019.\n\t2) The rules are connected, i.e., every atom shares at least one variable transitively to every other atom of the rule. This omits vague rules of the form \u2018x(A, B) :- y(C, D)\u2019.\n\t3) The rules are closed, i.e., all the variables in a rule appear at least twice within itself. 
This omits open rules of the form \u2018x(A, B) :- b(A, C)\u2019 which would hold for any substitution of B and C.\n\nMoreover, AMIE+ only works with binary relations in a KB and does not have negations in its rules. For instance, the types of non-recursive rules of length <= 3, that can be generated by AMIE+ within SafeLearner are:\n\t  1) x(A, B) :- y(A, B).\n    \t  2) x(A, B) :- y(B, A).\n\n    \t  3) x(A, B) :- y(A, C), y(C, B).\n    \t  4) x(A, B) :- y(A, C), y(B, C).\n    \t  5) x(A, B) :- y(C, A), y(C, B).\n    \t  6) x(A, B) :- y(C, A), y(B, C).\n\n    \t  7) x(A, B) :- y(A, C), z(C, B).\n    \t  8) x(A, B) :- y(A, C), z(B, C).\n    \t  9) x(A, B) :- y(C, A), z(C, B).\n     \t10) x(A, B) :- y(C, A), z(B, C).\nIn the context of doing Probabilistic Rule Learning for KB Completion, these are precisely all the forms of rules which we require as we can not practically predict missing tuples using reflexive, disconnected or open rules. SafeLearner is capable of learning rules of any length by specifying the maximum rule length as an input parameter. \n\n\n4) How can the proposed method be improved to learn rules beyond the candidate rules?\n\nThe method can be used with any other method that learns deterministic rules to make them probabilistic. We do not claim that using AIME+ is the best. Our main interest is in seeing if/how the proper treatment of probabilities helps in the KB completion tasks using a rule-based approach."}, {"title": "Comparisons to Neural Theorem Provers explained", "comment": "5) \u2018End-to-end differentiable proving\u2019 from NeurIPS (NIPS) 2017 also tackles the same problem and it would be nice to see a comparison to that work. \n\nWe have qualitatively drawn parallels between our Statistical Relational Learning (SRL) based approach and Knowledge Graph Embedding (KGE) based approaches for the problem of KB completion in Appendix E that we added to the revision. 
The SRL based approach is much more interpretable and explainable as compared to the black-box KGE based approaches but also, in our opinion, as compared to Neural Theorem Provers (NTPs). KGE based approaches, including NTPs, also need the test data to get the embedding which is not required by any SRL based approaches. SRL based approaches can reason with unseen constants in the data as they learn first-order rules. KGE based approaches would require re-training in order to incorporate a lot of new constants which is not the case with our SRL based approach. Please refer to Appendix E for further details.\n\nTo compare the scalability of SafeLearner with NTPs, \u2018End-to-end differentiable proving\u2019 uses 4 deterministic KBs that are not at a large scale (Countries KB has 1158 facts, Kinship KB has 10686 facts, Nations KB has 2565 facts and UMLS KB has 6529 facts). Recently, they submitted a follow-up paper, \u2019Towards Neural Theorem Proving at Scale\u2019, where they claim to have made their technique more scalable. The follow-up paper uses the following 3 deterministic KBs: 1) WordNet18 KB with 151,442 facts, 2) WordNet18RR with 93,003 facts, and 3) Freebase FB15k-237 KB with 14,951 facts. On the other hand, SafeLearner has demonstrated that it scales to YAGO 2.4 KB of 948,000 probabilistic tuples. Although previous SRL techniques were not as scalable, SafeLearner is as scalable as the latest version of NTPs because it is the first SRL technique to use lifted inference.\n\nMoreover, in our opinion, NTPs do not actually 'learn' rules. They enumerate all possible rules up to a defined length and learn how to activate them.  Essentially, NTPs optimize theorem-proving procedure given the rules. 
In their follow-up paper, they focus on finding just one proof efficiently (instead of all proofs, as in the initial version) and this brings them scalability."}, {"title": "Kindly refer to the Appendix", "comment": "We would like to thank the reviewer for the comments.\n\n1) The paper identifies sparsity of Knowledge bases as a challenge but does not deal with it. How does sparsity affect the algorithm design?\n\nSince SafeLearner is a Statistical Relational Learning (SRL) approach that would learn first-order rules to predict tuples, it can even reason about new constants being included in the KB without re-training. On the other hand, since knowledge graph embedding (KGE) based approaches implicitly consider all tuples, these may discard the existence of a tuple with a new constant as the embedding for the new constant does not exist. Thus, it is easier for SRL based approaches to handle highly sparse KBs as they can handle the high number of constants since they only learn first-order rules. \n\n\n2) I am not convinced by the potential application of the methods. After generating the probabilistic rules, how can I apply them? It will be appreciated if the authors can present some examples of the use of the probabilistic rules. If it is mainly to complete probabilistic KBs, how does this probabilistic logics based approach compare against embedding based approach?\n\nWe have answered this question in Appendix E. Please have a look.  "}, {"title": "Clarifications addressed", "comment": "We would like to thank the reviewer for the comments and would like to clarify further.\n\n1) Section 5.1.1.1 : \"(ii) tuples contained in the answer of Q' where Q' is the same as Q but without the rule with an empty body, but not in the training set\" is unclear.\n\nWe have elaborated more on the 3 categories of target tuples in the revised paper. 
We hope it is clearer now.\n\n\n2) In the algorithm, line 22, aren't rules removed until H leads to a \"safe\" UCQ?\n\nWe have elaborated further on our algorithm in the paper. Checking for a safe UCQ is performed in Line 8 of the QueryConstructor function. The only way to check for a safe query is by trying to construct a query plan, which is exactly what happens inside SlimShot. So if SlimShot fails to construct a query plan, the UCQ is considered to be unsafe.\n\n\n3) Section 6.1: \"In line 7 we formulate a UCQ Q from all the candidate rules in H (explained in 5.2 with an example)\". I was unable to find the example in section 5.2.\n\nThank you for pointing it out. It has now been rectified in the revision.\n\n\n4) It would be interesting to have an idea of the maximum scale that ProbFoil+ can handle since it seems to be the only competitor to the suggested method.\n\nWe have conducted a small experiment to demonstrate that ProbFOIL+ struggles to scale up to large KBs with a number of target tuples > 5000. On the other hand, such large KBs are handled considerably faster by SafeLearner.  For instance, for a simple probabilistic KB with 20000 target tuples and 20000 non-target tuples, ProbFOIL+ took 15 hours and 54 minutes whereas SafeLearner took just 30 minutes. The detailed procedure and results of the experiment can be found in Appendix C.\n\n\n5) In section 7.2, does the \"learning time\" include the call to AMIE+?\n\nYes, the learning time includes both structure learning (including AMIE+) and parameter learning components.\n\n\n6) I found that it was sometimes hard to understand what was a part of the proposed system, and what was done in AMIE+ or Slimshot.\n\nOnly line 3 in Algorithm 1 uses AMIE+. On the other hand, SlimShot is used in lines 7 (QueryConstructor function) and 10 (ProbabilityPredictor function). 
Every other line of Algorithm 1 is part of SafeLearner.\n\n\n7) Explain the breaking down of queries into independent sub-queries in the algorithm.\n\nIf a query can be written as a union of independent sub-queries, then its probability can be expressed as a unification of the probabilities of its sub-queries. We have explained this further in Appendix D.\n\n\n8) Caching is not mentioned anywhere in the algorithm.\n\nWe use caching/memoization before calling SlimShot in SafeLearner.  This is primarily done to speed up SafeLearner by storing the results of the expensive function call to SlimShot and returning the cached result when the structure of the query occurs again in the input. Since SlimShot produces a query plan, we exploit the fact that isomorphic queries naturally have the same query plans. We have explained this further in Appendix D.\n\n\n9) Some of the components which are sold as important for the speed-up in the conclusion aren't clear enough in the main text. Some numbers to experimentally back-up how important these additions are to the algorithms would be welcome.\n\nWe performed an experiment where we compared SafeLearner with and without the two speed-up techniques, memoization and query disintegration. Our results on NELL (850th iteration) and YAGO show that memoization and query disintegration give average speed-ups of 50% and 7% respectively. It is important to understand that query disintegration would only give a speed-up when SafeLearner learns a high number of rules with many of the rules being independent of one another. 
The detailed procedure and results of the experiment can be found in Appendix D.\n"}], "comment_replyto": ["HyxYWizrfV", "HyxYWizrfV", "HJxMMAfMGV", "r1eeNpZbzV"], "comment_url": ["https://openreview.net/forum?id=HkyI-5667&noteId=rJgHsQcWNN", "https://openreview.net/forum?id=HkyI-5667&noteId=BkgjQm5Z44", "https://openreview.net/forum?id=HkyI-5667&noteId=HJeYib5WEE", "https://openreview.net/forum?id=HkyI-5667&noteId=SkgHhsFZNE"], "meta_review_cdate": 1549697762569, "meta_review_tcdate": 1549697762569, "meta_review_tmdate": 1551128384596, "meta_review_ddate": null, "meta_review_title": "Nice paper but concerns about related work need to be addressed", "meta_review_metareview": "The paper presents a method of learning probabilistic rules from a probabilistic dataset of KB tuples. They first use the existing deterministic rule-learning algorithm AMIE+ to get candidate rules and then learn probabilistic rules using lifted inference. The paper is written clearly. The authors have responded to the reviewers' concerns well. Overall, there are some concerns that the contributions of the paper are not substantial enough in quantity and depth. Given the vast existing literature on the topic, the authors should try to resolve the questions of comparisons that naturally arise.", "meta_review_readers": ["everyone"], "meta_review_writers": [], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=HkyI-5667&noteId=SJxskQZ3V4"], "decision": "Accept (Poster)"}