{"forum": "BJlh0x9ppQ", "submission_url": "https://openreview.net/forum?id=BJlh0x9ppQ", "submission_content": {"title": "Learning Numerical Attributes in Knowledge Bases", "authors": ["Bhushan Kotnis", "Alberto Garc\u00eda-Dur\u00e1n"], "authorids": ["bhushan.kotnis@neclab.eu", "alberto.duran@neclab.eu"], "keywords": ["numerical attribute prediction", "label propagation", "value imputation"], "TL;DR": "Prediction of numerical attribute values associated with entities in knowledge bases.", "abstract": "Knowledge bases (KB) are often represented as a collection of facts in the form (HEAD, PREDICATE, TAIL), where HEAD and TAIL are entities while PREDICATE is a binary relationship that links the two. It is a well-known fact that knowledge bases are far from complete, and hence the plethora of research on KB completion methods, specifically on link prediction. However, though frequently ignored, these repositories also contain numerical facts. Numerical facts link entities to numerical values via numerical predicates; e.g., (PARIS, LATITUDE, 48.8). Likewise, numerical facts also suffer from the incompleteness problem. To address this issue, we introduce the numerical attribute prediction problem. This problem involves a new type of query where the relationship is a numerical predicate. Consequently, and contrary to link prediction, the answer to this query is a numerical value. We argue that the numerical values associated with entities explain, to some extent, the relational structure of the knowledge base. Therefore, we leverage knowledge base embedding methods to learn representations that are useful predictors for the numerical attributes. An extensive set of experiments on benchmark versions of FREEBASE and YAGO show that our approaches largely outperform sensible baselines. We make the datasets available under a permissive BSD-3 license. ", "archival status": "Archival", "subject areas": ["Information Extraction", "Relational AI"], "pdf": "/pdf/3245347491d363c15a33c9d5af2890b86fd0b6c2.pdf", "paperhash": "kotnis|learning_numerical_attributes_in_knowledge_bases", "_bibtex": "@inproceedings{\nkotnis2019learning,\ntitle={Learning Numerical Attributes in Knowledge Bases},\nauthor={Bhushan Kotnis and Alberto Garc{\\'\\i}a-Dur{\\'a}n},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=BJlh0x9ppQ}\n}"}, "submission_cdate": 1542459603775, "submission_tcdate": 1542459603775, "submission_tmdate": 1580939655475, "submission_ddate": null, "review_id": ["rJxMHDcfGN", "BJeOWyoxGN", "B1xo8C8iZV"], "review_url": ["https://openreview.net/forum?id=BJlh0x9ppQ&noteId=rJxMHDcfGN", "https://openreview.net/forum?id=BJlh0x9ppQ&noteId=BJeOWyoxGN", "https://openreview.net/forum?id=BJlh0x9ppQ&noteId=B1xo8C8iZV"], "review_cdate": [1546983225794, 1546854143616, 1546509907302], "review_tcdate": [1546983225794, 1546854143616, 1546509907302], "review_tmdate": [1550269640042, 1550269639829, 1550269639563], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BJlh0x9ppQ", "BJlh0x9ppQ", "BJlh0x9ppQ"], "review_content": [{"title": "Solid work, convincing experiments and results", "review": "The paper presents innovative work towards learning numerical attributes in a KB, which the authors claim to be the first of its kind. 
The approach leverages KB embeddings to learn feature representations for predicting missing numerical attribute values. The assumption is that data points that are close to each other in a vector (embeddings) space have similar numerical attribute values. The approach is evaluated on the 10 highest-ranked numerical attributes in a QA setting with questions that require a numerical answer.\n\nThe paper is well written and the approach is explained in full detail. \n\nMy only concern is the application of the method across numerical attributes of (very) different granularity and context. The authors briefly mention the granularity aspect in section 3 and point to a normalization step. However, this discussion is very brief and perhaps leaves some further questions open.", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}, {"title": "Nice paper on prediction of numeric attributes in KBs.", "review": "The paper reports on the prediction of numerical attributes in knowledge bases, a problem that has indeed\nreceived too little attention. It lays out the problem rather clearly and defines datasets, a number of baselines\nas well as a range of embedding-based models that I like because they include both simple pipeline models \n(learn embeddings first, predict numerical attributes later) and models that incorporate the numerical attributes\ninto embedding learning. I also appreciate that the paper makes an attempt to draw together ideas from different\nresearch directions.\n\nOverall, I like the paper. It's very solidly done and can serve as an excellent base for further studies. Of course, I also have comments/criticisms.\n\nFirst, the authors state that one of the contributions of the paper is to \"introduce the problem of predicting the value \nof entities\u2019 numerical attributes in KB\". This is unfortunately not true. There is relatively old work by Davidov and\nRappoport (ACL 2010) on learning numeric attributes from the web (albeit without a specific reference to KBs), and \na more recent study specifically aimed at attributes from KBs (Gupta et al. EMNLP 2015) which proposed and modelled exactly the same task, including defining freely available FreeBase-derived datasets.\nMore generally speaking, the authors seem to be very up to date regarding approaches that learn embeddings directly\nfrom the KB, but not regarding approaches that use text-based embeddings. This is unfortunate, since the\nmodel that we defined is closely related to the LR model defined in the current paper, but no direct comparison is \npossible due to the differences in embeddings and dataset.\n\nSecond, I feel that not enough motivation is given for some of the models and their design decisions. For example,\nthe choice of linear regression seems rather questionable to me, since the assumptions of linear regression (normal distribution/homoscedasticity) are clearly violated by many KB attributes. If you predict, say, country populations, the \nerror for China and India is orders of magnitude higher than the error for other countries, and the predictions are dominated by the fit to these outliers. This not only concerns the models but also the evaluation, because\nMAE/RMSE also only make sense when you assume that the attributes scale linearly -- Gupta et al. 
2015 use this\nas motivation for using logistic regression and a rank-based evaluation. \nI realize that you comment on non-linear regression on p.12 and give a normalized evaluation on p.13: \nI appreciate that, even though I think that it only addresses the problem to an extent.\n\nSimilarly, I like the label propagation idea (p. 7) but I lack the intuition for why LP should work on (all) numeric attributes.\nIf, say, two countries border each other, I would expect their lat/long to be similar, but why should their (absolute) GDP be similar? What is lacking here is a somewhat more substantial discussion of the assumptions that this model (and the other \nmodels) make about the structure of the knowledge graph and the semantics of the attributes.\n\nSmaller comment:\n* Top of p.5, formalization: This looks like f is a general function, but I assume that one f is supposed to be learned\n   for each attribute? Either it should be f_a, or f should have the signature E x A -> R.\n   (p4: Why is N used as a separate type for numeric attributes if the function f is then supposed to map into reals anyway?)\n\n", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "interesting paper with decent contribution, but doubts about practical viability", "review": "The paper presents a method to predict the values\nof numerical properties for entities where these properties\nare missing in the KB (e.g., population of cities or height of athletes).\nTo this end, the paper develops a suite of methods, using\nregression and learned embeddings. Some of the techniques\nresemble those used in state-of-the-art knowledge graph completion,\nbut there is novelty in adapting these techniques\nto the case of numerical properties.\nThe paper includes a comprehensive experimental evaluation\nof a variety of methods.\n\nThis is interesting work on an underexplored problem,\nand it is carried out very neatly.\n\nHowever, I am skeptical that it is practically viable.\nThe prediction errors are such that the predicted values\ncan hardly be entered into a high-quality knowledge base.\nFor example, for city population, the RMSE is on the order\nof millions, and for person height the RMSE is above 0.05\n(i.e., 5 cm). So even on average, the predicted values are\nway off. Moreover, average errors are not really the decisive\npoint for KB completion and curation.\nEven if the RMSE were small, say 10K for cities, for some\ncities the predictions could still be embarrassingly off.\nSo a knowledge engineer could not trust them and would \nhardly consider them for completing the missing values.\n\nSpecific comment:\nThe embeddings of two entities may be close for different\nreasons, potentially losing too much information.\nFor example, two cities may have close embeddings because\nthey are in the same geo-region (but could have very different\npopulations) or because they have similar characteristics\n(e.g., both being huge metropolitan areas). Likewise, two\nathletes could be close because of similar origin and\nsimilar success in the Olympic Games, or because they played\non the same teams. 
\nIt is not clear how the proposed methods can cope with\nthese confusing signals through embeddings or other techniques.", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["BJlFCHqEQ4", "HJlb5B9EmV", "Sylnkr9NX4"], "comment_cdate": [1548162513192, 1548162440622, 1548162276198], "comment_tcdate": [1548162513192, 1548162440622, 1548162276198], "comment_tmdate": [1548162513192, 1548162440622, 1548162276198], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper12/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper12/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper12/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Rebuttal Review#3", "comment": "Thanks for your helpful review.\n\nLet us address each of your concerns individually:\n\n1. Prediction errors: We agree that, for the data sets discussed in the paper, the performance is not up to industrial standard and thus this technique may not be useful \u201cas-is\u201d in industry. We believe that this paper is a first step that will hopefully spur more interest in this problem and eventually lead to industry-ready algorithms. However, this (current) limitation is not specific to this new problem, but common to many research problems (e.g. link prediction methods, for the moment, are far from meeting industrial requirements).\n\n2. Embeddings: This is again an excellent point. Please also see the rebuttal for Reviewer #2. Subsets of latent factors capture signals related to different relational and numerical information. A metric learning approach that \u201cselects\u201d a group of latent factors relevant to a certain numerical attribute would make sense. However, during our experiments, we learned Mahalanobis metrics by incorporating an additional loss during the learning of knowledge graph embeddings, and it did not improve the overall performance. We will discuss this in the paper. Nevertheless, we suggest that future work should address this research direction in greater depth."}, {"title": "Rebuttal Review#2", "comment": "Thanks a lot for your feedback. \n\nLet us address each of your concerns individually:\n\n1. Previous work: Thank you for bringing the work by Davidov and Rappoport (ACL 2010) and Gupta et al. (EMNLP 2015) to our attention; we were not aware of this work. We will elaborate on these papers in the related work section. Regarding Gupta et al. (EMNLP 2015), the paper uses word2vec embeddings of named entities as inputs to a number of regression models. Similar to us, they aim to predict numerical attributes of knowledge base entities. Unlike us, they leverage text information to do so. This is important, as in our problem we do not assume the existence of information other than the graph structure itself. This relates to knowledge bases where entities\u2019 names are unknown/anonymized (e.g. medical knowledge bases). Nevertheless, this is very interesting and nicely complements our approach of learning from KG relationships as opposed to free text. Similarly, Davidov and Rappoport leverage text and/or class label information.\n\n2. Linear Regression: As you correctly observed, different output variables have different distributions. 
Ideally, for attributes that do not permit a normal distribution assumption, one would need to validate (i) the power transformation to be applied to the data, and (ii) the regression function. This comes at an exponential cost, as this validation should be done jointly for a number of regression models. To circumvent this issue, we used a common normalization procedure for all attributes. We also trained with the L1 loss (robust regression), which is more robust (w.r.t. outliers) than OLS, for all attributes, but the results were similar.\n\n3. Label Propagation: Our assumption is that if two entities relate to the rest of the world in a similar manner, then they have similar numerical attributes. Nevertheless, your observation is an excellent point.\n\nLabel Propagation uses the Euclidean distance between vectors to build the k-nearest neighbor graph. We speculate that subsets of entities\u2019 latent factors could be encoding different relational and numerical information. For instance, it is possible that a few dimensions of the entity embeddings encode location information, while others encode population information, and so on. For this reason, we learned Mahalanobis metrics for capturing different entity similarities. We did this while learning knowledge base embeddings by using an additional nearest neighbor loss. It slightly improved the performance for a few attributes, but overall it did not improve the results. We will include this discussion in the paper. Nevertheless, we suggest that future work should address this research direction in greater depth.\n"}, {"title": "Rebuttal Review#1", "comment": "Thanks for the positive and encouraging comments.\n\nGranularity: You have raised a very valid concern regarding the granularity of different numerical attributes. We agree that the normalization step is really important to this problem, as one has to jointly train a number of regression models (with shared input vector representations) whose output variables have different scales. We also tried min-max scaling; however, this gave worse performance in comparison to standard scaling. We will add this comment to the paper."}], "comment_replyto": ["B1xo8C8iZV", "BJeOWyoxGN", "rJxMHDcfGN"], "comment_url": ["https://openreview.net/forum?id=BJlh0x9ppQ&noteId=BJlFCHqEQ4", "https://openreview.net/forum?id=BJlh0x9ppQ&noteId=HJlb5B9EmV", "https://openreview.net/forum?id=BJlh0x9ppQ&noteId=Sylnkr9NX4"], "meta_review_cdate": 1549950602462, "meta_review_tcdate": 1549950602462, "meta_review_tmdate": 1551128225960, "meta_review_ddate ": null, "meta_review_title": "New embedding method using numerical facts sparks interest but has some obvious limitations.", "meta_review_metareview": "The authors consider the problem of predicting or imputing numerical attributes in knowledge bases. In contrast to simple local or global attribute prediction, they design a regression-based model that uses knowledge graph embeddings and extend the embedding computation to also use numerical attributes. Evaluation shows interesting preliminary results.\n\nThe critical consensus was that this paper is worthy of acceptance, but has several serious limitations that should be carefully explained. 
Critical issues are that the relationships used for propagating information do not necessarily correlate with similar numerical attribute values (e.g., geographical location is useful for predicting lat/long but not GDP or population), that a linear regression model is not sufficient to capture the full extent of relationships, that normalization of error values across different relationships skews the evaluation, that several important and relevant related works were omitted, and that the overall error rates are still somewhat high.", "meta_review_readers": ["everyone"], "meta_review_writers": [], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=BJlh0x9ppQ&noteId=HkeGcARyrE"], "decision": "Accept (Poster)"}