{"forum": "3pcecaCEK-", "submission_url": "https://openreview.net/forum?id=3pcecaCEK-", "submission_content": {"keywords": ["knowledge base completion", "knowledge graph embedding", "classification", "ranking"], "TL;DR": " We propose to replace the ranking approach with an actual classification and suggest how to improve knowledge graph embedding models under the new setting.", "authorids": ["AKBC.ws/2020/Conference/Paper81/Authors"], "title": "Ranking vs. Classifying: Measuring Knowledge Base Completion Quality", "authors": ["Anonymous"], "pdf": "/pdf/d9baf6a3f7f1ddc5f5c19ab8567f6867cc13d300.pdf", "subject_areas": ["Knowledge Representation, Semantic Web and Search", "QuestionAnswering and Reasoning", "Applications"], "abstract": "Knowledge base completion (KBC) methods aim at inferring missing facts from the information present in a knowledge base (KB). In the prevailing evaluation paradigm, a model does not strictly decide about if a new fact should be accepted, but rather puts it in a relative position to other candidate facts via ranking. We argue that consideration of binary predictions is essential to reflect the actual KBC quality, and propose a novel evaluation paradigm, designed to provide more transparent model selection criteria for a realistic scenario. We construct a data set FB13k-QAQ with an alternative evaluation data structure, where single facts are transformed to entity-relation queries with a corresponding entity set of correct answers. Some randomly chosen correct answers are removed from the data set, resulting in incomplete queries or even queries with no possible answer. The latter specifically contrast the ranking setting. Obtained on the new data set, differences in relative performance of state-of-the-art KB embedding models in the ranking and classification settings confirm that ranking quality does not necessarily translate to completion quality. The results motivate the development of KB embedding models with better prediction separability, and we propose a simple variant of TransE that encourages thresholding and achieves a significant improvement in prediction F-Score relative to the original TransE.", "paperhash": "anonymous|ranking_vs_classifying_measuring_knowledge_base_completion_quality"}, "submission_cdate": 1581705816931, "submission_tcdate": 1581705816931, "submission_tmdate": 1586972219941, "submission_ddate": null, "review_id": ["ArQYRH3jdj", "x2rOzgZGDS4", "zy0t8Jy7VmB"], "review_url": ["https://openreview.net/forum?id=3pcecaCEK-&noteId=ArQYRH3jdj", "https://openreview.net/forum?id=3pcecaCEK-&noteId=x2rOzgZGDS4", "https://openreview.net/forum?id=3pcecaCEK-&noteId=zy0t8Jy7VmB"], "review_cdate": [1583766128878, 1585302737025, 1585654517309], "review_tcdate": [1583766128878, 1585302737025, 1585654517309], "review_tmdate": [1585695498476, 1585695498205, 1585695497932], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper81/AnonReviewer2"], ["AKBC.ws/2020/Conference/Paper81/AnonReviewer1"], ["AKBC.ws/2020/Conference/Paper81/AnonReviewer4"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["3pcecaCEK-", "3pcecaCEK-", "3pcecaCEK-"], "review_content": [{"title": "KBC evaluation paper, more experiments needed", "review": "Summary: The paper proposes a new evaluation paradigm for KB completion methods that focuses on classificiation, rather than ranking. 
"review_content": [{"title": "KBC evaluation paper, more experiments needed", "review": "Summary: The paper proposes a new evaluation paradigm for KB completion methods that focuses on classification, rather than ranking. The authors construct an alternative dataset FB13k-QAQ which contains false as well as true facts. They analyse the performance of existing KBC methods (DistMult, ComplEx, etc.) on the new dataset and propose a new KBC model, which can be seen as a variant of TransE with thresholding.\n\nWhilst I agree with the general premise of developing a better way of evaluating KBC methods, I believe the paper in its current state is not ready for publication. More details below:\n\nSection 3:\n1. I'm not sure how useful type violation queries are, as those shouldn't be particularly hard to predict. Wouldn't it be better to look at ranked predictions of existing state-of-the-art models and extract incorrect queries that are ranked highly to create a hard dataset for future KBC work?\n2. Writing quality should be improved. In particular, the dataset creation process should be described in a clearer and much more succinct way.\n\nSection 4:\n1. No need to describe Algorithm 1 in such detail on almost half a page. This space could be used for additional experiments (see below).\n\nSection 5:\n1. Since BCE is used as a loss function and a logistic sigmoid is applied to every triple, this gives a natural classification threshold of 0.\n\ta) It would be interesting to see how this simple baseline threshold compares to the tuned one.\n\tb) Additionally, one could add a relation-specific bias to every scoring function that is then learned, rather than using a separate tuning strategy for the relation-specific thresholds.\n2. It would be nice to see models compared with the same number of parameters, as opposed to the same embedding dimensionality (e.g. ComplEx has 2x as many parameters as DistMult for the same embedding dimensionality).\n3. The authors don't provide an analysis of model performances on S, M, N and F queries separately.\n4. Some qualitative analysis on what kind of queries particular models struggle with would be nice to see.\n\nSection 6:\n1. The authors don't provide a clear motivation for the newly proposed model and why it should be better than the existing models. Why does A_r have to be a diagonal positive semi-definite matrix? In general, the new model does not seem like a natural add-on to a paper focused on KBC evaluation and should perhaps be separated into a different paper and replaced by more extensive experiments (see above).\n2. The proposed Region model outperforms ConvE on the F1 score and performs comparably on the MRR. Since ConvE is a relatively old model at this point, the Region model should however be compared to more recent state-of-the-art models, e.g. RotatE (Sun et al., ICLR 2019) and TuckER (Balazevic et al., EMNLP 2019).\n3. How does the proposed model compare on the existing WN18RR and FB15k-237 datasets?\n\nOverall, the abstract should be made much more succinct and the writing quality of the whole paper could be improved.", "rating": "3: Clear rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"},
{"title": "The authors convincingly argue that ranking-based metrics (like MRR) for entity-relation completion are misleading for many downstream tasks.", "review": "The main contribution of the paper is a new train/test dataset that provides negative cases and queries to fulfill. Based on this dataset, the authors show how to adapt a traditional IR methodology with F1, which is compared to current measurement techniques, especially MRR. Finally, the authors provide a variant of TransE that yields a significant improvement in F-Score.\n\nThe paper is overall interesting, well-written and easy to follow, modulo a few specific points (see below).\n\nThe idea of introducing new metrics for entity-relation completion tasks that are closer to real-life downstream tasks is compelling and clearly conveyed. Good MRR rankings were useful for entity-relation predictions initially, but as quality rises, better metrics are necessary.\n\nThe results section shows how to apply the new metric compared to standard metrics, which is convincing.\n\nDetailed comments:\n\nThe exact process used to build the new dataset is unclear. To clarify it, it would be beneficial to report the numbers after every step. Also, it would be great to add an example for step 6 to better understand what \"answerable queries\" refers to exactly, as well as one example per query category (multiple answers, single answer, no answer).\n\nAlso, it would be interesting to measure how the performance evolves with the overall size of the graph (in terms of the various measurement methods).\n\nFinally, it would be good to have a more detailed discussion regarding how the different measurement metrics differ. The text mentions that there is \"almost no correlation\"; would it be possible to give substance to this claim with some numbers and to explain this further?\n\nIf space is an issue for adding those various points, the authors could remove or shorten the simple variant of TransE.", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"},
{"title": "Interesting new evaluation, could use more analysis", "review": "Summary:\nThe authors propose a new classification-based evaluation approach for knowledge base completion models. They create new dev/test sets by adding negative examples created by a combination of filtering out entities from true facts and creating type-inconsistent entity-relation pairs. They then evaluate a handful of state-of-the-art embedding models using this new metric. Lastly, they propose a new embedding scoring function for TransE that improves performance on the classification evaluation over the original TransE.\n\nClarity:\nThe explanations of the dataset and model are clear, but could be shorter. The motivation/how it differs from prior work (like the classification evaluation used in Socher 2013) could use more explanation, or perhaps concrete examples.\n\nOriginality and Significance:\nThe evaluation metric and modification to TransE are both novel. This approach to classification seems like it would give a better indication of model quality than classifying (possibly) perturbed triples as has been done in the past.\n\nPros:\n- Useful evaluation metric for KBC\n- Methodology is clear\n\nCons:\n- Limited analysis of the evaluation. Some more discussion/inspection of why certain models perform the way they do could help show what aspects the classification captures that MRR doesn't.\n\nOther comments:\n- I would be curious to see some breakdown of how models perform on the different subsets of the evaluation data. For example, I would expect the embedding models to generally do better on the type-inconsistent queries compared to the missing entities.", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["VgFSBRVZnof", "np2furiVSPC", "7ZmwGwi9eo", "DWaDZePAqui", "SQhsbHO4AjG"], "comment_cdate": [1587565418459, 1587043143233, 1586966802266, 1586966339516, 1586965569597], "comment_tcdate": [1587565418459, 1587043143233, 1586966802266, 1586966339516, 1586965569597], "comment_tmdate": [1587565473137, 1587043143233, 1586966814392, 1586966506750, 1586966490324], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2020/Conference/Paper81/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper81/AnonReviewer2", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper81/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper81/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper81/Authors", "AKBC.ws/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Author response", "comment": "We thank the reviewer for sharing their concerns.\n\n*ComplEx results*\n\nWe checked again that the results for ComplEx (obtained, as for all other methods, with the framework of Dettmers et al., AAAI 2018) are, to the best of our knowledge, indeed valid.\n\nAs an indicator of general correctness, consider the performance on MRR, where the results for ComplEx fit into the model ranking reported elsewhere (e.g. Trouillon et al., ICML 2016; Dettmers et al., AAAI 2018) and are better than the results for DistMult. It might seem counter-intuitive that these models perform differently in the global threshold setting; however, this highlights the differences in per-relation scaling (which, as expected, disappear for MRR and per-relation thresholding).\n\nFurther investigation of tuned threshold values showed that across all models, the tuned thresholds of ComplEx models have the highest variance (i.e. they deviate most from the mean threshold value). This indicates a strong calibration mismatch among scores for different relations, which makes the global threshold setting much more challenging for some models.\n\nThe introduced evaluation thereby reveals potential for further hyper-parameter tuning, regularization adjustments, etc.\n\n*Comparison to more recent approaches*\n\nWe agree that an evaluation on a broader set of models including TuckER (Balazevic et al., EMNLP 2019) would provide a richer overview, but unfortunately we had to limit the model selection due to time restrictions. We would like to emphasize that our smaller model set contains models from different approach groups, which we consider sufficient for the showcase. Furthermore, RotatE results are reported to be comparable to ConvE when used with the same negative sampling scheme (Balazevic et al., EMNLP 2019).\n\nRegarding the TransE improvement, it should be considered independently of the SOTA models, as it focuses on a relative improvement over TransE."},
{"title": "Response after rebuttal", "comment": "I would like to thank the authors for addressing some of my concerns. Overall, I believe the paper has improved considerably compared to the initial submission. However, I still have some major concerns, due to which I stand by my decision that the paper is not acceptable in its current state:\n\n1) The performance of ComplEx for global thresholds in Table 1 seems suspiciously low compared to DistMult, given the similarity between those two models. I would expect the two models to perform similarly.\n\n2) A comparison with some more up-to-date SOTA models, such as RotatE (Sun et al., ICLR 2019) and TuckER (Balazevic et al., EMNLP 2019), is still missing. These models have been shown to improve on TransE by a great margin when comparing MRRs. It would be important to see how the proposed improvement on TransE compares against these more recent models on both MRR and the proposed classification setting."},
{"title": "Response to AnonReviewer2's review", "comment": "We thank the reviewer for the thoughtful comments.\n\nGeneral note:\n\nDue to a recently discovered bug, the revised paper reports slightly different numbers than the original submission. The conclusions are still valid.\n\nResponses to the questions raised:\n\n*Usefulness of type violations*\n=> We added additional analysis that shows that while queries based on type violations are easier for models than queries constructed by entity removal, type violations still constitute a significant challenge to all models considered. We believe they cover an aspect that is important to measure.\n\n*Use predictions of models for test set creation*\n=> We decided to use an approach that is independent of already existing models for test set construction, since we wanted to avoid any biases in the test set that would reflect biases of specific models.\nWe believe that relying only on properties of the underlying data set makes the evaluation setting more useful in the long term.\nMoreover, given that most models still struggle to reach a performance > 20%, our primary intention was not to create a particularly *hard* evaluation setup, but one that realistically measures performance in a KB Completion (i.e., KB entry prediction) setting.\n\n*Clarify data set description*\n=> We shortened the description of the data set construction and provided an additional diagram to increase readability.\n\n*No need to describe Algorithm 1*\n=> Moved to the appendix.\n\n*BCE with threshold 0.5*\n=> The global threshold experiments include runs with a threshold of 0.5.\nThis classification threshold would, however, only be natural if the evaluation setting mirrored the training setting, which is not the case for KBC.\nIt is therefore natural to find a threshold that is adapted to the evaluation setting of interest (finding new facts, rather than separating positive from negative training samples).\nWe included insights into the global thresholds used in Appendix B.2.\n\n*Relation-specific bias*\n=> Similarly, relation-specific biases would add more capacity to the model to capture positive/negative ratios of sampled facts during training.\nWhether such an additional modeling capacity is beneficial (i.e. generalizes beyond the training data), or leads to overfitting on the training data, is an interesting question that could be examined with the evaluation setup that we propose.\nIn the current work, however, we decided to use threshold tuning instead, which should have an effect similar to including a bias in the model.\n\n*Number of parameters of models*\n=> Rather than trying to establish comparisons where choices of hyper-parameters (e.g. number of parameters) are exactly the same between models, we performed a limited hyper-parameter search (across values reported as standard for this task) and compared the results.\nThe point of the comparison is to highlight that the architectures behave differently in the new evaluation setting.\n\n*Analysis of different query types*\n=> We included a detailed analysis for different query types.\n\n*Qualitative analysis*\n=> We included a qualitative analysis showing actual queries and answers found by different models.\n\n*Newly proposed model*\n=> The motivation for the newly proposed model is that A_r explicitly describes a decision boundary for positive/negative by an ellipse (which requires positive semi-definiteness).\nOtherwise, the underlying mechanism is the same translational model as in TransE.\nOur intention was to show that a simple additional component (modeling a decision boundary by an ellipse modulo a threshold) can improve performance in a classification scenario.\nWe agree that the newly proposed model should not be viewed as the main contribution of the paper, and we shortened Section 6.\n\n*Shorten abstract*\n=> We shortened the abstract to be more succinct."}, {"title": "Response to AnonReviewer1's review", "comment": "We thank the reviewer for the thoughtful comments.\n\nGeneral note:\n\nDue to a recently discovered bug, the revised paper reports slightly different numbers than the original submission. The conclusions are still valid.\n\nResponses to the questions raised:\n\n*Clarify data set description*\n=> We visualized the description of the data set construction in a corresponding diagram to increase readability.\n\n*Report numbers for query sets*\n=> We added more detailed numbers/set sizes for central steps of the data set creation.\nIn particular, we added the number of entities selected for removal, and the sizes of the sets C, I, and F (see Appendix A.2).\n\n*Examples, examples for answerable queries*\n=> We show example queries in Section 5.\nFor answerable queries, answers are shown in bold.\n\n*Measure performance w.r.t. size of the graph*\n=> While it would be a possible evaluation aspect, within the scope of our paper we tried to focus on the obtained split and did not experiment with different graph sizes.\n\n*More detailed discussion*\n=> We expanded the analysis and discussion.\n(We reformulated the statement \"almost no correlation\". What we wanted to express is simply that the rankings of methods by metric in Table 1 are not the same for different metrics.)\n\n*Shorten the simple variant of TransE*\n=> We shortened Section 6."},
{"title": "Response to AnonReviewer4's review", "comment": "We thank the reviewer for the thoughtful comments.\n\nGeneral note:\n\nDue to a recently discovered bug, the revised paper reports slightly different numbers than the original submission. The conclusions are still valid.\n\nResponses to the questions raised:\n\n*Shortening data set description*\n=> We provided a simpler description of the data set construction with a corresponding diagram to increase readability, which may, however, include some ambiguities. We believe that details are important here (e.g., what happens if an answer to a query in the test set was originally in the train split?), and that a rigorous way of describing the process is the only way to avoid misunderstandings and gaps. Therefore, a detailed description is still available in Appendix A.1.\n\n*Motivation w.r.t. prior work*\n=> We expanded the corresponding part of Section 2 (related work).\n\n*More analysis of evaluation / evaluation on subsets*\n=> We added additional quantitative analysis by splitting the test queries into those based on type violations, and those based on entity removal. We also added more qualitative analysis by inspecting concrete queries."}], "comment_replyto": ["np2furiVSPC", "7ZmwGwi9eo", "ArQYRH3jdj", "x2rOzgZGDS4", "zy0t8Jy7VmB"], "comment_url": ["https://openreview.net/forum?id=3pcecaCEK-&noteId=VgFSBRVZnof", "https://openreview.net/forum?id=3pcecaCEK-&noteId=np2furiVSPC", "https://openreview.net/forum?id=3pcecaCEK-&noteId=7ZmwGwi9eo", "https://openreview.net/forum?id=3pcecaCEK-&noteId=DWaDZePAqui", "https://openreview.net/forum?id=3pcecaCEK-&noteId=SQhsbHO4AjG"], "meta_review_cdate": 1588282274679, "meta_review_tcdate": 1588282274679, "meta_review_tmdate": 1588341533922, "meta_review_ddate": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper proposes a new evaluation for KBC where models need to decide whether to accept a new fact instead of simply ranking the possibilities. The main contribution of this work is the well-motivated evaluation that is better aligned with how these models would be used downstream in practice. There is a secondary contribution of a variant of TransE that is tailored towards the more realistic setting reflected by the evaluation. While there are concerns about the lack of more recent models, the novel method serves to highlight the goal of the new evaluation rather than to claim state-of-the-art performance.", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=3pcecaCEK-&noteId=BNW66Tp5WGb"], "decision": "Accept"}