{"forum": "pSLmyZKaS", "submission_url": "https://openreview.net/forum?id=pSLmyZKaS", "submission_content": {"keywords": ["information retrieval", "knowledge bases", "ranking", "negation"], "TL;DR": "Knowledge bases so far only contain positive information. We argue for the importance of negative information, and present two methods to mine it.", "authorids": ["AKBC.ws/2020/Conference/Paper4/Authors"], "title": "Enriching Knowledge Bases with Interesting Negative Statements", "authors": ["Anonymous"], "pdf": "/pdf/2eca04de3630ff5d6535d4d90df31f2890302ca1.pdf", "subject_areas": ["Knowledge Representation, Semantic Web and Search", "Information Extraction"], "abstract": "Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them.\n\nIn this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.", "paperhash": "anonymous|enriching_knowledge_bases_with_interesting_negative_statements", "archival_status": "Archival"}, "submission_cdate": 1581705783981, "submission_tcdate": 1581705783981, "submission_tmdate": 1588633832445, "submission_ddate": null, "review_id": ["GMbshhHGlKP", "1kMioFPp60", "dGV7JW_t2va", "UccemFisKmn"], "review_url": ["https://openreview.net/forum?id=pSLmyZKaS&noteId=GMbshhHGlKP", "https://openreview.net/forum?id=pSLmyZKaS&noteId=1kMioFPp60", "https://openreview.net/forum?id=pSLmyZKaS&noteId=dGV7JW_t2va", "https://openreview.net/forum?id=pSLmyZKaS&noteId=UccemFisKmn"], "review_cdate": [1587106616170, 1584295409687, 1585347332066, 1585369892362], "review_tcdate": [1587106616170, 1584295409687, 1585347332066, 1585369892362], "review_tmdate": [1587106616170, 1585695570859, 1585695570536, 1585695570262], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper4/AnonReviewer3"], ["AKBC.ws/2020/Conference/Paper4/AnonReviewer3"], ["AKBC.ws/2020/Conference/Paper4/AnonReviewer1"], ["AKBC.ws/2020/Conference/Paper4/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["pSLmyZKaS", "pSLmyZKaS", "pSLmyZKaS", "pSLmyZKaS"], "review_content": [{"title": "Great work", "review": "I am posting here my review from before the revision of the paper by the authors. 
All my concerns have been addressed in that revision.\n\n\n\nThis paper studies how interesting negative statements can be identified for knowledge bases (KBs). The main contributions are several ideas of how to generate negative statements, and several heuristics to rank them. \n\nThis paper sets foot into a much-needed domain of research. Negative statements are a very important issue for today's KBs, and the paper does not just formalize the problem and propose means to generate and rank such statements, but also provides user studies. The video of the demo is particularly impressive!\n\nI have three main issues with this submission: First, the methods seem to be geared exclusively to famous entities, and more specifically to famous humans. The peer-ranking works great for \"Which actors won the academy award\", but might work much less well on \"Which villages do not have a mayor\". The Google auto-completion, likewise, works great for \"Which football players won the Ballon d'Or\", but it is less clear how it works for \"Which classical musical pieces are not written in B flat major\" (assuming that there is such a Wikidata relation). Thus, the paper should more accurately be called \"enriching KBs with interesting negative statements about famous humans\".\n\nThe second issue is more fundamental: If I understand correctly, the peer-ranking method makes the closed world assumption. It computes the attributes that peers of the target entity have, and that the target entity itself does not have. These are proposed as negative statements. However, these statements are not necessarily false -- they may just be missing from the KB. The evaluation of the method ignores that problem: It asks users to rate the negative statements based on interestingness -- but does not give the user the option to say \"This statement is actually not false, it is true\". In this way, the proposed method ignores the main problem: that of distinguishing missing information from wrong information. That is surprising, because the paper explicitly mentions that problem on page 9, complaining that the Wikidata SPARQL approach has no formal foundation due to the open world assumption. It is just the same with the first proposed approach. This is what the paper itself states on Page 10: \u201cTextual evidence is generally a stronger signal that the negative statement is truly negative\u201d -- implying that the first proposed method does not always produce correct negative statements. However, the main part of the paper does not acknowledge this or evaluate whether the produced negative information is actually negative. \n\nOnly in the appendix (which is optional material), we find that the peer-based method has a \"71% out-of-sample precision\", which, if it were the required value, should be discussed in the main paper. The same goes for the value of 60% for the query-log-based method. \n\nThe third issue is the evaluation: The proposed methods should be explicitly evaluated wrt. their correctness (i.e., whether they correctly identify wrong statements) in the main part of the paper. Then, they should be compared to the baseline, which is the partial completeness assumption. This is currently not done.\n\nThe next question is how to rank the negative statements. The baseline here should be the variant of the partial completeness assumption that is used in RUDIK [Ortona et al, 2018]: It limits the partial completeness assumption to those pairs of entities that are connected in the KB. 
It says: \"If r(x,y) and r'(x,z) and not r(x,z), then r(x,z) is an interesting negative statement\". The proposed method should be compared to this competitor.\n\nThus, while the paper opens a very important domain of research, I have the impression that it oversells its contribution: by ignoring the question of missing vs. wrong statements, by not comparing to competitors, and by focusing its methods exclusively on famous humans. \n\nRelated work: \n- The \"universally negated statements\" are the opposite of the \"obligatory attributes\" investigated in [2], which thus appears relevant. \n- The work of [1] solves a similar problem to the submission: by predicting that a subject s has no more objects for a relation r than those of the KB, it predicts that all other s-r-o triples must be false.\n- It appears that the peer-ranked method is a cousin of the partial completeness assumption of AMIE, and of the popularity heuristics used in [1]. It says: \"If the KB creators took the care to annotate all your peers with this attribute, and if you had that attribute, they would for sure have annotated you as well. Since they did not, you do not have this attribute.\" This is a valid and very interesting method to generate negative statements, but it would have to be stated explicitly and evaluated for correctness.\n\nMinor: \n- It would be great to know the weights of the scores in Definition 2 also in the main paper.\n- Talking of \"text extraction\" in Section 6 is a bit misleading, because it sounds as if the data was extracted from full natural language text, whereas it is actually extracted from query logs. \n- It would be good to clarify how the Booking.com-examples were generated (manually or with the proposed method).\n\n[1] Predicting Completeness in Knowledge Bases, WSDM 2017\n[2] Are All People Married? Determining Obligatory Attributes in Knowledge Bases, WWW 2018", "rating": "9: Top 15% of accepted papers, strong accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Very important problem, but I am not sure the paper addresses it", "review": "This paper studies how interesting negative statements can be identified for knowledge bases (KBs). The main contributions are several ideas of how to generate negative statements, and several heuristics to rank them. \n\nThis paper sets foot into a much-needed domain of research. Negative statements are a very important issue for today's KBs, and the paper does not just formalize the problem and propose means to generate and rank such statements, but also provides user studies. The video of the demo is particularly impressive!\n\nI have three main issues with this submission: First, the methods seem to be geared exclusively to famous entities, and more specifically to famous humans. The peer-ranking works great for \"Which actors won the academy award\", but might work much less well on \"Which villages do not have a mayor\". The Google auto-completion, likewise, works great for \"Which football players won the Ballon d'Or\", but it is less clear how it works for \"Which classical musical pieces are not written in B flat major\" (assuming that there is such a Wikidata relation). Thus, the paper should more accurately be called \"enriching KBs with interesting negative statements about famous humans\".\n\nThe second issue is more fundamental: If I understand correctly, the peer-ranking method makes the closed world assumption. 
It computes the attributes that peers of the target entity have, and that the target entity itself does not have. These are proposed as negative statements. However, these statements are not necessarily false -- they may just be missing from the KB. The evaluation of the method ignores that problem: It asks users to rate the negative statements based on interestingness -- but does not give the user the option to say \"This statement is actually not false, it is true\". In this way, the proposed method ignores the main problem: that of distinguishing missing information from wrong information. That is surprising, because the paper explicitly mentions that problem on page 9, complaining that the Wikidata SPARQL approach has no formal foundation due to the open world assumption. It is just the same with the first proposed approach. This is what the paper itself states on Page 10: \u201cTextual evidence is generally a stronger signal that the negative statement is truly negative\u201d -- implying that the first proposed method does not always produce correct negative statements. However, the main part of the paper does not acknowledge this or evaluate whether the produced negative information is actually negative. \n\nOnly in the appendix (which is optional material), we find that the peer-based method has a \"71% out-of-sample precision\", which, if it were the required value, should be discussed in the main paper. The same goes for the value of 60% for the query-log-based method. \n\nThe third issue is the evaluation: The proposed methods should be explicitly evaluated wrt. their correctness (i.e., whether they correctly identify wrong statements) in the main part of the paper. Then, they should be compared to the baseline, which is the partial completeness assumption. This is currently not done.\n\nThe next question is how to rank the negative statements. The baseline here should be the variant of the partial completeness assumption that is used in RUDIK [Ortona et al, 2018]: It limits the partial completeness assumption to those pairs of entities that are connected in the KB. It says: \"If r(x,y) and r'(x,z) and not r(x,z), then r(x,z) is an interesting negative statement\". The proposed method should be compared to this competitor.\n\nThus, while the paper opens a very important domain of research, I have the impression that it oversells its contribution: by ignoring the question of missing vs. wrong statements, by not comparing to competitors, and by focusing its methods exclusively on famous humans. \n\nRelated work: \n- The \"universally negated statements\" are the opposite of the \"obligatory attributes\" investigated in [2], which thus appears relevant. \n- The work of [1] solves a similar problem to the submission: by predicting that a subject s has no more objects for a relation r than those of the KB, it predicts that all other s-r-o triples must be false.\n- It appears that the peer-ranked method is a cousin of the partial completeness assumption of AMIE, and of the popularity heuristics used in [1]. It says: \"If the KB creators took the care to annotate all your peers with this attribute, and if you had that attribute, they would for sure have annotated you as well. 
Since they did not, you do not have this attribute.\" This is a valid and very interesting method to generate negative statements, but it would have to be stated explicitly and evaluated for correctness.\n\nMinor: \n- It would be great to know the weights of the scores in Definition 2 also in the main paper.\n- Talking of \"text extraction\" in Section 6 is a bit misleading, because it sounds as if the data was extracted from full natural language text, whereas it is actually extracted from query logs. \n- It would be good to clarify how the Booking.com-examples were generated (manually or with the proposed method).\n\n[1] Predicting Completeness in Knowledge Bases, WSDM 2017\n[2] Are All People Married? Determining Obligatory Attributes in Knowledge Bases, WWW 2018", "rating": "5: Marginally below acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Official Blind Review #1", "review": "The paper addresses the problem of negative statements in knowledgebase. They formalize the types of negative statements: (a) grounded statement: [s,p,o] does not exist in KB, (b) not exist [s, p, o] (there's no object that satisfy s,p]. To find negative statements, they proposes two methods (a) peer-based candidate retrieval (i.e., heuristic of finding relation that is frequently populated in nearby entities but missing in the target entity) and (b) Using search logs with meta patterns (i.e., search query logs for pattern such as \"Why XXX not\", and find retrieved queries such as \"Why XXX never won the Oscar). \n\nI agree with the motivation behind this work \u2014 studying negative statement is a problem worth pursuing, especially to build a high precision QA system that does not hallucinate. Having said that, I have multiple concerns with the current version of the paper.\n\n(1) Evaluation is not rigorous. Both extrinsic evaluation (entity summarization, question answering) is very small scale. Both evaluation only on five examples. I would rather preferred the paper to focus on one evaluation, but do the study much more carefully and in larger scale, reporting statistical significant and so forth. \n\n(2) The notion of \"Interesting\" is very subjective. The paper does not even try to define what counts as an \"interesting\" negative statement. Is it highly likely fact that is not true? Does it mean that it is surprising and unknown? \n\n(3) In section 5, What is nDCG? I don't think it is defined, and I don't know what Table 5 is talking about. \n\n(4) The paper releases the negative statement datasets. I think this could be very valuable to the community if it is released with manual annotations, even for a small subset (2-3K examples). As is, this is model prediction that we don't have a good sense of accuracy, so not very useful. \n\nMinor point:\n\n- In section 5.2, it talks about randomly sampling 100 popular humans. What's the definition of \"popular\"? In the sentence afterwards, it talks about \"expressible\" in Wikidata. Does it mean it involves predicate can be mapped to Wikidata by string matching? by manual matching?\n- In Section 4, what's the difference between \"popularity\" and \"frequency\"?\n- It would be interesting to see the actual values for hyperparameters for Definition 2. 
What do you mean by \"withheld training data\"?", "rating": "3: Clear rejection", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Well-motivated work in a direction worth exploring", "review": "This paper studies constructing interesting negative statements about entities to enrich knowledge bases. The authors propose two main approaches and evaluate using both crowdsourcing and extrinsic evaluations on entity summarization and question answering.\n\nPros:\n- I really like the idea of adding negative statements and the paper provides good motivations for why these are necessary for different domains and downstream tasks. I think this work lays a good starting point for a line of follow-up studies.\n- The authors explore two approaches from the two main regimes to generate interesting negative statements about entities utilizing some heuristics and do experiments to show their respective weaknesses and strengths.\n- Dataset collected is of large scale and can be potentially be used for learning tasks on interesting negative statements.\n\nCons:\n- The extrinsic evaluations seem a bit synthetic and small-scale. It would be interesting to see how actually enriching a KB using these negative statements could help, for example, solving a large open-domain QA dataset.\n- More complicated baselines could be included such as recent transformer-based language models on the open-domain QA evaluation.\n\nIn summary, I think this paper is well-written, well-motivated, and lays a good starting point for an important omitted direction in KB-related research.", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["H7AllCSHlwH", "Y8FUg6h8_sh", "f7o5PiJKvfd"], "comment_cdate": [1586447151258, 1586447104801, 1586447008989], "comment_tcdate": [1586447151258, 1586447104801, 1586447008989], "comment_tmdate": [1586525688078, 1586525089397, 1586524316535], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2020/Conference/Paper4/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper4/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper4/Authors", "AKBC.ws/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Thank you for the comments. We have revised parts of the paper.", "comment": "Thanks for the comments. We have revised parts of the paper, with changes marked in blue. \n\nEntity scope: \nWe have expanded the extrinsic evaluation in Section 6.4, for the task of entity summarization, from 5 entities of type human to 100 diverse entities (40 humans, 30 organizations, 30 artworks). Examples are shown in Table 3.\nAs for prominent vs. long-tail entities, we added a discussion in Section 7 (Experimental Evaluation). \n\nRole of CWA and PCA:\nWe do indeed rely on CWA among groups of notable peers. We clarified this in the paper.\nPCA is a reasonable reference point, too. Therefore, we added an experimental comparison of the precision for CWA and PCA in Section 6.2. PCA produces results with significantly higher accuracy than CWA. Thanks for making this suggestion in the review.\n\nWe emphasize that the main contribution of our method lies in identifying *interesting* negative statements; we are not just adding to the vast literature on truth prediction. 
We revised the Design Space section (Section 2) by better clarifying that predicting correct negative statements alone misses out on the key point.\n\nComparison with PCA:\nWe added a post-hoc evaluation of filtering by PCA (Section 6.2).\n\nPositioning against RuDiK:\nWe added a new section on Related Work (Section 3) where we discuss RuDiK and related approaches.\n\nMore different types of entities:\nThe web interface can now handle 10 types (including organizations and artworks). The demo video shows examples: https://bit.ly/39Cn2ES. \n\nMinor comments:\nWe added a discussion of references [1] and [2] suggested by the reviewer in a new section on related work (Section 3).\nWe move the discussion of weights for the ensemble ranking into the main body in Section 6.1.\nWe clarified in Section 7 that the Booking.com examples were generated with the proposed peering method, using 50 manually chosen comparable hotels as reference.\n"}, {"title": "Thank you for your helpful comments. We have revised parts of the paper.", "comment": "Thanks for the comments. We have revised parts of the paper, with changes marked in blue. \n\nMost importantly, we have expanded the extrinsic evaluation in Section 6.4, for the task of entity summarization, from 5 entities of type human to 100 diverse entities (40 humans, 30 organizations, 30 artworks). Examples are shown in Table 3.\n\nNotion of interestingness:\nWe agree that considering specific tasks would make the notion of interestingness more tangible. As KBs like Wikidata etc. are not built for one specific task, though, our annotation guidelines pursued a middle stance and grounded interestingness in an abstract task: \u201cImagine you are writing a biography about X. Is this statement interesting enough to be added to the biography?\u201d. For the hotels (Section 7) it is easier to phrase a task like booking decisions.\n\nDefinition of NDCG:\nWe added the definition of the NDCG ranking metric on page 7. We clarified that Table 5 (now Table 7) exemplifies the peer-based inference with a toy example.\n\nCorrectness of negative statements:\nWe asked the crowd to manually annotate a subset of 1000 statements for correctness, discussed in Section 6.2, with data accessible at https://bit.ly/2V0eaUo .\n\nMinor comments:\nWe ground \u201cpopularity\u201d in the number of views of the Wikipedia page of a given entity. An entity is considered popular if its number of views are greater than or equal to the average number of views of entities of the same type. Frequency of a property/predicate is defined as the number of statements in the KB including this property. \nWe moved the discussion of weights for the ensemble ranking into the main body in Section 6.1. \u201cWithheld training data\u201d was corrected to \u201cdata withheld from training\u201d, that is, we properly split the data into train/validation/test.\nIn Section 6.3: we now clarify in Appendex C.2 that \u201cpopular humans\u201d means humans with Wikipedia page views higher than the average page views for entities of type human. We also clarify in Appendix C.2 that the mapping of Wikidata expressible properties is done manually for the 30 most frequent properties in our result set. \n"}, {"title": "Thank you for your comments, we have revised parts of the paper.", "comment": "Thanks for the comments. We have revised parts of the paper, with changes marked in blue. 
Most importantly, we have expanded the extrinsic evaluation in Section 6.4, for the task of entity summarization, from 5 entities of type human to 100 diverse entities (40 humans, 30 organizations, 30 artworks). Examples are shown in Table 3."}], "comment_replyto": ["1kMioFPp60", "dGV7JW_t2va", "UccemFisKmn"], "comment_url": ["https://openreview.net/forum?id=pSLmyZKaS&noteId=H7AllCSHlwH", "https://openreview.net/forum?id=pSLmyZKaS&noteId=Y8FUg6h8_sh", "https://openreview.net/forum?id=pSLmyZKaS&noteId=f7o5PiJKvfd"], "meta_review_cdate": 1588300905359, "meta_review_tcdate": 1588300905359, "meta_review_tmdate": 1588341534766, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper explores a new direction in knowledge base construction: how to identify *interesting* negative statements for KBs. Towards this general goal, two approaches have been developed: peer-based statistical inference and pattern-based text extraction. Two datasets of negative knowledge bases are provided, along with an extrinsic QA evaluation.\n\nThere has been quite a bit of discrepancy among the reviews. All the reviewers appreciated that this paper addresses a very important (and previously underestimated) problem but there are lots of discussion around the evaluation: (1) whether the current evaluation is too small-scale/non-rigorous, (2) whether the closed-world assumption is reasonable or not, (3) the correctness of evaluation of extracted KBs. \n\nThe authors have made substantial revisions during the rebuttal phase and we believe most of these issues have been addressed. Therefore, we recommend accepting this paper.", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=pSLmyZKaS&noteId=jVf9xK4f_c"], "decision": "Accept"}