{"forum": "HJxeGb5pTm", "submission_url": "https://openreview.net/forum?id=HJxeGb5pTm", "submission_content": {"title": "OPIEC: An Open Information Extraction Corpus", "authors": ["Kiril Gashteovski", "Sebastian Wanner", "Sven Hertling", "Samuel Broscheit", "Rainer Gemulla"], "authorids": ["k.gashteovski@uni-mannheim.de", "sebastian.wanner@mail.uni-mannheim.de", "sven@informatik.uni-mannheim.de", "broscheit@informatik.uni-mannheim.de", "rgemulla@uni-mannheim.de"], "keywords": ["open information extraction", "text analytics"], "TL;DR": "An Open Information Extraction Corpus and its in-depth analysis", "abstract": "Open information extraction (OIE) systems extract relations and their\n  arguments from natural language text in an unsupervised manner. The resulting\n  extractions are a valuable resource for downstream tasks such as knowledge\n  base construction, open question answering, or event schema induction. In this\n  paper, we release, describe, and analyze an OIE corpus called OPIEC, which was\n  extracted from the text of English Wikipedia. OPIEC complements the available\n  OIE resources: It is the largest OIE corpus publicly available to date (over\n  340M triples) and contains valuable metadata such as provenance information,\n  confidence scores, linguistic annotations, and semantic annotations including\n  spatial and temporal information. We analyze the OPIEC corpus by comparing its\n  content with knowledge bases such as DBpedia or YAGO, which are also based on\n  Wikipedia. We found that most of the facts between entities present in OPIEC\n  cannot be found in DBpedia and/or YAGO, that OIE facts \n  often differ in the level of specificity compared to knowledge base facts, and\n  that OIE open relations are generally highly polysemous. 
We believe that the\n  OPIEC corpus is a valuable resource for future research on automated knowledge\n  base construction.", "archival status": "Archival", "subject areas": ["Information Extraction", "Applications: Other"], "pdf": "/pdf/1c66d07da040d22f633d5876a09eeac2f12a75c9.pdf", "paperhash": "gashteovski|opiec_an_open_information_extraction_corpus", "html": "https://www.uni-mannheim.de/dws/research/resources/opiec/", "_bibtex": "@inproceedings{\ngashteovski2019opiec,\ntitle={{\\{}OPIEC{\\}}: An Open Information Extraction Corpus},\nauthor={Kiril Gashteovski and Sebastian Wanner and Sven Hertling and Samuel Broscheit and Rainer Gemulla},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=HJxeGb5pTm}\n}"}, "submission_cdate": 1542459656480, "submission_tcdate": 1542459656480, "submission_tmdate": 1580939651559, "submission_ddate": null, "review_id": ["B1eTN16XfN", "Hkxg6IKqgV", "rylqOh2HgE"], "review_url": ["https://openreview.net/forum?id=HJxeGb5pTm&noteId=B1eTN16XfN", "https://openreview.net/forum?id=HJxeGb5pTm&noteId=Hkxg6IKqgV", "https://openreview.net/forum?id=HJxeGb5pTm&noteId=rylqOh2HgE"], "review_cdate": [1547058996860, 1545406136128, 1545092210143], "review_tcdate": [1547058996860, 1545406136128, 1545092210143], "review_tmdate": [1550269650878, 1550269650665, 1550269626226], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HJxeGb5pTm", "HJxeGb5pTm", "HJxeGb5pTm"], "review_content": [{"title": "OPIEC: An Open Information Extraction Corpus", "review": "In this paper, the authors build a new corpus for information extraction which is larger comparing to the prior public corpora and contains information not existing in current corpora. The dataset can be useful in other applications. The paper is well written and easy to follow. It also provides details about the corpus. However, there are some questions for the authors: 1) It uses the NLP pipeline and the MinIE-SpaTe system. When you get the results, do you evaluate to what extent that the results are correct? 2) In Section 3.4, the author mentioned the correctness is around 65%, what do you do for those incorrect tuples? 3) Have you tried any task-based evaluation on your dataset?", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Good paper on producing a triple store from Wikipedia articles.", "review": "This paper presents a dataset of open-IE triples that were collected from Wikipedia with the help of a recent extraction system. \n\nThis venue seems like an ideal fit for this paper and I think it would make a good addition to the conference program. While there is little technical originality, the overall execution of the experimental part is quite good and I like that the authors focused in their report on describing how they filtered the output of the employed IE system and that they present interesting examples from the conducted filtering steps.\n\nI particularly liked the section on comparing the new resource to the existing knowledge bases from the same source (YAGO, DBpedia), I think it makes a lot of sense to pick resources that leverage other parts of Wikipedia (category system, ...) 
and not the main article text, and to look into how coverage of the these different approaches relates.\n\nIt would have been nice to also compare against other datasets/triple stores/... that used open-IE to extract from Wikipedia. A couple of references discussing such are listed in the paper, e.g., DefIE or KB-Unify seem like good candidates.\n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Interesting analysis, but may not be enough content", "review": "The paper describes the creation of OPIEC -- an Open IE corpus over English Wikipedia.\nThe corpus is created in a completely automatic manner, by running an off-the-shelf OIE system (MinIE), which yields 341M SVO tuples. Following, this resource is automatically filtered to identify triples over named entities (using an automatic NER system, yielding a corpus of 104M tuples), and only entities which match entries in Wikipedia (5.8M tuples).\n\nOn the positive side, I think that resources for Open IE are useful, and can help spur more research and analyses.\n\nOn the other hand, however, I worry that OPIEC may be too skewed towards the predictions of a specific OIE system, and that the work presented here consists mainly of running off-the-shelf can be extended to contain more novel substance, such as a new Open IE system and its evaluation against this corpus, or more dedicated manual analysis. For example, I believe that most similar resources (e.g., ReVerb tuples) were created as a part of a larger research effort.\n\nThe crux of the matter here I think, is the accuracy of the dataset, reported tersely in Section 5.3, in which a manual analysis (who annotated? what were their guidelines? what was their agreement?) finds that the dataset is estimated to have 60% correct tuples. Can this be improved? Somehow automatically verified?\n\nDetailed comments:\n\n- I think that the paper should make it clear in the title or at least in the abstract that the corpus is created automatically by running an OIE system on a large scale. From current title and abstract I was wrongfully expecting a gold human-annotated dataset.\n\n- Following on previous points, I think the paper misses a discussion on gold vs. predicted datasets for OIE, and their different uses. Some missing gold OIE references:\nWu and Weld (2010),  Akbik and Loser (2012), Stanovsky and Dagan (2016).\n\n- Following this line, I don't think I agree with the claim in Section 4.3 that \"it is the largest corpus with golden annotations to date\". As far as I understand, the presented corpus is created in a completely automated manner and bound to contain prediction errors.\n\n- I think that some of the implementation decisions seem sometimes a little arbitrary. For instance, for the post-processing example which modifies \n(Peter Brooke; was a member of; Parliament) to (Peter Brooke; was ; a member of Parliament), I think I would've preferred the original relation, imagining a scenario where you look for all members of parliament (X; was a member of; Parliament), or all of the things Peter Brooke was a member of (Peter Brooke; was a member of; Y) seems more convenient to me.\n\nMinor comments & typos:\n\n- I assume that in Table 1, unique relations and arguments are also in millions? 
I think this could be clearer, if that's the indeed the case.\n\n- I think it'd be nice to add dataset sizes to each of the OPIEC variants in Fig 1.\n\n- End of Section 3.1 \"To avoid loosing this relationship\" -> \"losing this relationship\"\n\n- Top of P. 6: \"what follows, we [describe] briefly discuss these\"\n\n- Section 4.5 (bottom of p. 9) \"NET type\" -> \"NER type\"?", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["ryxiqG0gV4", "S1lPJM0xVN", "B1lrmbClEE"], "comment_cdate": [1548964499332, 1548964318999, 1548964124855], "comment_tcdate": [1548964499332, 1548964318999, 1548964124855], "comment_tmdate": [1548964635527, 1548964318999, 1548964124855], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper32/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper32/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper32/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Clarifications", "comment": "\"I worry that OPIEC may be too skewed towards the predictions of a specific OIE system\"\n\n-> Yes, OPIEC is based on a modified variant of MinIE, which allows us to provide syntactic annotations, semantic annotations, and confidence scores. The pipeline used to create OPIEC can be applied to other datasets and (perhaps with minor modifications) with other OIE systems as well. We plan to publish the source code of this pipeline along with the corpus and data access tools along with OPIEC.\n\n\n\"the work presented here consists mainly of running off-the-shelf can be extended to contain more novel substance, such as a new Open IE system and its evaluation against this corpus\". \n\n-> We use an improved version of MinIE, which adds space-time awareness + confidence score. MinIE, and in particular the OPIEC corpus, is indeed being used in other research projects already. The goal of this paper is to introduce the dataset to other researchers and provide insight into its properties.\n\n\n\"The crux of the matter here I think, is the accuracy of the dataset, reported tersely in Section 5.3, in which a manual analysis (who annotated? what were their guidelines? what was their agreement?) finds that the dataset is estimated to have 60% correct tuples. Can this be improved? Somehow automatically verified?\"\n\n-> We added subsection 4.6 to provide more information about the confidence scores that OPIEC provides. These scores allow filtering the corpus for only high-confidence triples, for example. \n\n\n\"I think that the paper should make it clear in the title or at least in the abstract that the corpus is created automatically by running an OIE system on a large scale. From current title and abstract I was wrongfully expecting a gold human-annotated dataset.\"\n\"I don't think I agree with the claim in Section 4.3 that \"it is the largest corpus with golden annotations to date\". As far as I understand, the presented corpus is created in a completely automated manner and bound to contain prediction errors.\"\n\n-> We carefully revisited the paper to be more precise. For example, we now always refer to \u201cgolden annotations for arguments\u201d.\n\n\n\"Following on previous points, I think the paper misses a discussion on gold vs. 
predicted datasets for OIE, and their different uses. Some missing gold OIE references: Wu and Weld (2010),  Akbik and Loser (2012), Stanovsky and Dagan (2016).\"\n\n-> To the best of our knowledge, no data was released from Wu and Weld (2010) and Akbik and Loser (2012). We now mention the benchmark dataset of Stanovsky and Dagan (2016) in the related work section. The focus of this work is on large OIE resources, however. \n\n\n- \"I think that some of the implementation decisions seem sometimes a little arbitrary. For instance, for the post-processing example which modifies (Peter Brooke; was a member of; Parliament) to (Peter Brooke; was ; a member of Parliament), I think I would've preferred the original relation, imagining a scenario where you look for all members of parliament (X; was a member of; Parliament), or all of the things Peter Brooke was a member of (Peter Brooke; was a member of; Y) seems more convenient to me.\"\n\n-> We agree with the reviewer in this particular example. However, \u201cMember of Parliament\u201d is a concept in Wikipedia. OPIEC avoids to split concepts or named entities across arguments/relations; such an approach is often erroneous.\n\n\n\u201cI assume that in Table 1, unique relations and arguments are also in millions? I think this could be clearer, if that's the indeed the case.\u201d\n\n-> Fixed\n\n- I think it'd be nice to add dataset sizes to each of the OPIEC variants in Fig 1.\n-> Fixed\n\n- Typos -> fixed\n"}, {"title": "Clarifications", "comment": "\"1) It uses the NLP pipeline and the MinIE-SpaTe system. When you get the results, do you evaluate to what extent that the results are correct?\"\n\"2) In Section 3.4, the author mentioned the correctness is around 65%, what do you do for those incorrect tuples?\"\n\n-> We report on precision and confidence scores in the new subsection 4.6. \n\n\n\"3) Have you tried any task-based evaluation on your dataset?\"\n\nThe OPIEC corpus is being used in other research projects. The goal of this paper is to introduce the dataset to other researchers and provide insight into its properties.\n\n"}, {"title": "New subsection added addressing common reviewers comments", "comment": "Thank you very much for your helpful and insightful comments! All of you asked for more information about precision; we added Section 4.6 \u201cPrecision and Confidence Score\u201d to provide more details and statistics about precision as well as the confidence scores provided with the dataset."}], "comment_replyto": ["rylqOh2HgE", "B1eTN16XfN", "HJxeGb5pTm"], "comment_url": ["https://openreview.net/forum?id=HJxeGb5pTm&noteId=ryxiqG0gV4", "https://openreview.net/forum?id=HJxeGb5pTm&noteId=S1lPJM0xVN", "https://openreview.net/forum?id=HJxeGb5pTm&noteId=B1lrmbClEE"], "meta_review_cdate": 1549959125154, "meta_review_tcdate": 1549959125154, "meta_review_tmdate": 1551128375341, "meta_review_ddate ": null, "meta_review_title": "A nice dataset paper", "meta_review_metareview": "This paper describes a new Open IE corpus over English Wikipedia. All the reviewers agree this paper is suitable for this venue and the dataset is useful. Overall, the paper is well-written and the experiments are convincing. Despite the novelty of this paper is  relatively thin, it is a decent paper.  ", "meta_review_readers": ["everyone"], "meta_review_writers": [], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=HJxeGb5pTm&noteId=SJe6AyZxBN"], "decision": "Accept (Poster)"}