{"forum": "EQrvONEwh", "submission_url": "https://openreview.net/forum?id=EQrvONEwh", "submission_content": {"keywords": ["Cancer genetics", "biomedical nlp", "information extraction", "clinical informatics", "knowledge base construction"], "authorids": ["AKBC.ws/2020/Conference/Paper50/Authors"], "title": "Semi-Automating Knowledge Base Construction for Cancer Genetics", "authors": ["Anonymous"], "pdf": "/pdf/ac2104eed65661c20cc9c32a40e3001704b7ec23.pdf", "subject_areas": ["Information Extraction", "Applications"], "abstract": "The vast and rapidly expanding volume of biomedical literature makes it difficult for domain experts to keep up with the evidence. In this work, we specifically consider the exponentially growing subarea of genetics in cancer. The need to synthesize and centralize this evidence for dissemination has motivated a team of physicians (with whom this work is a collaboration) to manually construct and maintain a knowledge base that distills key results reported in the literature. This is a laborious process that entails reading through full-text articles to understand the study design, assess study quality, and extract the reported cancer risk estimates associated with particular hereditary cancer genes (i.e., \\emph{penetrance}). In this work, we propose models to automatically surface key elements from full-text cancer genetics articles, with the ultimate aim of expediting the manual workflow currently in place.\n\nWe propose two challenging tasks that are critical for characterizing the findings reported cancer genetics studies: (i) Extracting snippets of text that describe \\emph{ascertainment mechanisms}, which in turn inform whether the population studied may introduce bias owing to deviations from the target population; (ii) Extracting reported risk estimates (e.g., odds or hazard ratios) associated with specific germline mutations. The latter task may be viewed as a joint entity tagging and relation extraction problem. 
To train models for these tasks, we induce distant supervision over tokens and snippets in full-text articles using the manually constructed knowledge base. We propose and evaluate several model variants, including a transformer-based joint entity and relation extraction model to extract \\texttt{} pairs. We observe strong empirical performance, highlighting the practical potential for such models to aid KB construction in this space. We ablate components of our model, observing, e.g., that a joint model for \\texttt{} fares substantially better than a pipelined approach. ", "paperhash": "anonymous|semiautomating_knowledge_base_construction_for_cancer_genetics"}, "submission_cdate": 1581705804756, "submission_tcdate": 1581705804756, "submission_tmdate": 1586648874435, "submission_ddate": null, "review_id": ["vvxONIzZ3ch", "2koB_wGMpf5", "ZULMEqsIecl"], "review_url": ["https://openreview.net/forum?id=EQrvONEwh&noteId=vvxONIzZ3ch", "https://openreview.net/forum?id=EQrvONEwh&noteId=2koB_wGMpf5", "https://openreview.net/forum?id=EQrvONEwh&noteId=ZULMEqsIecl"], "review_cdate": [1585601892339, 1585422028848, 1585554933118], "review_tcdate": [1585601892339, 1585422028848, 1585554933118], "review_tmdate": [1587064341413, 1585695521572, 1585695521302], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper50/AnonReviewer4"], ["AKBC.ws/2020/Conference/Paper50/AnonReviewer1"], ["AKBC.ws/2020/Conference/Paper50/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["EQrvONEwh", "EQrvONEwh", "EQrvONEwh"], "review_content": [{"title": "An application of existing techniques to biomedical IE task", "review": "This paper applies modern deep NLP methods (especially transformers) to information extraction from biomedical texts - specifically those dealing with cancer genomics. 
The task is to extract two pieces of information - sentences that describe ascertainment, and entity/relation extraction for risk ratios. To perform this, the authors use distant supervision from a manually curated KB. For ascertainment supervision, the authors go into some detail, although I have no idea how supervision for relation extraction is derived. The manually curated KB seems to have standardised names for gene mutations which may not occur exactly in the document. How are they matched to entity mentions in the document itself?\n\nData release: Do the authors intend to release the data, which is arguably the most important part of this paper?\n\nClarity: The section describing the joint ER model needs to be re-written. The authors make token-level decisions for entity classification. How is this used to extract actual entities, which may be multi-token? Is it possible that a sentence has more than 2 entities (biomedical texts are infamous for long sentences)? If each token is classified, what is the role of enumerating spans? Why not use a CRF, for example?\n\nFor the relation extraction part, why is only the context between the two entities considered (and not the words on either side of them) in equation 3?\n\nIn the disjoint model, what do we mean by discarding the sentence? And what exactly are we concatenating? Why not just use the [CLS] token embedding for classification?\n\nEvaluation of DS: Can you provide any evaluation of the efficacy of the distant supervision? In general, how many false positives occur during matching? Also, how was distant supervision generated for the entity/relation extraction part?\n\nCross sentence: Can you comment on how much information might be missed if we only do entity/relation extraction within a sentence? Are there relations that may be extracted only by considering information across sentences?\n\nLoss of Info: How much information is lost by not considering tables? Are there risk ratios never reported in text? 
How prevalent are they?\n\nIn general, I believe it is an interesting application paper that shows distant supervision can be employed reasonably well in the biomedical domain. But the writing leaves one a bit confused about the exact methodology.\n\n", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Meaningful application, good method design, and comprehensive experiments, but would like to see a test on unlabeled data.", "review": "This paper applies state-of-the-art NLP techniques to (semi-)automate information extraction for cancer genetics. \n\nQuality: Good\n\nClarity: Pretty clear and easy to read\n\nOriginality: The authors claim that this is the first effort to (semi-)automate the extraction of key evidence from full-text cancer genetics papers.\n\nSignificance of this work: Its significance lies in the application of NLP techniques for cancer genetics KBC. \n\nPros:\n1.\tThe potential application is meaningful, which will help physicians construct and maintain a cancer genetics KB. \n2.\tThe designed methods and evaluations are reasonable. For extracting snippets of ascertainment text, they generate noisy labels for each sentence from human-extracted snippets, and thus convert this task to a classification task; for extracting risk estimates of germline mutations, they propose a joint model to extract gene and OR entities from each sentence and connect them by predicting whether each pair of them has a positive or negative relation. \n3.\tThe experiments are comprehensive and the results are good. They show that their methods surpass some baselines and achieve the best performance. Some examples and an ablation study are included in the Appendix. \n4.\tComprehensive literature review\n5.\tGood writing and clear delivery\n\nCons:\n1.\tThe potential application has not really been tested yet. 
Even though they show that their methods perform very well on their human-labeled dataset, I would like to see the real usage of these methods on unlabeled data. I would suggest the authors apply their methods to construct a KB from other cancer genetics papers and do a human evaluation to see the real-world usage of these methods. Or, at least, include some extracted information from unlabeled data in the appendix.\n2.\tSome details are missing:\na.\tTo derive the labels for ascertainment classification, the author defines three types of sentence representations; are they combined to compute the cosine similarity?\nb.\tThe author mentioned the \u201cfalse positives\u201d for this labeling method; I would like to see some solutions to this problem. \nc.\tWhat is the Matthews correlation coefficient (MCC)? Why do you want to use it besides F1, P, R? \n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Successful application of BERT-based models to new tasks, but with limited novelty in the method", "review": "This work addresses KB construction in the biomedical domain. Specifically, it proposes two tasks in the cancer genetics domain: (1) extracting text snippets about ascertainment; (2) extracting reported risk estimates for different germline mutations. They first created distant supervision based on an existing manually extracted KB. For (1), they showed that a classifier using a BERT-based (especially SciBERT-based) sentence representation significantly outperforms baseline models; for (2), they used a simple combination of BERT token embeddings and a dense layer to jointly learn to classify spans into entities and their relation type, which performs better than SVM baselines and disjoint learning baselines. \n\nStrength:\n\n1. This paper proposes two tasks with real-world applications, and prepared reasonable-size datasets. \n\n2. 
The paper proposes models based on BERT variants that significantly outperform simple baselines. \n\nWeakness:\n\n1. The novelty in the method is limited because the techniques used are a straightforward combination of existing approaches, for example, SciBERT, using the sum of token representations as candidate entity representations, etc.\n\n2. There aren't many new insights about the methods. The advantage of joint learning and the advantage of SciBERT vs. BERT seem unsurprising. The paper could benefit from more error analysis of the BERT-based models, or from comparing more variants of how to use the BERT token representations (for example, how to combine them into entity representations), which could help readers better understand the weaknesses of the current methods and potential directions for improvement. \n\nQuestions:\n\nSince the ground truth is created using distant supervision, which is imperfect (for example, the paper points out that there are many false positives), how do you ensure the evaluation is not influenced by errors in the distant supervision? 
\n\ntypos:\n\nabstract:\n\"We propose two challenging tasks that are critical for characterizing the findings reported cancer genetics studies\" ==> \"...reported in cancer genetics...\"\n\npage 5 bottom:\n\"There can multiple metrics reported throughout the ...\" ==> \"There can be multiple...\"\n", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["lEz4ndmLUPq", "ZZDqHwf6jVq", "lMVcn6AgGwB", "Rc-iuvHEzu"], "comment_cdate": [1587064401842, 1586384488140, 1586384610336, 1586383948250], "comment_tcdate": [1587064401842, 1586384488140, 1586384610336, 1586383948250], "comment_tmdate": [1587064401842, 1586384618082, 1586384610336, 1586384009193], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2020/Conference/Paper50/AnonReviewer4", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper50/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper50/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper50/Authors", "AKBC.ws/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Scores updated.", "comment": "I have updated my score to 6 conditioned on authors making the necessary changes to make the paper more clear."}, {"title": "Response to reviewer", "comment": "Thank you for your comments on our work, we appreciate the considered feedback. We agree that the primary contribution here is not a novel method --- although we do adopt and extend state-of-the-art methods --- however, we feel that the contribution of bringing these components together and evaluating joint, BERT-based extraction models on a meaningful, novel KB task, and showing that this can be trained via weak/distant supervision, is an in-scope and useful contribution for AKBC. 
\n\nTo your question: It is reasonable to assume that distant supervision might induce some errors, and in general is imperfect compared to direct supervision. The performance of our models on the test set (derived from a set of unused/new documents), however, seems to indicate that DS does not significantly affect the overall quality of the results. Furthermore, we assess a set of misclassified examples (false positives/negatives) in the Appendix. Those examples do indeed indicate that while some sentences might be misclassified, the prediction still relates to the original concepts. \n\nTypos: Thanks for pointing those out. We\u2019ll fix them in the revised manuscript. "}, {"title": "Response to reviewer", "comment": "Thank you for your careful comments on our work; we appreciate your time and effort. Many of the issues you raise concern the clarity of presentation: we agree that the presentation of the work can be improved, and we believe that we can adequately do so for a camera-ready version of the paper, should it be accepted. We address your points individually below. \n\n1. Data Release: Yes! We will release all manual annotations provided to us by physicians (as a CSV). Full-text articles can be accessed via their PubMed IDs, which we will share. However, one would (unfortunately) require some form of institutional access to download the subset of these that are behind a paywall (this, of course, is beyond our control). We will point to a repository comprising the annotations, model code to work with this, and documentation in the camera-ready version of this paper, should it be accepted. We agree that this is an important contribution of the work.\n\n2. Clarity: We will rewrite this section to clarify these points. Briefly:\nThe entities we\u2019re specifically looking at here are the names of the genes (single token, e.g. 
brca1, chek2, brca2, etc.) and numeric risk estimates (OR=6.5, RR=6.6, etc.), which are identified by matching them directly to their corresponding annotation.\nAfter identifying entities within each span (and yes, there can be more than two), each pair (name, numeric qty) is checked for a potential relationship. Enumerating this way allows us to go over the entire document efficiently, avoiding unusually long sentences (as you mentioned).\n\n3. Relation extraction part: In Equation 3, we do in fact model context-level information by incorporating the [CLS] vector. There is, however, an emphasis on the context between the two entities (localized context) between which we are trying to determine a relationship. \n\n4. Disjoint model: Section 3.4 (last para): we do not discard the sentence itself; the contextual representation is indeed preserved. We simply discard the additional entity representations concatenated in the previous step. This model was trained to benchmark the performance of our joint model. We will try to convey this more effectively in the revised manuscript. \n\n5. Regarding DS: We apologise for the confusion, but we do not really use distant supervision for the entity-relation extraction task. These quantities are derived from the manual annotations provided to us by physicians. DS is only utilized for the first task of ascertainment classification (specified in Table 1, Section 3.2 para 2, Section 3.3 para 1). We will be more explicit about this in the writing. \n\n6. Loss of info: This is a valid and interesting observation, and could be a potential extension of our work. However, the scope of our work was to extract information from plain text over the full length of these scientific articles. Cross-sentence referencing (over very large documents) and extraction from tables would probably require fundamentally different methods, which was not our focus here. We do, however, hope that our first effort in this domain spurs additional work in the area. 
Furthermore, we strictly consider penetrance papers only (meaning all of them should contain risk estimate information). We will integrate a discussion of this into the revised manuscript.\n\n7. Finally, we would like to clarify that standardized (formal) names for genes do indeed appear exactly in the document, especially in the context of reporting risk estimates. As an example, we could consider BRCA (breast cancer type susceptibility protein, tumor suppressor gene family), commonly called the \u201ccaretaker gene\u201d. However, BRCA1 and BRCA2 are unrelated proteins, and have different risk estimates reported for different populations; therefore, the use of colloquial names in formal scientific text is practically non-existent. Additionally, we are also provided with a look-up table that matches all the common names to their respective genes. And these entity mentions (estimates) are matched directly via the annotations that are provided to us. "}, {"title": "Response to reviewer", "comment": "Thank you for your encouraging remarks and detailed comments. A few clarifications from our end:\n\n1. Regarding sentence representations: The three types of sentence representations reported in Section 3.3 are evaluated independently of each other. We compute cosine similarity using all three of them and report results in Table 4. We\u2019ll clarify this further in the revised manuscript. \n2. MCC is generally regarded as a balanced measure which can be used even if the classes are of very different sizes (as in the case of the ascertainment classification task). It is simply another metric we report in addition to F, P, R. \n\n3. Real-world evaluation: We agree that this would be ideal, but human evaluation in practice will entail a long-term effort that integrates the models into practice over a relatively long period; we hope to perform such an evaluation eventually, but believe the current evaluations suffice to demonstrate the promise of the approach. 
We would like to point out that all of the examples in the Appendix are in fact from an unseen test set, derived from a set of documents unused at training/validation time. "}], "comment_replyto": ["lMVcn6AgGwB", "ZULMEqsIecl", "vvxONIzZ3ch", "2koB_wGMpf5"], "comment_url": ["https://openreview.net/forum?id=EQrvONEwh&noteId=lEz4ndmLUPq", "https://openreview.net/forum?id=EQrvONEwh&noteId=ZZDqHwf6jVq", "https://openreview.net/forum?id=EQrvONEwh&noteId=lMVcn6AgGwB", "https://openreview.net/forum?id=EQrvONEwh&noteId=Rc-iuvHEzu"], "meta_review_cdate": 1588281925295, "meta_review_tcdate": 1588281925295, "meta_review_tmdate": 1588341534463, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "The paper addresses the novel task of information extraction from cancer genomics. The reviewers have applauded the important and meaningful application area, and the comprehensive experimental design that beats the state of the art. The approaches are a straightforward combination of existing methods. There are also some clarity issues, which we expect the authors to fix in the final version. \n", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=EQrvONEwh&noteId=xU-aV3gNJoP"], "decision": "Accept"}