AMSR / conferences_raw /midl19 /MIDL.io_2019_Conference_SJxVA7xleE.json
{"forum": "SJxVA7xleE", "submission_url": "https://openreview.net/forum?id=SJxVA7xleE", "submission_content": {"title": "Segmenting Potentially Cancerous Areas in Prostate Biopsies using Semi-Automatically Annotated Data", "authors": ["Nikolay Burlutskiy", "Nicolas Pinchaud", "Feng Gu", "Daniel H\u00e4gg", "Mats Andersson", "Lars Bj\u00f6rk", "Kristian Eur\u00e9n", "Cristina Svensson", "Lena Kajland Wil\u00e9n", "Martin Hedlund"], "authorids": ["nikolay.burlutsky@contextvision.se", "nicolas.pinchaud@contextvision.se", "feng.gu@contextvision.se", "daniel.hagg@contextvision.se", "mats.andersson@contextvision.se", "lars.bjork@contextvision.se", "kristian.euren@contextvision.se", "cristina.svensson@contextvision.se", "lena.kw@contextvision.se", "martin.hedlund@contextvision.se"], "keywords": ["Deep Learning", "unet", "prostate cancer", "ground truth", "segmentation"], "abstract": "Gleason grading specified in ISUP 2014 is the clinical standard in staging prostate cancer and the most important part of the treatment decision. However, the grading is subjective and suffers from high intra and inter-user variability. To improve the consistency and objectivity in the grading, we introduced glandular tissue WithOut Basal cells (WOB) as the ground truth. The presence of basal cells is the most accepted biomarker for benign glandular tissue and the absence of basal cells is a strong indicator of acinar prostatic adenocarcinoma, the most common form of prostate cancer. Glandular tissue can objectively be assessed as WOB or not WOB by using specific immunostaining for glandular tissue (Cytokeratin 8/18) and for basal cells (Cytokeratin 5/6 + p63). Even more, WOB allowed us to develop a semi-automated data generation pipeline to speed up the tremendously time consuming and expensive process of annotating whole slide images by pathologists. We generated 295 prostatectomy images exhaustively annotated with WOB. 
Then we used our Deep Learning Framework, which achieved the 2nd best reported score in Camelyon17 Challenge, to train networks for segmenting WOB in needle biopsies. Evaluation of the model on 63 needle biopsies showed promising results which were improved further by finetuning the model on 118 biopsies annotated with WOB, achieving F1-score of 0.80 and Precision-Recall AUC of 0.89 at the pixel-level. Then we compared the performance of the model against 17 biopsies annotated independently by 3 pathologists using only H&E staining. The comparison demonstrated that the model performed on a par with the pathologists. Finally, the model detected and accurately outlined existing WOB areas in two biopsies incorrectly annotated as totally WOB-free biopsies by three pathologists and in one biopsy by two pathologists.", "pdf": "/pdf/329cbc6d8de887e0e09de51be5d420959d43951e.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "burlutskiy|segmenting_potentially_cancerous_areas_in_prostate_biopsies_using_semiautomatically_annotated_data", "_bibtex": "@inproceedings{burlutskiy:MIDLFull2019a,\ntitle={Segmenting Potentially Cancerous Areas in Prostate Biopsies using Semi-Automatically Annotated Data},\nauthor={Burlutskiy, Nikolay and Pinchaud, Nicolas and Gu, Feng and H{\\\"a}gg, Daniel and Andersson, Mats and Bj{\\\"o}rk, Lars and Eur{\\'e}n, Kristian and Svensson, Cristina and Wil{\\'e}n, Lena Kajland and Hedlund, Martin},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=SJxVA7xleE},\nabstract={Gleason grading specified in ISUP 2014 is the clinical standard in staging prostate cancer and the most important part of the treatment decision. 
However, the grading is subjective and suffers from high intra and inter-user variability. To improve the consistency and objectivity in the grading, we introduced glandular tissue WithOut Basal cells (WOB) as the ground truth. The presence of basal cells is the most accepted biomarker for benign glandular tissue and the absence of basal cells is a strong indicator of acinar prostatic adenocarcinoma, the most common form of prostate cancer. Glandular tissue can objectively be assessed as WOB or not WOB by using specific immunostaining for glandular tissue (Cytokeratin 8/18) and for basal cells (Cytokeratin 5/6 + p63). Even more, WOB allowed us to develop a semi-automated data generation pipeline to speed up the tremendously time consuming and expensive process of annotating whole slide images by pathologists. We generated 295 prostatectomy images exhaustively annotated with WOB. Then we used our Deep Learning Framework, which achieved the 2nd best reported score in Camelyon17 Challenge, to train networks for segmenting WOB in needle biopsies. Evaluation of the model on 63 needle biopsies showed promising results which were improved further by finetuning the model on 118 biopsies annotated with WOB, achieving F1-score of 0.80 and Precision-Recall AUC of 0.89 at the pixel-level. Then we compared the performance of the model against 17 biopsies annotated independently by 3 pathologists using only H{\\&}E staining. The comparison demonstrated that the model performed on a par with the pathologists. 
Finally, the model detected and accurately outlined existing WOB areas in two biopsies incorrectly annotated as totally WOB-free biopsies by three pathologists and in one biopsy by two pathologists.},\n}"}, "submission_cdate": 1544713164174, "submission_tcdate": 1544713164174, "submission_tmdate": 1561398744142, "submission_ddate": null, "review_id": ["HkxpgTPj7E", "rJgpZ-HcfV", "r1lXX4qA7V"], "review_url": ["https://openreview.net/forum?id=SJxVA7xleE&noteId=HkxpgTPj7E", "https://openreview.net/forum?id=SJxVA7xleE&noteId=rJgpZ-HcfV", "https://openreview.net/forum?id=SJxVA7xleE&noteId=r1lXX4qA7V"], "review_cdate": [1548610804869, 1547485444951, 1548817435186], "review_tcdate": [1548610804869, 1547485444951, 1548817435186], "review_tmdate": [1548856743018, 1548856705823, 1548856679511], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper53/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper53/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper53/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["SJxVA7xleE", "SJxVA7xleE", "SJxVA7xleE"], "review_content": [{"pros": "The paper presents a semi-automated data generation pipeline and a deep learning (DL) framework to segment potentially cancerous areas in prostate biopsies. Six different models were trained using needle biopsies and generated prostatectomy data. The proposed framework has been validated on biopsy data. \n\nThis is an interesting work which fits well to the scope of the conference. The paper presents validation of a DL framework which was originally introduced at [Pinchaud and Hedlund, 2018]. The figures are clear, and the references seem adequate. The paper is well structured, but I found it difficult to follow as many technical details regarding the data generation are missing as explained below. Also, the presented framework has not been compared to any baseline method. 
", "cons": "Suggestions for revision\n\n1. In Section 4.1, it is not clear how WOB areas are detected on H&E images. Is this done with manual segmentation? Details about the density estimation filter should be given. Why is it required to \u201cdistribute the local information evenly within a local neighborhood of the image\u201d when applying the density filter? The authors should explain in detail how the heatmaps are generated.\n\n2. In the \u201cUsing consecutive slices\u201d section, which method has been used to register the consecutive H&E stainings to the original H&E images? How does the performance of this registration affect the accuracy of the generated ground truth data?\n\n3. In Section 4.2, the number of pathologists who annotated the biopsy data should be specified rather than defining them as \u201cseveral\u201d. How were the ground truths of these pathologists combined? What is the level of expertise of the pathologists who annotated the data and of those used for the comparison in Section 5.4?\n\n4. The performance evaluation study would have been more robust if it had included comparison to other segmentation approaches.\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "-\tA relevant topic: I think the idea of using WOB as a biomarker towards the automation is interesting and worth to explore. \n-\tThe paper is generally well-written and easy to read and understand. They have also provided more details in the appendix which make it easier to cover missing parts from the body of the main paper. I have some suggestion to improve the quality of the paper.\n", "cons": "I would like to write this part as the suggestions:\n-\tSome details seem to be missing (sorry if they are there and I overlooked): For example, what is the final number of samples which is used for training? I can see the final number of 295 images, but I cannot find detailed information about the data division. 
\n-\tFurther explanation about the \"years of experience\" of the pathologist is useful in putting things into a perspective and also to factor out the human effects. Did they ask for the 4 or 5th pathologist to regrade? Also, for the manual annotation, who have done it? And do they have any confidence value for this annotator?\n-\tComparison of the method with the modality based approach such as MRI and Ultrasound would be interesting.\n-\tThe whole paper would benefit from mentioning the clinical goals and values of the research. \n-\tIt would be interesting in Fig. 3, Sensitivity part, to also look at non-WOB regions sensitivity of P1, P2, P3. \n-\tFigure 1, is not more logical to train first on the coarse image 2 mpp, then refocused training on fine grain image 1 mpp? What are the reasons for the current setting? \n-\tWhat is the reported confidence rate of WOB, especially as a biomarker for indication of the acinar prostatic adenocarcinoma? \n-\tAnalysing and using other methods to control the receptive field is beneficial.\n-\tComparison with other methods which are using other biomarker is missing. \n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "special_issue": ["Special Issue Recommendation"]}, {"pros": "The paper describes a deep learning (DL) framework to segment potentially cancerous areas in prostate biopsies using semi-automatically generated data. To minimize manual effort from pathologists and reduce inter/intra observers' variety, glandular tissue WithOut Basal cells (WOB) class is applied in the semi-automated data generation process. An evaluation on 63 biopsies was conducted and demonstrated the effectiveness generated data for training a DL model.\n\n-The paper is well written and easy to understand.\n-The topic is relevant and interesting. \n-The proposed method was thoroughly evaluated on clinical biopsy data. 
\n", "cons": "- The work utilizes data annotated using different strategies, acquired from different scanners and has different characteristics (e.g. different Gleason scores). I would suggest authors provide a table to clearly summarize all the information about the datasets.\n\n- What are the Gleason scores (GS) of the training and testing data? What would be the performance for data with different GS? I would suggest the authors to also provide an analysis on data of different GS individually.\n\n- Despite an interesting topic and practical solution, the technical novelty of this work is somewhat limited. ", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["ByeVO4Kx44", "Bkgm1o0-E4", "Hye_BMar4E", "SylevM6HVE", "HJx0sZaBEE", "rJlxSvzLV4"], "comment_cdate": [1548944492380, 1549032155447, 1549288000457, 1549288024104, 1549287845851, 1549309752480], "comment_tcdate": [1548944492380, 1549032155447, 1549288000457, 1549288024104, 1549287845851, 1549309752480], "comment_tmdate": [1555946047528, 1555946044822, 1555946035011, 1555946034756, 1555946034541, 1555946033009], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper53/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper53/Area_Chair1", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper53/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper53/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper53/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper53/AnonReviewer1", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "thank you for the review", "comment": "First of all, 
our thanks to the anonymous reviewer 3. Your valuable comments will help to clarify some details of the paper.\n\nQ: 1. In Section 4.1, it is not clear how WOB areas are detected on H&E images. Is this done with manual segmentation?\n\nA: WOB areas were generated automatically from immunofluorescence images using an algorithm. We registered immunofluorescence images with corresponding H&E images, which allowed us to register WOB areas with H&E images. Then pathologists reviewed the WOB annotations. The reviews led to some minimal manual corrections in the automatically generated masks; therefore, we call the method semi-automatic.\n\nQ: Details about the density estimation filter should be given. Why is it required to distribute the local information evenly within a local neighborhood of the image when applying the density filter?\n\nA: The resulting WOB segmentation should include whole glands or groups of connected glands. The information in the immunofluorescence channels is not present in the entire gland area but localized to certain positions within the gland. The markers of the different immunochannels consequently appear at different places within the gland. To make an overall segmentation of the entire gland area, this local information is propagated over the gland by the density filter.\n\nQ: The authors should explain in detail how the heatmaps are generated. \n\nA: The authors will clarify the description of the generation of the initial WOB annotations in the updated paper.\n\nQ: 2. In the Using consecutive slices section, which method has been used to register the consecutive H&E stainings to the original H&E images?\n\nA: The registration was done using non-rigid registration with the elastix software (http://elastix.isi.uu.nl/). 
Some examples of registration will be provided in the Appendix.\n\nQ: How does the performance of this registration affect the accuracy of the generated ground truth data?\n\nA: It is difficult to assess the accuracy of the registered slides because we lack the ground truth for them. However, we performed an ablation study showing that the registered slides allowed us to consistently improve the performance of the model. This observation suggests that even though the registrations are probably noisy, the signal/noise ratio is still favorable. Thus, consecutive slides provided some useful statistics for the model.\n\nQ: 3. In Section 4.2, the number of pathologists who annotated the biopsy data should be specified rather than defining them as \"several\". How were the ground truths of these pathologists combined?\n\nA: The total number of pathologists who annotated the dataset is six, and the clarification will be made in the paper. Each slide was exhaustively annotated by one pathologist. We agree that a further study should evaluate inter-pathologist variations. Overall, the pathologists subjectively reported that the WOB / Non WOB annotation with immunostaining was easier than for Malign / Non Malign since the WOB concept is simpler and well defined.\n\nQ: What is the level of expertise of the pathologists who annotated the data and of those used for the comparison in Section 5.4?\n\nA: All the pathologists who annotated the images were carefully selected. The criterion was that the pathologists had proper specialist training in pathology as well as several years of clinical practice in pathology. The number of years of experience varied between 5 and 20 for both groups.\n\nQ: 4. The performance evaluation study would have been more robust if it had included comparison to other segmentation approaches.\n\nA: We agree that a comparison with other segmentation approaches would help to understand the factors affecting the performance. 
We chose U-net since this architecture is the state-of-the-art network for segmentation tasks in the medical imaging field. \n"}, {"title": "discussion", "comment": "Dear Paper53 reviewers,\n\nPlease take a look at the authors' responses and reflect them in your reviews if necessary, or continue further discussion. Thank you in advance. "}, {"title": "thank you (part 1) ...", "comment": "Dear Anonymous Reviewer 1, thank you for your detailed comments. Please, find our answers below:\n\nQ: - Some details seem to be missing (sorry if they are there and I overlooked them):\nFor example, what is the final number of samples used for training?\nI can see the final number of 295 images, but I cannot find detailed information about the data division.\n\nA: Each model was trained for 10^6 iterations. Each iteration had a batch of 32 patches. The patches were sampled using quasi online hard example mining described in Appendix 1. Each patch was 188 by 188 pixels. The relatively small size of the patches constrained the sampling areas of hard regions without overshooting these. This information will be added to the paper, and more detailed information on the datasets used will be added in the Appendix.\n\nQ: - Further explanation about the \"years of experience\" of the pathologists is useful in putting things into perspective and also to factor out the human effects. Did they ask a 4th or 5th pathologist to regrade?\n\nA: All the pathologists who annotated the images were carefully selected. The criterion was that the pathologists had proper specialist training in pathology as well as several years of clinical practice in pathology. The number of years of experience varied between 5 and 20. There were six pathologists involved. Every image was exhaustively annotated by one pathologist.\n\nQ: Also, for the manual annotation, who has done it? 
And do they have any confidence value for this annotator?\n\nA: The total number of pathologists who manually annotated the dataset is six.\n\nQ: - Comparison of the method with modality-based approaches such as MRI and ultrasound would be interesting.\n\nA: Yes, it would be interesting to compare. We will consider addressing this question in future work, since this comparison will require conducting a literature review on available and published MRI and ultrasound results. \n\nQ: - The whole paper would benefit from mentioning the clinical goals and values of the research.\n\nA: The goal of this research is to develop a decision support tool to aid pathologists in their work. We trained deep learning models to segment potentially cancerous areas in prostate biopsies. The models will provide relevant regions of slides for pathologists to focus on, helping them to work more efficiently and save time.\n\nQ: - It would be interesting in Fig. 3, Sensitivity part, to also look at the non-WOB region sensitivity of P1, P2, P3.\n\nA: We are not sure if we understood the question correctly; to our understanding, the concept of sensitivity of non-WOB regions would translate to specificity.\n\nQ: - Figure 1: is it not more logical to train first on the coarse 2 mpp image, then refocus training on the fine-grain 1 mpp image? What are the reasons for the current setting?\n\nA: We trained a model on 2 mpp and then 1 mpp and did not notice any statistically significant improvements in performance, nor in the time required to train the models until convergence. \n\nQ: - What is the reported confidence rate of WOB, especially as a biomarker for indication of acinar prostatic adenocarcinoma?\n\nA: To our knowledge, there is no study providing a confidence rate for WOB as a biomarker for indication of acinar prostatic adenocarcinoma. 
However, it is reported that \u201cMalignancy is strongly supported by the absolute absence of basal cell staining by IHC in a morphologically suspicious lesion\u201d [1]. There is one important exception, intraductal carcinoma, where cancer grows inside a healthy gland with basal cells. This exception is easy to identify, and these rare cases were annotated as WOB by the pathologists although the cancer is surrounded by basal cells (the cancer mass is without basal cells).\n\nSome benign mimickers also show no basal cells or just a few weak basal cells. However, these cases are still of clinical interest to be detected by the model - these mimickers will not be called \u201cperfectly healthy glands\u201d.\n \n A risk when working with antibodies is, of course, that the staining could fail. We only used sections where we could find at least a small area of healthy glands showing stable basal cell staining as a control, and we also used a cocktail of one nuclear (p63) and one cytoplasmic (CK5/6) basal cell marker.\n\n1. Kristiansen, G., & Epstein, J. I. (2014). Immunohistochemistry in Prostate Pathology (pp. 1\u201320). Agilent Technologies."}, {"title": "... continued (part 2)", "comment": "Q: - Analysing and using other methods to control the receptive field would be beneficial.\n\nA: We absolutely agree with the suggestion. There are several methods to control the receptive field. In this paper, training compound models allowed us to increase the receptive field by training on 2 mpp after 1 mpp. 
However, the receptive field can also be increased by using multiresolution networks [2], using dilated convolutions, or varying the architecture of DL networks, to name a few.\n\n2. F Gu, N Burlutskiy, M Andersson, LK Wil\u00e9n, \u201cMulti-Resolution Networks for Semantic Segmentation in Whole Slide Images\u201d, Pathology and Ophthalmic Medical Image Analysis, 2018, https://arxiv.org/abs/1807.09607\n\nQ: - Comparison with other methods which use other biomarkers is missing.\n\nA: Among the most established markers for prostate epithelial cells are CK 8 and 18 [1], and their expression is not lost in prostate acinar adenocarcinoma. This enables us to segment out the parenchyma from the stroma in high detail, irrespective of cancer stage. In a multiplex immunofluorescence combination using the established basal cell markers (CK5/6 + p63) and Racemase (AMACR) as a general adenocarcinoma marker, we were able to highlight the exact suspected neoplasms with their loss of the basal cell layer as well as detect hgPIN neoplasms [1, 2]. In other words, we used state-of-the-art biomarkers to detect basal cells and epithelial tissue. By combining these, we generated accurately aligned WOB masks on top of the H&E-stained prostate tissues. \n\n1. Kristiansen, G., & Epstein, J. I. (2014). Immunohistochemistry in Prostate Pathology (pp. 1\u201320). Agilent Technologies.\n2. Hoogland, A. M., Kweldam, C. F., & Leenders, G. J. L. H. V. (2014). Prognostic Histopathological and Molecular Markers on Prostate Cancer Needle-Biopsies: A Review. BioMed Research International, 1\u201312. http://doi.org/10.1155/2014/341324\n3. Wang, Y., Hayward, S. W., Cao, M., Thayer, K. A., & Cunha, G. R. (2001). Cell differentiation lineage in the prostate. Differentiation, 68(4-5), 270\u2013279. http://doi.org/10.1046/j.1432-0436.2001.680414.x"}, {"title": "Thank you for the comments", "comment": "Thank you for your valuable input. 
Please, find below the response to the comments:\n\nQ: The work utilizes data annotated using different strategies, acquired from different scanners, and with different characteristics (e.g. different Gleason scores). I would suggest the authors provide a table to clearly summarize all the information about the datasets. - What are the Gleason scores (GS) of the training and testing data? \n\nA: A table clarifying Gleason Score distributions for prostatectomies, and other details on the datasets, will be added to the paper in the Appendix. \n\nQ: What would be the performance for data with different GS? I would suggest that the authors also provide an analysis on data of different GS individually. \n\nA: Unfortunately, we will not provide an analysis of the results on different GS individually, since the validation set of biopsies was annotated according to the WOB definition, not Gleason Scores. Thus, we don\u2019t have Gleason Score annotations at the pixel level for the data. However, the dataset was chosen according to a representative clinical distribution of prostate cancer cases. \n"}, {"title": "Regarding Sensitivity", "comment": "Thanks for answering the questions and for the clarification. In terms of my question regarding sensitivity: \n\nSorry for using the wrong term: \n\nAs you have explained on page 2: \"One important exception was made for intraductal cancer of the prostate (IDC-P). IDC-P is represented by high-grade cancer (Gleason grade 4-5) inside a benign gland with basal cells. IDC-P cases were manually annotated as WOB.\" \n\nSo, other than the detection of the presence of basal cells as the indicator of benign regions, the above IDC-P cases, which are \"actually non-WithOut Basal cells (WOB)\", are of our highest interest because: (1) They are higher-grade cancer, and their detection has higher importance. (2) These are actually exception cases from the classifier design perspective. 
So, looking at the sensitivity of positive cases [cancerous cases] is more thorough if we can separately look at the sensitivity of the classifier for the real WOB cases and the cases manually marked as WOB. \n\nTo be clearer, I am really interested to see how many of these manually-annotated-as-WOB cases are misclassified by the classifier. - Thanks \n"}], "comment_replyto": ["HkxpgTPj7E", "SJxVA7xleE", "rJgpZ-HcfV", "Hye_BMar4E", "r1lXX4qA7V", "Hye_BMar4E"], "comment_url": ["https://openreview.net/forum?id=SJxVA7xleE&noteId=ByeVO4Kx44", "https://openreview.net/forum?id=SJxVA7xleE&noteId=Bkgm1o0-E4", "https://openreview.net/forum?id=SJxVA7xleE&noteId=Hye_BMar4E", "https://openreview.net/forum?id=SJxVA7xleE&noteId=SylevM6HVE", "https://openreview.net/forum?id=SJxVA7xleE&noteId=HJx0sZaBEE", "https://openreview.net/forum?id=SJxVA7xleE&noteId=rJlxSvzLV4"], "meta_review_cdate": 1551356578699, "meta_review_tcdate": 1551356578699, "meta_review_tmdate": 1551881974945, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "Pros: This paper presents an interesting and novel idea: a deep learning (DL) framework to segment potentially cancerous areas in prostate biopsies using semi-automatically generated data. All reviewers agreed on the significance, quality, clarity and alignment with the MIDL conference.\nCons: Some additional details in the method section would be needed. However, the authors addressed them in their rebuttals and are expected to modify the camera-ready version accordingly. \n\nThe application of this paper is quite innovative, and the proposed method seems to provide direct assistance to the clinical routine. \n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=SJxVA7xleE&noteId=ryeojGIS8N"], "decision": "Accept"}