{"forum": "ByldG19Ry4", "submission_url": "https://openreview.net/forum?id=ByldG19Ry4", "submission_content": {"title": "A novel segmentation framework for uveal melanoma based on magnetic resonance imaging and class activation maps", "authors": ["Huu-Giao Nguyen", "Alessia Pica", "Francesco La Rosa", "Jan Hrbacek", "Damien C. Weber", "Ann Schalenbourg", "Raphael Sznitman", "Meritxell Bach Cuadra"], "authorids": ["huu.nguyen@artorg.unibe.ch"], "keywords": [], "abstract": "An automatic and accurate eye tumor segmentation from Magnetic Resonance images (MRI) could have a great clinical contribution for the purpose of diagnosis and treatment planning of intra-ocular cancer. For instance, the characterization of uveal melanoma (UM) tumors would allow the integration of 3D information for the radiotherapy and would also support further radiomics studies. In this work, we tackle two major challenges of UM segmentation: 1) the high heterogeneity of tumor characterization in respect to location, size and appearance and, 2) the difficulty in obtaining ground-truth delineations of medical experts for training. We propose a thorough segmentation pipeline consisting of a combination of two Convolutional Neural Networks (CNN). First, we consider the class activation maps (CAM) output from a Resnet classification model and the combination of Dense Conditional Random Field (CRF) with a prior information of sclera and lens from an Active Shape Model (ASM) to automatically extract the tumor location for all MRIs. Then, these immediate results will be inputted into a 2D-Unet CNN whereby using four encoder and decoder layers to produce the tumor segmentation. A clinical data set of 1.5T T1-w and T2-w images of 28 healthy eyes and 24 UM patients is used for validation. We show experimentally in two different MRI sequences that our weakly 2D- Unet approach outperforms previous state-of-the-art methods for tumor segmentation and that it achieves equivalent accuracy as when manual labels are used for training. These results are promising for further large-scale analysis and for introducing 3D ocular tumor information in the therapy planning.", "pdf": "/pdf/7cc6200555fc7676e50490409e6c779e3dcf33ff.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "nguyen|a_novel_segmentation_framework_for_uveal_melanoma_based_on_magnetic_resonance_imaging_and_class_activation_maps", "_bibtex": "@inproceedings{nguyen:MIDLFull2019a,\ntitle={A novel segmentation framework for uveal melanoma based on magnetic resonance imaging and class activation maps},\nauthor={Nguyen, Huu-Giao and Pica, Alessia and Rosa, Francesco La and Hrbacek, Jan and Weber, Damien C. and Schalenbourg, Ann and Sznitman, Raphael and Cuadra, Meritxell Bach},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=ByldG19Ry4},\nabstract={An automatic and accurate eye tumor segmentation from Magnetic Resonance images (MRI) could have a great clinical contribution for the purpose of diagnosis and treatment planning of intra-ocular cancer. For instance, the characterization of uveal melanoma (UM) tumors would allow the integration of 3D information for the radiotherapy and would also support further radiomics studies. 
In this work, we tackle two major challenges of UM segmentation: 1) the high heterogeneity of tumors with respect to location, size and appearance, and 2) the difficulty of obtaining ground-truth delineations from medical experts for training. We propose a thorough segmentation pipeline combining two Convolutional Neural Networks (CNNs). First, we consider the class activation maps (CAM) output from a Resnet classification model, combined with a Dense Conditional Random Field (CRF) and prior information on the sclera and lens from an Active Shape Model (ASM), to automatically extract the tumor location in all MRIs. These intermediate results are then fed into a 2D-Unet CNN with four encoder and decoder layers to produce the tumor segmentation. A clinical data set of 1.5T T1-w and T2-w images of 28 healthy eyes and 24 UM patients is used for validation. We show experimentally on two different MRI sequences that our weakly supervised 2D-Unet approach outperforms previous state-of-the-art methods for tumor segmentation and that it achieves accuracy equivalent to when manual labels are used for training. These results are promising for further large-scale analysis and for introducing 3D ocular tumor information into therapy planning.},\n}"}, "submission_cdate": 1544621839661, "submission_tcdate": 1544621839661, "submission_tmdate": 1561399991505, "submission_ddate": null, "review_id": ["SJgcaSaw7N", "SyedQ8Zhz4", "BJerbpo3X4"], "review_url": ["https://openreview.net/forum?id=ByldG19Ry4&noteId=SJgcaSaw7N", "https://openreview.net/forum?id=ByldG19Ry4&noteId=SyedQ8Zhz4", "https://openreview.net/forum?id=ByldG19Ry4&noteId=BJerbpo3X4"], "review_cdate": [1548371393988, 1547601440061, 1548692733089], "review_tcdate": [1548371393988, 1547601440061, 1548692733089], "review_tmdate": [1548856726935, 1548856708187, 1548856700549], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper16/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper16/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper16/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["ByldG19Ry4", "ByldG19Ry4", "ByldG19Ry4"], "review_content": [{"pros": "The paper is of high quality, clarity, and originality. I have not seen a combination of activation maps with a statistical shape prior and unets for segmentation before, and I think it is a smart idea. The experiments strongly support this idea. I'm not familiar with the task at hand, so I cannot judge this. For the medical vision community, it seems significant to me since it helps to deal with few data points for complex problems.\n\nI'm not sure if I got it right that M-2DUnet is basically the same network but trained with manual delineations instead of the weakly-supervised approach with activation maps. In case that is correct, it is actually nice and impressive that the Wilcoxon test does not show a significant difference for that case, and I think it is worth stressing that more - even in the abstract and the conclusion!\n\nThe paper is slightly above the page limit; this is mainly due to figures and a table, and I think it is adequate.", "cons": "The paper does not state that the source code will be released, and the data is not publicly available. 
Therefore, it might be hard to reproduce.\nOne experiment that could be interesting would be to compare the approach to using the activation maps directly instead of the extracted tumor location - that way we would get insight into whether the shape prior actually helps.\nIt has some minor typos and should be proofread or put through Grammarly (e.g. methods has been, software( Varian, Figure7)\nThe figures have artifacts from a spellchecker.\n", "rating": "4: strong accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "special_issue": ["Special Issue Recommendation"], "oral_presentation": ["Consider for oral presentation"]}, {"pros": "The main contribution of the paper: the authors attempt to create automatic training labels based on class activation maps, which showed comparable results to a manually-labeled training dataset. Specifically, the authors use CNN-based classification + Grad-CAM (along with ASM + Dense CRF) to automatically generate training data as the input of a 2D U-net. (Post-processing includes: ASM and Grad-CAM).\n\nIt\u2019s an interesting attempt to alleviate the requirement of manual labeling (although the result might be dataset-specific). Plus, it shows a feasible application of a combined network design in the field of ophthalmic MR imaging.\n\nThe article is clearly written and structured. The presented figures reflect the proposed framework well and demonstrate the results with selected representative samples.\n", "cons": "-\tThe activation map shown in Figure 2\u2019s pipeline clearly demonstrates that the tumor should be the region that differentiates the groups. However, in Figure 4, the entire sclera region is also activated, which is significantly different from Figure 2. It is unclear where this difference comes from, since the sclera region should not differ between the normal and diseased eye, and this needs to be explained more clearly.\n\n-\tGenerally, CAM can only be used to identify the differential region locations roughly, rather than delineating the segmentation accurately (i.e. unsupervised CNN-based segmentation). The representative figures shown in Figures 4/7 indicate that the accuracy depends heavily on the tissue contrast. That might indicate that the performance of the proposed method may be specific to the recruited dataset (e.g. more cases like the one shown in Fig 7, right-most column).\n\n-\tIn Figure 4 (c), it\u2019s hard to see the improvement of applying ASM over the dense CRF. It would be better to show a more representative figure or a quantitative analysis of the Dice when comparing them with the manual segmentation.\n \n-\tIn Figure 6: the author compared different segmentation approaches, essentially showing that the 2D network is better than the 3D networks, and 3D-CNN is better than 3D-Unet. I agree with the author that this should mainly be due to the small training set. Data augmentation with elastic deformation would help to alleviate the problem, and is used in both the original 2D U-net paper (by Ronneberger et al. 2015) and the 3D U-net (\u00c7i\u00e7ek et al. 2016). 
However, based on the methods section, the authors do not seem to use this data augmentation method.\n\n-\tFigure 7, mid-column: the author shows cases where their method (Grad-CAM+ASM+denseCRF) can \u201ccorrect\u201d the manual segmentation.\no\tI suppose they mean the automatic result is better than the \u201cmanual segmentation\u201d, as they\u2019re not training their method on the manual segmentation.\no\tThis indicates there are errors in the manual segmentation. Then it\u2019s questionable to use such manual labels as ground truth. In that case, multiple manual segmentations with an inter-rater variability analysis might be needed to construct and validate the ground truth.\n\n\nSome minor issues that need proofreading:\n-\tThe figures seem to be screen-captured without cleaning up some software-based marks\no\tFigures 2/3 have red dotted lines indicating correction marks from Word\no\tFigure 7: the dots representing the selection window should be removed\n-\tPage 5, in section \u201cRefinement\u201d:\no\tthe word \u201cand\u201d should be put before k(fi,fj)\no\tparagraph after equation (3): shouldn\u2019t use j as the subscript for wj, as j has already been used in the equation to represent the second pixel.\n-\tPage 6, in section \u201cUnet\u201d: \no\teffectually => effectively\n-\tFigure 6: an additional \u201cn\u201d is placed before Grad-CAM-2DUnet\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "The paper proposes a segmentation method for eye tumor segmentation from MRI. The proposed approach leverages CNN-based architectures to create activation maps that are subsequently refined through the use of ASM and CRF in order to create training data that are then used as input to a UNET architecture. \nThe paper includes an extended state-of-the-art review. \n\nA weakly supervised approach is implemented.", "cons": "The dataset is limited. \n\nThe authors should quantify the effect of the ASM and CRF steps on the final segmentation outcome. \n\nThe false positive and true positive fractions should also be reported. Reporting the Hausdorff distance should also be considered. \n\nLimited discussion/conclusion section. The authors should extend the section to compare the proposed methodology with existing ones in the literature and further analyze the technical innovations that make this approach superior to already proposed ones. \n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["B1lhoXAcE4", "B1g0yER54E", "SkgAQV0qVN"], "comment_cdate": [1549620132191, 1549620197935, 1549620262400], "comment_tcdate": [1549620132191, 1549620197935, 1549620262400], "comment_tmdate": [1555946014414, 1555946014193, 1555946013975], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper16/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper16/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper16/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Reply to reviewer's comments", "comment": "Thank you for your time and comments. We thank the reviewer for providing a high rating to our paper. We confirm that her/his comment about M-2DUnet is right. We will edit our paper to make this point clearer and correct all typos and figure artifacts. 
\nWe agree with the reviewer that an interesting experiment would be to feed the activation maps directly to the network. We will further investigate this idea, but in its current format we are already beyond the allowed number of pages. We are in line with the reviewer\u2019s philosophy of reproducibility. We are working to extend our work into a journal article, including more data and also with the aim to better understand whether activation maps could correlate with a clinical marker. We are also considering the possibility of making the code and the data available, subject to internal institute rules and national law. "}, {"title": "Reply to reviewer's comments", "comment": "Thank you for your comments. We are pleased that you found our work of interest. We apologize that some quantitative analysis details were missing. As much as space allows, we will add Dice (also requested by Reviewer 1), TPR, FPR and volume values, and we will discuss them to demonstrate the effect of all algorithms. "}, {"title": "Reply to reviewer's comments", "comment": "Thank you for your time and effort in reviewing our paper. Your very detailed comments are valuable for improving our paper. First, we are pleased to note that you found our work clear and our contribution of interest.\n\nThe observation about Figure 4 is right: it is different from Figure 2 because we used two different methods for propagating positive gradient modifications. We selected this example in Figure 4, with the sclera region activated, to show the contribution of the shape prior information coming from the ASM. However, based on the reviewer\u2019s comment we realize that this is confusing. We will unify both results and replace Figure 4(b) with the same method as used in Figure 2 when submitting the camera-ready version. We will also add the Dice values (compared to manual segmentations) to show the effect of ASM and Dense CRF in Figure 4(c).\n\nConcerning the raised comment \u201cGenerally, CAM can only be used to identify\u2026 The representative figures shown in Figures 4/7 indicate that the accuracy depends heavily on the tissue contrast\u2026\u201d, we believe there is some confusion.\nIn Figure 7 (right column) the proposed method can distinguish the \u201cpathological\u201d tissue from the healthy tissues/anatomy. However, our algorithm is not trained to distinguish between tumor and retinal detachment. The differentiation of the two will be needed for clinical use and requires further development. A solution could be to post-process with texture feature analysis to separate the tumor and retinal detachment.\n\nRegarding Figure 6, we apologize that data augmentation details were missing in our initial description. We confirm that rotation, shift as well as elastic deformation were used. We will also add this to the paper. \n\nWith respect to Figure 7, the reviewer made a very important observation. In practice, manual delineations may be inaccurate and also contain some errors. In that sense it would be very important to have multiple expert segmentations and also to evaluate the inter-rater variability. Unfortunately, this is often very difficult if not impossible to obtain. This issue was a major motivation for this line of work, in order to train a system with small amounts of annotations. 
We will modify the sentence as pointed out by the reviewer (\u201c\u2026the result is better\u2026\u201d).\n\nAll minor typos raised by the reviewer have been corrected in the camera-ready version."}], "comment_replyto": ["SJgcaSaw7N", "BJerbpo3X4", "SyedQ8Zhz4"], "comment_url": ["https://openreview.net/forum?id=ByldG19Ry4&noteId=B1lhoXAcE4", "https://openreview.net/forum?id=ByldG19Ry4&noteId=B1g0yER54E", "https://openreview.net/forum?id=ByldG19Ry4&noteId=SkgAQV0qVN"], "meta_review_cdate": 1551356584500, "meta_review_tcdate": 1551356584500, "meta_review_tmdate": 1551881975967, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The reviewers agree on the novelty and quality of the paper. In spite of the very positive evaluations, all reviewers have identified a number of issues, which have been very appropriately addressed in the rebuttal. I agree with the authors that - even though interesting - feeding the activation maps directly to the network might be left for future work, but it would be important to clarify the issues regarding the figures (Reviewer 1) and performance (Reviewers 1 and 2).", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=ByldG19Ry4&noteId=SylenGIHUE"], "decision": "Accept"}
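
The localization step discussed throughout this thread (a ResNet tumor/healthy classifier, Grad-CAM, then Dense CRF refinement) can be sketched in a few lines of Python. The code below is not the authors' implementation, which as the reviews note has not been released; it is a minimal reconstruction assuming a torchvision ResNet-18 as the classifier and the pydensecrf package for the Dense CRF, with all kernel widths and compatibility weights chosen as hypothetical placeholders. The ASM shape prior on the sclera and lens is omitted.

```python
# Minimal sketch of CAM-based tumor localization as described in the paper:
# a binary ResNet classifier yields Grad-CAM heat maps, which a Dense CRF
# refines into pseudo-labels for training a 2D U-Net. The hyper-parameters
# and the use of torchvision/pydensecrf are assumptions, not the authors'
# published choices.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import resnet18
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

model = resnet18(num_classes=2)  # class 0: healthy eye, class 1: UM tumor
model.eval()

# Capture the last convolutional block's activations and gradients via hooks.
_acts, _grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: _acts.update(feat=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: _grads.update(feat=go[0]))

def grad_cam(image):
    """image: (1, 3, H, W) float tensor; returns an (H, W) heat map in [0, 1]."""
    logits = model(image)
    model.zero_grad()
    logits[0, 1].backward()                               # gradient of the tumor logit
    w = _grads["feat"].mean(dim=(2, 3), keepdim=True)     # channel-wise weights
    cam = F.relu((w * _acts["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False).squeeze().detach()
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).numpy()

def refine_with_crf(cam, slice_uint8, n_iters=5):
    """Turn a [0, 1] heat map into a binary mask with a 2-label Dense CRF.

    cam: (H, W) float array; slice_uint8: (H, W) uint8 MR slice.
    """
    probs = np.stack([1.0 - cam, cam]).astype(np.float32)  # pseudo-probabilities
    d = dcrf.DenseCRF2D(cam.shape[1], cam.shape[0], 2)     # width, height, labels
    d.setUnaryEnergy(unary_from_softmax(probs))
    d.addPairwiseGaussian(sxy=3, compat=3)                 # smoothness kernel
    rgb = np.ascontiguousarray(np.repeat(slice_uint8[..., None], 3, axis=2))
    d.addPairwiseBilateral(sxy=30, srgb=13, rgbim=rgb, compat=10)  # appearance kernel
    q = np.array(d.inference(n_iters)).reshape(2, *cam.shape)
    return q.argmax(axis=0).astype(np.uint8)               # 1 = tumor pixel
```

Masks produced this way would then serve as the weak training targets for the 2D U-Net, in place of the manual delineations used by the M-2DUnet baseline discussed in the reviews.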