{"forum": "F1MIJCqX2J", "submission_url": "https://openreview.net/forum?id=DKu6-Bie3w", "submission_content": {"authorids": ["yding5@nd.edu", "jliu16@nd.edu", "xiao.wei.xu@foxmail.com", "huangmeiping@126.com", "xiaowei.xu.xxw@gmail.com", "jinjun@us.ibm.com", "yshi4@nd.edu"], "abstract": "State-of-the-art deep learning based methods have achieved remarkable performance on medical image segmentation. Their applications in the clinical setting are, however, limited due to the lack of trustworthiness and reliability. Selective image segmentation has been proposed to address this issue by letting a DNN model process instances with high confidence while referring difficult ones with high uncertainty to experienced radiologists. As such, the model performance is only affected by the predictions on the high confidence subset rather than the whole dataset. Existing selective segmentation methods, however, ignore this unique property of selective segmentation and train their DNN models by optimizing accuracy on the entire dataset. Motivated by such a discrepancy, we present a novel method in this paper that considers such uncertainty in the training process to maximize the accuracy on the confident subset rather than the accuracy on the whole dataset. Experimental results using the whole heart and great vessel segmentation and gland segmentation show that such a training scheme can significantly improve the performance of selective segmentation. ", "paper_type": "methodological development", "authors": ["Yukun Ding", "Jinglan Liu", "Xiaowei Xu", "Meiping Huang", "Jian Zhuang", "Jinjun Xiong", "Yiyu Shi"], "track": "full conference paper", "keywords": [], "title": "Uncertainty-Aware Training of Neural Networks for Selective Medical Image Segmentation", "paperhash": "ding|uncertaintyaware_training_of_neural_networks_for_selective_medical_image_segmentation", "pdf": "/pdf/88c23d55130bb27f544fb4de97d7def25d325ded.pdf", "_bibtex": "@inproceedings{\nding2020uncertaintyaware,\ntitle={Uncertainty-Aware Training of Neural Networks for Selective Medical Image Segmentation},\nauthor={Yukun Ding and Jinglan Liu and Xiaowei Xu and Meiping Huang and Jian Zhuang and Jinjun Xiong and Yiyu Shi},\nbooktitle={Medical Imaging with Deep Learning},\nyear={2020},\nurl={https://openreview.net/forum?id=DKu6-Bie3w}\n}"}, "submission_cdate": 1579955672989, "submission_tcdate": 1579955672989, "submission_tmdate": 1587172196071, "submission_ddate": null, "review_id": ["2GKuZ__P49", "2Sd0yHWBkx", "IOIDtQK7OI"], "review_url": ["https://openreview.net/forum?id=DKu6-Bie3w¬eId=2GKuZ__P49", "https://openreview.net/forum?id=DKu6-Bie3w¬eId=2Sd0yHWBkx", "https://openreview.net/forum?id=DKu6-Bie3w¬eId=IOIDtQK7OI"], "review_cdate": [1584305343466, 1583878296388, 1583022529929], "review_tcdate": [1584305343466, 1583878296388, 1583022529929], "review_tmdate": [1585229473029, 1585229472532, 1585229471977], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper102/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper102/AnonReviewer3"], ["MIDL.io/2020/Conference/Paper102/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["F1MIJCqX2J", "F1MIJCqX2J", "F1MIJCqX2J"], "review_content": [{"title": "Solid paper on \"plug-and-play\" way of imposing uncertainty as a part of learning scheme.", "paper_type": "both", "summary": "The paper presents a model-independent approach to consider uncertainty which consequently helps the overall 
segmentation performance. It provides several concepts of uncertainty to consider during training and shows a proxy loss function to explicitly account for it. Overall, it provides a nice set of experiments and analyses with interesting takeaway messages regarding uncertainty.", "strengths": "1. Well written and easy to read.\n2. The distinction between the types of scoring rules is appreciated.\n3. Extensive experiments with thorough analyses.\n4. Consistent improvement across several experimental setups.", "weaknesses": "I do not have major comments on weaknesses. A minor comment concerns self-containedness: the uncertainty map figure is not in the main paper. This is quite minor though. Another minor comment is the lack of mention of existing uncertainty estimation methods (e.g., MC-dropout), which could output a different type of uncertainty that may replace the softmax.", "questions_to_address_in_the_rebuttal": "1. It was not clear in 2.1 whether the threshold t splits images (i.e., 3D volumes) or voxels, since the threshold seemed to be applied to the individual voxel-level uncertainty u_i. I think each \"instance\" is still a voxel. If so, how could radiologists do manual segmentation? Each image could have quite isolated voxels in X_l and X_h.\n", "rating": "4: Strong accept", "justification_of_rating": "It is an overall solid paper with well-written details and thorough experiments. The method is simple and reasonable, although I wonder if the softmax is the only uncertainty measure it could consider. Still, there are interesting observations from the experimental analyses that may benefit readers.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": ["Oral", "Poster"], "special_issue": "yes"}, {"title": "Interesting idea but difficult to read", "paper_type": "methodological development", "summary": "The paper presents a novel method for selective segmentation that tries to maximize the performance on a practical target instead of the full training target by introducing an uncertainty loss. The proposed training scheme can be applied to most existing segmentation frameworks in a plug-and-play manner. The method was evaluated on two datasets (MM-WHS and GlaS) and outperformed the baseline (without the proposed uncertainty loss) in all metrics.", "strengths": "1) By focusing on uncertainty, this work addresses an important direction in medical image segmentation. Unlike many other works related to uncertainty in medical image segmentation, the paper introduces a new principle, selective segmentation, which is borrowed from the classification literature.\n\n2) The method is evaluated extensively. The paper demonstrates the benefit of the method on different metrics, two datasets, and several experiments. The multitude of experiments helps the reader to get a better understanding of the approach. Also, the hyperparameter analysis is quite helpful.\n\n3) The problem and main terms are well-introduced (especially Figure 1), which is beneficial for the general understanding.\n\n4) The authors provide code. I consider this very important when introducing a new method because it improves reproducibility. Unfortunately, there are still a lot of papers presenting new methods without code.", "weaknesses": "1) In my opinion, the main weakness of the paper is the writing and the lack of clear messages. In detail, this leads to the following problems:\na) Difficult to read. 
The paper lacks a description of the idea in simple, easy-to-understand words. Although the method seems valid, I find it hard to follow all the details in section 2.2. An improved structure, including repetition of the important information, would be beneficial. An example is the description of $\\gamma$ (including the theorem), which is given in detail. However, the final loss does not contain $\\gamma$ because of its non-differentiability. I believe this could be simplified.\nb) The motivation for the uncertainty loss is unclear. The softmax cross-entropy is described as a proper scoring rule which tries to recover the actual distribution $q$. It is also stated that selective segmentation does not require recovering the distribution $q$. It is unclear why one should not recover the actual $q$ (even though not required) with the cross-entropy loss if this loss is anyway used to optimize the segmentation task. Or, why is the uncertainty loss even needed if the cross-entropy is already trying to do more than required?\nc) The benefit is not obvious. Although the experiments help to understand the method better, the benefits of the proposed method are not obvious. It seems that the initial Dice coefficient performance already improves (although c=1 is not shown in Table 1) with the proposed method. However, it is not clear whether the improved performances at the subsequent coverage values (e.g., 0.95, 0.9, \u2026) are due to improved uncertainty estimation or the initial benefit. It seems that the baseline has higher deltas between two consecutive coverage values. Additional clarifications of the results and a more extensive discussion of the results are required to improve the understanding.\n\n2) The adoption of the proposed setup is limited. As described in the introduction, selective segmentation only predicts voxels that the model is certain about, and the remainder is left for expert annotation (Figure 1). This setup is, in my opinion, not realistic. If a radiologist has to annotate all uncertain voxels in an image, the time gain compared to full manual annotation will most likely be very limited.", "questions_to_address_in_the_rebuttal": "I would like the authors to address points 1a-1c of the weaknesses above.", "detailed_comments": "Minor:\n- It is unclear how the uncertainty loss is obtained from $\\gamma$. It could be helpful to clarify why the non-differentiability of $\\gamma$ results in the uncertainty loss.\n- In the supplementary material, it is written that $\\lambda=2$ has been used, but at the same time, it is also mentioned (in Appendix C.) that the uncertainty loss is increased by a factor of 1000. Isn\u2019t $\\lambda=2000$, then?\n- For a continuation of this work, it might also be interesting to adapt/weight the segmentation loss with the level of uncertainty.\n\nTypos:\n- \u201cactical target\u201d instead of \u201cpractical target\u201d (2 times)\n- In the introduction, the authors write: \u201cMore importantly, we show that the practical target can be decomposed into the training target and a novel uncertainty target\u201d. Shouldn\u2019t it rather be: \u201c... we show that the training target can be decomposed into the practical target and a novel uncertainty target\u201d (according to Figure 1)?", "rating": "2: Weak reject", "justification_of_rating": "The paper provides an interesting approach and extensive evaluation. 
Unfortunately, the structure and writing make the paper hard to read and understand. Therefore, I suggest rejection of the paper unless the readability is improved.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "recommendation": [], "special_issue": "no"}, {"title": "The authors designed an interesting uncertainty-aware method for semantic segmentation and tested it on cardiac and gland images.", "paper_type": "methodological development", "summary": "1. The authors designed an interesting uncertainty-aware method for semantic segmentation and tested it on cardiac and gland images.\n\n2. The paper is well-written, with some descriptions in the Appendix.\n\n3. Experimental results are significant and comparison results seem good.\n\n4. The proposed method is novel.", "strengths": "1. The authors designed an interesting uncertainty-aware method for semantic segmentation and tested it on cardiac and gland images.\n\n2. The paper is well-written, with some descriptions in the Appendix.\n\n3. Experimental results are significant and comparison results seem good.\n\n4. The proposed method is novel.", "weaknesses": "1. Some details of the method are missing.\n\n2. The reference included for the MM-WHS segmentation challenge was wrong.\n\nZhuang, Xiahai, et al. \"Evaluation of algorithms for Multi-Modality Whole Heart Segmentation: An open-access grand challenge.\" Medical Image Analysis 58 (2019): 101537.", "questions_to_address_in_the_rebuttal": "The authors designed an interesting uncertainty-aware method for semantic segmentation and tested it on cardiac and gland images. The paper is well-written; I just have some suggestions:\n\n1. Is there an automated and adaptive method to determine parameter c? Please elaborate in more detail.\n\n2. The experiments have been done on the MM-WHS challenge; please refer to the correct reference:\n\nZhuang, Xiahai, et al. \"Evaluation of algorithms for Multi-Modality Whole Heart Segmentation: An open-access grand challenge.\" Medical Image Analysis 58 (2019): 101537.\n\n3. Do the results in Table 1 have statistical significance?\n\n4. The other concern is how the proposed framework can cope with real clinical studies.", "detailed_comments": "The authors designed an interesting uncertainty-aware method for semantic segmentation and tested it on cardiac and gland images. The paper is well-written; I just have some suggestions:\n\n1. Is there an automated and adaptive method to determine parameter c? Please elaborate in more detail.\n\n2. The experiments have been done on the MM-WHS challenge; please refer to the correct reference:\n\nZhuang, Xiahai, et al. \"Evaluation of algorithms for Multi-Modality Whole Heart Segmentation: An open-access grand challenge.\" Medical Image Analysis 58 (2019): 101537.\n\n3. Do the results in Table 1 have statistical significance?\n\n4. The other concern is how the proposed framework can cope with real clinical studies.", "rating": "4: Strong accept", "justification_of_rating": "The authors designed an interesting uncertainty-aware method for semantic segmentation and tested it on cardiac and gland images. The paper is well-written; I just have some suggestions:\n\n1. Is there an automated and adaptive method to determine parameter c? Please elaborate in more detail.\n\n2. The experiments have been done on the MM-WHS challenge; please refer to the correct reference:\n\nZhuang, Xiahai, et al. 
\"Evaluation of algorithms for Multi-Modality Whole Heart Segmentation: An open-access grand challenge.\" Medical image analysis 58 (2019): 101537.\n\n3. Are the results in Table 1 got statistical significance?\n\n4. The other concern is that how the proposed framework can cope with the real clinical studies?", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": ["Oral"], "special_issue": "yes"}], "comment_id": ["NeoYRW_32mY", "df5FmSeL82P", "6ZfH-wTG0u4", "vy_AN3LXpGq"], "comment_cdate": [1585902887371, 1585366659508, 1585366507494, 1585366157216], "comment_tcdate": [1585902887371, 1585366659508, 1585366507494, 1585366157216], "comment_tmdate": [1585902887371, 1585366659508, 1585366507494, 1585366157216], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2020/Conference/Paper102/AnonReviewer3", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper102/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper102/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper102/Authors", "MIDL.io/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Good clarifications", "comment": "Thank you for your response. My main concern was the clarity of the method section. The authors seem to have improved this part by dividing section 2.2 into two sections and by rewriting some descriptions. Although I cannot verify the improvement, I assume the authors addressed my main concern and will consequently update my score.\n\nI find the results in (c) quite helpful. Please consider adding them as supplementary material. "}, {"title": "Response to Reviewer2", "comment": "We thank you for your positive and constructive feedback. \n\n(1) The simplest way to determine c is by looking at the risk-coverage curve with the desired risk level with some safety margin. As mentioned in Sec. 2.1, there is also an automated and adaptive method available to get c. The Selection with Guaranteed Risk (SGR) algorithm in (Geifman and El-Yaniv, 2017) finds a proper threshold t (which in turn determines c) to achieve the desired performance level with a certain probability. Although it was proposed in a classification setting, it can be applied to segmentation by considering each image as the input sample and taking segmentation quality as the classification loss. \n\n(2) We are sorry for the reference error. It has been replaced with the correct one. \n\n(3) We did the unpaired two-sample t-test and found that there is a statistical significance (two-tailed, p=0.0002/0.0401 for MM-WHS/GlaS). \n\n(4) In clinical practice, the uncertainty-augmented segmentation can be used to improve human-machine collaboration (Nair et al., 2019). The entire segmentation results by the model are presented to radiologists with the uncertainty information (e.g., through color coding for different levels of uncertainty). Instead of checking the entire image carefully including non-interested contours/regions, radiologists can only pay special attention to the uncertain part. This can be useful for example when the segmentation results need to be read by radiologists for further diagnosis, or in the case of manual annotation assistance by radiologists is needed.\n"}, {"title": "Response to Reviewer3", "comment": "We thank you for your time and thoughtful review. 
\n\n(a) We have carefully revised the manuscript following your suggestions to improve the clarity. Specifically, we have rewritten some descriptions and divided Section 2.2 into two sections to clearly separate the description around \gamma and the practical uncertainty loss. Regarding the message of Sec 2.2, the description of \gamma is a rigorous analysis of the problem, which is a key contribution of this paper. We have shown that \gamma is the right objective for transforming the original loss function into a suitable one for selective segmentation. However, because it is not clear how to directly optimize \gamma, we resort to an approximation to it. We discussed how \gamma is related to the uncertainty loss after Eq. 2 (page 5). Given the description of \gamma, it is possible for follow-up research to find a better approximation to \gamma, which is left for future work.\n\n(b) It is true that if the cross-entropy loss can recover the ground-truth distribution q, then no uncertainty loss is needed. However, given the limited capability of the model and practical issues, using the cross-entropy loss does not ensure perfect recovery of q. As a result, an uncertainty loss is needed to complement the cross-entropy. When used together, the final loss is more aligned with our practical target, which improves the performance. As an analogy, consider a test in which a student gets a full score by solving 5 problems out of a total of 10. Solving all 10 problems would not hurt, but when time is insufficient, it is better to focus on the easiest 5 of them instead of all 10. The same idea has motivated related research in the classification scenario (Geifman and El-Yaniv, 2019).\n\n(c) As both our method and the baseline use the same network (with different loss functions), the initial benefit at c=1.0 is a regularization effect of the new loss function, which does not penalize low-confidence wrong predictions heavily. If the benefit came from the regularization effect only, then we would expect the Dice difference between our method and the baseline to monotonically decrease as the coverage is reduced. This is because the uncertain parts, which are prone to error, are continuously removed, and thus the largest possible benefit margin also decreases. When the remaining instances are almost all easy and confidently predicted, different models will have the same performance despite different uncertainty estimation. However, consider the Dice differences at coverages from 1.00 to 0.95, shown below.\n\nCoverage: 1.00 0.995 0.99 0.98 0.97 0.96 0.95\nMM-WHS: 0.83 0.94 0.96 0.95 0.92 0.89 0.86\nGlaS: 1.82 1.87 1.99 2.10 2.13 2.16 2.18\n\nWe can see that the Dice difference first increases as the coverage decreases before it starts to decrease. This should come from better uncertainty estimation. MM-WHS is an easier dataset and thus the difference decreases earlier.\n"}, {"title": "Response to Reviewer1", "comment": "Thanks for your careful review and thoughtful comments!\n\n(1) We actually mentioned the existing uncertainty estimation methods in Appendix A.1, but we agree that the possibility of using alternative uncertainty estimation methods, which is a promising future research topic, is worth more discussion. We note that back-propagation for the uncertainty loss requires that all forward passes of MC-dropout are done, and this will drastically increase memory consumption because the intermediate results of each pass must be saved. 
Explicit discussions about this have been added to Appendix A.1. We also moved the uncertainty map figure to the main paper.\n\n(2) For the image split, currently the threshold is applied to individual voxels/pixels, following earlier work in this direction (Nair et al., 2019; Sander et al., 2019). In our experiments, most of the uncertain/certain pixels are quite clustered. In clinical practice, the proposed framework can be used to improve human-machine collaboration. The model's entire segmentation results are presented to radiologists along with the uncertainty information (e.g., through color coding for different levels of uncertainty). Instead of carefully checking the entire image, including contours/regions of no interest, radiologists only need to pay special attention to the uncertain parts. This can be useful, for example, when the segmentation results need to be read by radiologists for further diagnosis, or when manual annotation assistance from radiologists is needed."}], "comment_replyto": ["6ZfH-wTG0u4", "IOIDtQK7OI", "2Sd0yHWBkx", "2GKuZ__P49"], "comment_url": ["https://openreview.net/forum?id=DKu6-Bie3w&noteId=NeoYRW_32mY", "https://openreview.net/forum?id=DKu6-Bie3w&noteId=df5FmSeL82P", "https://openreview.net/forum?id=DKu6-Bie3w&noteId=6ZfH-wTG0u4", "https://openreview.net/forum?id=DKu6-Bie3w&noteId=vy_AN3LXpGq"], "meta_review_cdate": 1586183340685, "meta_review_tcdate": 1586183340685, "meta_review_tmdate": 1586183340685, "meta_review_ddate": null, "meta_review_title": "MetaReview of Paper102 by AreaChair1", "meta_review_metareview": "The reviewers agree that the proposed method is novel and interesting, and that the claims are backed by good experimental results. The questions raised during the review have been answered well. I thus recommend this paper be accepted.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper102/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=DKu6-Bie3w&noteId=Bz-DVT4ryn"], "decision": "accept"}
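
The selective-segmentation mechanism discussed throughout this thread (a threshold t on voxel-level uncertainty u_i that keeps a confident subset X_h of coverage c and refers the uncertain subset X_l to radiologists) can be made concrete with a short sketch. Below is a minimal NumPy illustration under stated assumptions, not the authors' released code: the function name `dice_at_coverage`, the use of softmax confidence as the certainty score, and the binary foreground Dice are simplifications for exposition.

```python
import numpy as np

def dice_at_coverage(probs, labels, coverage, fg=1):
    """Dice evaluated only on the confident voxel subset X_h.

    probs    : (N, C) array of per-voxel softmax outputs
    labels   : (N,) array of ground-truth class indices
    coverage : fraction c of voxels the model keeps; the uncertain
               fraction 1 - c (the subset X_l) is referred to experts
    fg       : foreground class index for the binary Dice
    """
    confidence = probs.max(axis=1)                # voxel-wise confidence
    t = np.quantile(confidence, 1.0 - coverage)   # threshold t keeping ~c of voxels
    keep = confidence >= t                        # confident subset X_h
    preds = probs.argmax(axis=1)[keep]
    truth = labels[keep]
    inter = np.sum((preds == fg) & (truth == fg))
    return 2.0 * inter / (np.sum(preds == fg) + np.sum(truth == fg) + 1e-8)
```

Sweeping the coverage from 1.0 downward with such a function traces out the risk-coverage curve mentioned in the response to Reviewer2, from which c can be chosen with a safety margin.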
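Similarly, the significance check reported in point (3) of the response to Reviewer2 is a standard unpaired two-sample t-test, which can be run on per-case Dice scores with SciPy. The sketch below uses hypothetical placeholder values rather than the paper's data (the rebuttal reports two-tailed p = 0.0002 for MM-WHS and p = 0.0401 for GlaS).

```python
from scipy import stats

# Hypothetical per-case Dice scores for illustration only.
dice_ours     = [0.91, 0.93, 0.90, 0.94, 0.92]
dice_baseline = [0.88, 0.90, 0.87, 0.91, 0.89]

# Unpaired two-sample t-test; SciPy's default is two-tailed.
t_stat, p_value = stats.ttest_ind(dice_ours, dice_baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```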