{"forum": "H-PvDNIex", "submission_url": "https://openreview.net/forum?id=fPvWLY0LAa", "submission_content": {"keywords": ["Brain Tumour Segmentation", "Deep Neural Network", "Uncertainty Evaluation"], "TL;DR": "Developing a metric to evaluate uncertainties produced for the task of brain tumour segmentation", "track": "short paper", "authorids": ["raghav@cim.mcgill.ca", "angelos.filos@cs.ox.ac.uk", "yarin.gal@cs.ox.ac.uk", "arbel@cim.mcgill.ca"], "title": "Uncertainty Evaluation Metrics for Brain Tumour Segmentation", "authors": ["Raghav Mehta", "Angelos Filos", "Yarin Gal", "Tal Arbel"], "paper_type": "methodological development", "abstract": "In this paper, we describe and explore the metric that was designed to assess and rank uncertainty measures for the task of brain tumour sub-tissue segmentation in the BraTS 2019 sub-challenge on uncertainty quantification. The metric is designed to (1) reward uncertainty measures where high confidence is assigned to correct assertions, and where incorrect assertions are assigned low confidence and (2) penalize measures that have higher percentages of under-confident correct assertions. Here, the workings of the metrics explored based on a number of popular uncertainty measures evaluated on the BraTS2019 dataset", "paperhash": "mehta|uncertainty_evaluation_metrics_for_brain_tumour_segmentation", "pdf": "/pdf/4d92dac5b5dcbb30927e267656253bb7ceeb062e.pdf", "_bibtex": "@inproceedings{\nmehta2020uncertainty,\ntitle={Uncertainty Evaluation Metrics for Brain Tumour Segmentation},\nauthor={Raghav Mehta and Angelos Filos and Yarin Gal and Tal Arbel},\nbooktitle={Medical Imaging with Deep Learning},\nyear={2020},\nurl={https://openreview.net/forum?id=fPvWLY0LAa}\n}"}, "submission_cdate": 1579955770322, "submission_tcdate": 1579955770322, "submission_tmdate": 1587172187763, "submission_ddate": null, "review_id": ["vEAsB5Apd9R", "8q0mVZKVyt7", "Eu6oLejly", "p2E2W6UGgt"], "review_url": ["https://openreview.net/forum?id=fPvWLY0LAa&noteId=vEAsB5Apd9R", "https://openreview.net/forum?id=fPvWLY0LAa&noteId=8q0mVZKVyt7", "https://openreview.net/forum?id=fPvWLY0LAa&noteId=Eu6oLejly", "https://openreview.net/forum?id=fPvWLY0LAa&noteId=p2E2W6UGgt"], "review_cdate": [1584677065344, 1584644273390, 1584155171970, 1583873762892], "review_tcdate": [1584677065344, 1584644273390, 1584155171970, 1583873762892], "review_tmdate": [1585229540909, 1585229540369, 1585229539817, 1585229539320], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper282/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper282/AnonReviewer3"], ["MIDL.io/2020/Conference/Paper282/AnonReviewer4"], ["MIDL.io/2020/Conference/Paper282/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["H-PvDNIex", "H-PvDNIex", "H-PvDNIex", "H-PvDNIex"], "review_content": [{"title": "Novel metrics are proposed to evaluate uncertainty estimation and applied to brain tumor segmentaiton", "review": "The authors propose some metrics based on thresholded uncertainty to evaluate the reliability of uncertainty estimation methods for deep learning-based segmentation, which is of interest to the community. Effect of these metrics on brain tumor segmentation has been shown. 
However, as shown in the results, the proposed metrics failed to rank the different uncertainty estimation methods.\n\nPros:\n1. Considering the ratio of filtered TPs and TNs is a reasonable idea for uncertainty assessment.\n2. The authors showed some results on a brain tumor segmentation task, which helped in understanding the proposed metrics.\n\nCons:\n1. Using Dice based on thresholded uncertainty to evaluate uncertainty estimation methods has been proposed before, for example in the following paper:\n\n[1] Assessing Reliability and Challenges of Uncertainty Estimations for Medical Image Segmentation, MICCAI 2019.\n\nThe authors in [1] found that, based on such a metric, model ensembling performed better than other uncertainty estimation methods, whereas this paper found no obvious winner among the different uncertainty estimation methods according to the metrics used here. Could the authors explain more about this?\n\n2. Following on from the above, the results did not show that the proposed metrics can distinguish good from poor uncertainty estimation methods. How can the effectiveness of the proposed metrics be validated?\n\n", "rating": "3: Weak accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Interesting to see such efforts in a short paper!", "review": "The paper presents an evaluation of recently developed uncertainty measures on brain tumour segmentation.\n\nPros: The paper is well-written and relevant to MIDL topics. Further, it introduces two additional metrics to evaluate the performance of uncertainty estimates on a publicly available database.\n\nCons: Calibration was not performed or discussed here. The paper would have been even stronger if a quantitative assessment against label uncertainty, due to intra-/inter-observer variability, had been performed.\n\nDetailed Feedback:\n- As you might know, predictive uncertainty is underestimated, and calibration has recently been investigated in this context, e.g. Guo et al. [1]. Some methods have claimed better calibration, e.g. Deep Ensembles. So I was wondering whether the reported uncertainty methods were well-calibrated on a validation set or not. It would have been better if the methods had been calibrated before running the evaluation, or if the authors had at least discussed this point in the discussion and conclusion.\n- One of the concluding remarks I was hoping to see is the need for novel techniques and tools that measure label uncertainty, similar to the work of Tomczack et al. [2]. I think this is extremely important, as we need to urge researchers to look at this.\n\n[1] Guo, C., Pleiss, G., Sun, Y. and Weinberger, K.Q., 2017, August. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, Volume 70 (pp. 1321-1330). JMLR.org.\n\n[2] Tomczack, A., Navab, N. and Albarqouni, S., 2019. Learn to estimate labels uncertainty for quality assurance. arXiv preprint arXiv:1909.08058.\n", "rating": "3: Weak accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "The authors propose a metric to assess uncertainty measures in the case of brain tumor segmentation from MRI images.", "review": "- How do the authors envision this approach being used in clinical practice?
How will a radiologist interact with system outputs that provide uncertainty estimates?\n- Please provide more information on the modified 3D U-Net utilized. Did any of the parameters change during experimentation?\n- Additional details with respect to the experimentation must be added.\n- Additional examples capturing the effectiveness of this metric must be provided.\n\n", "rating": "3: Weak accept", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}, {"title": "Valuable, well-written contribution with a few shortcomings", "review": "Quality and clarity:\n- The short paper is well-written and easy to follow.\n\nSignificance:\n- The evaluation of uncertainty estimations in segmentation is crucial. Given the number of existing uncertainty estimation methods, such metrics are critical for comparing the produced uncertainty estimates quantitatively.\n\nPros:\n- The work addresses an important problem.\n- The proposed metric not only rewards uncertainties in the FP and FN regions but also penalizes uncertainties in the TP and TN regions.\n- Figure 1 and Table 1 greatly improve the understanding of the proposed metric.\n\nCons:\n- The proposed metric is rather complicated to interpret since it consists of three sub-metrics and requires different thresholds.\n- The work neither describes how to combine the three sub-metrics nor explains how to combine the values at each threshold. Being able to summarize the metric into one scalar value would be beneficial for broader adoption and better interpretation.\n- The compared uncertainty estimation methods are insufficiently described or cited.\n\nMinor:\n- Typo in Table 1: the TP in the definition of the FTN should probably be a TN.\n- The work mentions inter-rater variability as ground-truth uncertainty. It is arguable whether the desired uncertainty of a model should be similar/identical to the inter-rater disagreement.", "rating": "3: Weak accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1586207990678, "meta_review_tcdate": 1586207990678, "meta_review_tmdate": 1586207990678, "meta_review_ddate": null, "meta_review_title": "MetaReview of Paper282 by AreaChair1", "meta_review_metareview": "This paper presents a simple yet effective method to evaluate uncertainty, applied to the tumor segmentation problem.\n\nThis short paper is well written and the results seem relevant to MIDL.\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper282/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=fPvWLY0LAa&noteId=kw4E3HHxmyV"], "decision": "reject"}
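The abstract and the reviews above describe the challenge metric only in prose: voxels whose uncertainty exceeds a confidence threshold are filtered out of the evaluation, Dice is computed on the remaining voxels, and the ratios of filtered-out true positives and true negatives (the FTP and FTN that the fourth reviewer refers to) penalize measures that are under-confident where the model is actually correct. As a rough sketch only (the function name, the threshold grid, and the toy data below are illustrative assumptions, not taken from the paper or from the official BraTS evaluation code), one way such an evaluation could look is:

```python
import numpy as np

def filtered_uncertainty_stats(pred, gt, unc, tau):
    """At confidence threshold tau, drop voxels with uncertainty > tau,
    then compute Dice on the remaining voxels together with the fraction
    of true positives / true negatives that were filtered out.

    pred, gt : binary arrays (predicted / ground-truth tumour masks)
    unc      : per-voxel uncertainty, assumed normalised to [0, 1]
    tau      : voxels with unc > tau are excluded from the evaluation
    """
    keep = unc <= tau
    eps = 1e-8

    tp_all = (pred == 1) & (gt == 1)
    tn_all = (pred == 0) & (gt == 0)

    tp = (tp_all & keep).sum()
    fp = ((pred == 1) & (gt == 0) & keep).sum()
    fn = ((pred == 0) & (gt == 1) & keep).sum()

    dice = 2.0 * tp / (2.0 * tp + fp + fn + eps)

    # Ratios of filtered-out correct assertions: these penalise measures
    # that are under-confident where the model is right.
    ftp = (tp_all & ~keep).sum() / (tp_all.sum() + eps)
    ftn = (tn_all & ~keep).sum() / (tn_all.sum() + eps)

    return dice, ftp, ftn

# Toy data: corrupt 10% of a ground-truth mask and assign high
# uncertainty to exactly the corrupted voxels (a "good" uncertainty map).
rng = np.random.default_rng(0)
gt = rng.integers(0, 2, size=(64, 64, 64))
flip = rng.random(gt.shape) < 0.1
pred = gt.copy()
pred[flip] = 1 - pred[flip]
unc = np.where(flip, rng.uniform(0.5, 1.0, gt.shape),
                     rng.uniform(0.0, 0.5, gt.shape))

for tau in (0.25, 0.50, 0.75, 1.00):
    dice, ftp, ftn = filtered_uncertainty_stats(pred, gt, unc, tau)
    print(f"tau={tau:.2f}  Dice={dice:.3f}  FTP={ftp:.3f}  FTN={ftn:.3f}")
```

On this toy data, lowering tau raises the filtered Dice (the errors carry high uncertainty and are removed first) while the FTP/FTN ratios grow, which is exactly the trade-off between the three sub-metrics that the reviewers discuss when asking how the values should be combined across thresholds.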