{"forum": "2xJCo7lgB8", "submission_url": "https://openreview.net/forum?id=-flLIL52Gq", "submission_content": {"pdf": "/pdf/01ae09c436efb7879294e037c8be2df584c2e189.pdf", "keywords": ["Multiple Sclerosis", "New and enlarging lesions", "longitudinal MRI", "Bayesian Deep Learning."], "track": "short paper", "authorids": ["nazsepah@cim.mcgill.ca", "raghav@cim.mcgill.ca", "douglas.arnold@mcgill.ca", "dprecup@cs.mcgill.ca", "arbel@cim.mcgill.ca"], "title": "Exploring Bayesian Deep Learning Uncertainty Measures for Segmentation of New Lesions in Longitudinal MRIs", "authors": ["Nazanin Mohammadi Sepahvand", "Raghav Mehta", "Douglas Lorne Arnold", "Doina Precup", "Tal Arbel"], "paper_type": "well-validated application", "abstract": "In this paper, we develop a modified U-Net architecture to accurately segment new and enlarging lesions in longitudinal MRI, based on multi-modal MRI inputs, as well as subtraction images between timepoints, in the context of large-scale clinical trial data for patients with Multiple Sclerosis (MS). 
We explore whether MC-Dropout measures of uncertainty lead to confident assertions when the network output is correct, and are uncertain when incorrect, thereby permitting their integration into clinical workflows and downstream inference tasks.", "paperhash": "sepahvand|exploring_bayesian_deep_learning_uncertainty_measures_for_segmentation_of_new_lesions_in_longitudinal_mris", "_bibtex": "@misc{\nsepahvand2020exploring,\ntitle={Exploring Bayesian Deep Learning Uncertainty Measures for Segmentation of New Lesions in Longitudinal {\\{}MRI{\\}}s},\nauthor={Nazanin Mohammadi Sepahvand and Raghav Mehta and Douglas Lorne Arnold and Doina Precup and Tal Arbel},\nyear={2020},\nurl={https://openreview.net/forum?id=-flLIL52Gq}\n}"}, "submission_cdate": 1579955778883, "submission_tcdate": 1579955778883, "submission_tmdate": 1587172216513, "submission_ddate": null, "review_id": ["c05HR_w4sJ", "WjxdQrUMk", "AHWZzVfIv1", "P0ujWLgdIQ"], "review_url": ["https://openreview.net/forum?id=-flLIL52Gq&noteId=c05HR_w4sJ", "https://openreview.net/forum?id=-flLIL52Gq&noteId=WjxdQrUMk", "https://openreview.net/forum?id=-flLIL52Gq&noteId=AHWZzVfIv1", "https://openreview.net/forum?id=-flLIL52Gq&noteId=P0ujWLgdIQ"], "review_cdate": [1583916165575, 1583874495595, 1583840425678, 1582302384647], "review_tcdate": [1583916165575, 1583874495595, 1583840425678, 1582302384647], "review_tmdate": [1585229294513, 1585229294003, 1585229293503, 1585229292998], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper298/AnonReviewer2"], ["MIDL.io/2020/Conference/Paper298/AnonReviewer4"], ["MIDL.io/2020/Conference/Paper298/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper298/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["2xJCo7lgB8", "2xJCo7lgB8", "2xJCo7lgB8", "2xJCo7lgB8"], "review_content": [{"title": "The paper is well written and clearly motivated. 
The introduction excites the reader, but as the methodology commences, the paper lacks depth and justifications for different choices in the experimental setup.", "review": "Pros:\n- Well written\n- Clearly motivated\n- Quality and clarity of the presentation are great\n- Experiments are conducted on a very large dataset\n\nCons:\n- Unclear methodology (1): for n volumes, you end up with n-1 subtraction images. How can you multiply these elementwise with n volumes? \n- Unclear methodology (2): How is MC dropout applied, or where is dropout placed in the network?\n- Originality: The way the focus is placed in this work forces me to question the novelty. I'd have loved to see more details and justifications on the design choices of the network input.\n- Data (Minor): Why was T2 used instead of FLAIR?\n- How would one determine a threshold on uncertainty, which commonly is not between 0 and 1, on a test set for which training set uncertainties are not known?\n- What precisely is meant by \"at reference\"?\n- What exactly is the output of the 3D U-Net? One segmentation volume, or 3 of them?\n- The reported metric: ROC and AUROC are not suitable from my point of view in this context. I'd assume that lesion and background pixels are heavily imbalanced, which calls for the Precision-Recall curve and the respective area under it.", "rating": "2: Weak reject", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "Well-motivated short paper with several shortcomings", "review": "Quality and clarity:\nThe short paper is well-written and easy to follow.\n\nSignificance:\nThe presented work is very similar to the work of Nair et al., 2020. The difference is mainly the modified segmentation task. 
Due to the similarity of the problem statement, the methods used, and the significance of the reported result (and the fact that this work is not introduced as a short paper of an existing publication), the benefit for the readership is limited.\n\nPros:\n- The work is well-motivated, and the short paper nicely introduces the problem.\n- By focusing on the assertion confidences, this work addresses a critical issue with regard to the clinical integration of DL approaches.\n\nCons:\n- The management of the available space is poor. Space limitations are mentioned as a reason not to show additional results. At the same time, a large figure of a 3D U-Net architecture is presented. The benefit of showing architecture details is minimal, especially because the essential information about the dropout locations is missing. I would prefer seeing additional results rather than the network architecture.\n- The benefit of the proposed approach is unclear. The work mainly shows that ignoring uncertain voxels leads to improved results. As I understand it, observing such a benefit only requires that some FP/FN voxels express uncertainty, which should be the case for any uncertainty estimation method. The method should thus, at least, be compared to the standard softmax probability (or entropy) output of the network to assess the benefit of the proposed method.\n- In the abstract, the work claims: \u201cWe explore whether MC-Dropout measures of uncertainty lead to confident assertions when the network output is correct, and are uncertain when incorrect [...]\u201d. I do not see how the results support this claim since the evaluation only requires that the certain voxels are correct (as correctly mentioned in the conclusion).\n\nMinor:\n- In the text, the ROC is defined as TPR vs. FPR, whereas in the plot it is TPR vs. FDR.\n- Baseline (in Figure 3) is not explained. 
I assume it to be the absence of an uncertainty threshold.", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Validated application for lesion segmentation using Bayesian deep learning", "review": "Summary: This paper explores uncertainty measurement in a lesion segmentation task. A U-Net architecture is used as the backbone network. A Monte-Carlo Dropout approach is used to measure the uncertainty. New or enlarging (NE) lesions are the main segmentation target, thus subtraction images are also fed to the network. \n\nPros: The authors accomplished a complete NE lesion segmentation task using a 3D U-Net architecture, and incorporated MC-dropout approaches to measure the uncertainty. The whole paper is well written and easy to follow. \n\nCons: \n1) This work is more like a reproduction of (a part of) previous work (Nair, MIA, 2020). The major difference is the validation dataset. Personally, I think this paper lacks novelty. The authors emphasized that the segmentation task for NE lesions is challenging, but they did not give any support for this claim. \n2) Since the authors claimed \"we develop a modified U-Net\", the modified part should be well explained. I cannot find any major difference from the original U-Net architecture except for the input data.\n3) More details should be explained within the limited pages, such as the network architecture and the detailed filtering. \n4) Typos and grammatical errors need to be fixed, such as 'a test set', 'Figure 3.', \"\u2018t\u2019\", 'follow the same procedure followed by...' \n\nComments: \nI think the authors could go deeper into uncertainty measurement, not just change the dataset. As claimed in the abstract, \"... thereby permitting their integration into clinical workflows and downstream inference tasks\u201d. 
Going deeper into downstream clinical workflows would be more interesting than re-validating previous work.", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "MCDO for the segmentation of 'New/Enlarging' lesions, well-written, promising results, but proprietary data", "review": "The paper uses MCDO for uncertainty estimation in the segmentation of 'New/Enlarging' brain lesions. It is well written and the problem statement is clear. With the use of uncertainty information, they leap from deterministic segmentation, which can be perilous in the given medical context, to a probabilistic approach. The validation is sound and the results are promising. However, in an extended version of the paper, I would love to see a more comprehensive list of methods for uncertainty estimation. MCDO is not the only one and it has its failure modes. From a decent comparison of such methods, we can learn more about the nature of this important medical problem as well as the performance of other methods on this real-life application. \n\nOne thing I must complain about is the use of a proprietary dataset. It sounds like a good collection, but the closed-source nature of medical data always leaves a bad taste. In my opinion, proprietary datasets, file formats, etc. hinder progress overall. \n\nIn summary, the paper is of interest to the MIDL audience and could benefit from further discussion. 
", "rating": "3: Weak accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1585261008267, "meta_review_tcdate": 1585261008267, "meta_review_tmdate": 1585261008267, "meta_review_ddate": null, "meta_review_title": "MetaReview of Paper298 by AreaChair1", "meta_review_metareview": "While the reviewers agree that the paper is well written and that the application is relevant, they also share concerns about the novelty and presentation of this work.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper298/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=-flLIL52Gq&noteId=UJPdqdEvQlB"], "decision": "reject"}