{"forum": "S1lXP-nJxE", "submission_url": "https://openreview.net/forum?id=S1lXP-nJxE", "submission_content": {"title": "Training CNNs for Multimodal Glioma Segmentation with Missing MR Modalities", "authors": ["Karin van Garderen", "Marion Smits", "Stefan Klein"], "authorids": ["k.vangarderen@erasmusmc.nl", "m.smits@erasmusmc.nl", "s.klein@erasmusmc.nl"], "keywords": ["convolutional neural network", "glioma segmentation", "multimodal"], "TL;DR": "We adapted the well-known UNet architecture to produce CNNs that are robust to missing MR modalities in glioma segmentation. ", "abstract": "Missing data is a common problem in machine learning, and in retrospective imaging research it is often encountered in the form of missing imaging modalities. We propose to take into account missing modalities in the design and training of neural networks, to ensure that they are capable of providing the best possible prediction even when one of the modalities is not available. This would enable algorithms to be applied to subjects with fewer available modalities, without leaving out the same information in other subjects or applying data imputation. This concept is evaluated in the context of glioma segmentation, which is a problem that has received much attention in part due to the BraTS multi-modal segmentation challenge. The UNet architecture has been shown to be effective in this problem and therefore it serves as the reference method in this paper. To make the network robust to missing data we leveraged the dropout principle during training and applied this to the UNet architecture, but also to variations on the UNet architecture inspired by multimodal learning. These networks drastically improved the performance with missing modalities, while only performing slightly worse on the full dataset.", "pdf": "/pdf/53b0858fd167638af077e266434763363f32b46c.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "garderen|training_cnns_for_multimodal_glioma_segmentation_with_missing_mr_modalities"}, "submission_cdate": 1544696155138, "submission_tcdate": 1544696155138, "submission_tmdate": 1545069835486, "submission_ddate": null, "review_id": ["B1e2BeMjQN", "Skg2MB3OXN", "rJxY0gbpX4"], "review_url": ["https://openreview.net/forum?id=S1lXP-nJxE¬eId=B1e2BeMjQN", "https://openreview.net/forum?id=S1lXP-nJxE¬eId=Skg2MB3OXN", "https://openreview.net/forum?id=S1lXP-nJxE¬eId=rJxY0gbpX4"], "review_cdate": [1548587075908, 1548432660195, 1548714193354], "review_tcdate": [1548587075908, 1548432660195, 1548714193354], "review_tmdate": [1548856739947, 1548856729936, 1548856692982], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper43/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper43/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper43/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["S1lXP-nJxE", "S1lXP-nJxE", "S1lXP-nJxE"], "review_content": [{"pros": "\nThis paper applies dropout to different UNet-based architectures during training to tackle the problem of missing modalities in the inference. 
The presented method was validated on the public BRATS dataset for multimodal glioma segmentation.\n\n+ The paper is well motivated and clearly written;\n+ The method is validated on one publicly available dataset;\n+ The idea of this paper is straightforward;\n+ The combination of dropout with three different network architectures is studied.\n\n", "cons": "\n- Innovation of the paper is relatively limited. The dropout technique and the three network architectures are not new in dealing with missing modalities and have already been used in studies published in MICCAI and TMI;\n- The paper lacks a discussion of existing works, as well as a comparison with them. There are indeed some very good works addressing the issue of missing modalities;\n- The experimental validation is not comprehensive enough. Only the scenario of one modality missing was considered. The authors didn\u2019t report the performance when more modalities are missing. Also, as mentioned in the discussion section, the information from one MR modality may not be entirely removed in the late fusion network, which could affect the results. \n- Although dropout increases the network robustness to missing modalities, the network performance on the full dataset decreases.\n- Since the ensemble and late fusion network are trained for each modality separately, do they cost four times more training time than the single UNet network? \n\n", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "This paper deals with the issue of missing MRI modalities in multimodal glioma segmentation. This issue is more commonly addressed by replacing the imaging modality with e.g. the average of the remaining images in the dataset or by synthetically generating these modalities (e.g. Bowles et al., International Workshop on Simulation and Synthesis in Medical Imaging, 2016).", "cons": "The main concerns lie in the originality/novelty of this work and the evaluation. For more details see below:\n\n- The concept of dropout has long been used to improve generalisation of models and to tackle sparse inputs.\n- It is unclear how the label fusion model works exactly. The Figure 2 diagrams could be improved to reflect what the models are actually doing.\n- I understand that the authors want to avoid data imputation of any kind, but it would be useful to see a comparison of the two approaches. If data imputation is performing significantly worse, then it is worth the extra effort of generating synthetic images or computing the missing modality from the remaining data.\n- In terms of evaluation, results seem a bit unstable across folds, so it would be useful to show results for all folds of the cross-validation. I understand the time constraints, but it does not serve the reader much to only see results from 2 folds.\n- The boxplots in Figure 3 only favour the ensemble approach for the FLAIR images. The advantage of using ensemble or late fusion is not clear in this case.", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "This paper deals with brain tumor segmentation on MR images with missing modalities. 
The paper presents a comparative study of different U-Net based architectures to deal with missing data, comprising a standard U-Net, a U-Net with dropout in the input layer, an ensemble approach and a late fusion approach.\n\nThe paper is well written, easy to follow and validated on a well-known dataset (BRATS). The authors tackle a challenging task that is still an open problem for the MIC community. The methodology is fairly simple and seems to have a significant impact on the results.", "cons": "- My main concern with this work is the lack of mention of, and comparison with, very similar approaches in the literature, such as https://arxiv.org/pdf/1607.05194.pdf. This paper tackles exactly the same problem and proposes a fairly similar strategy, but is not even mentioned in the manuscript. The authors should at least discuss the differences with this work.\n\n- The authors only validated the proposed approach for a single missing modality and for a binary segmentation scenario. Given that BRATS provides 4 modalities and multi-label annotations, why not validate with more missing modalities and in the context of multi-label segmentation? This would make a more solid validation of the proposed architectures. Moreover, it would be interesting to see how the proposed methods perform in the absence of multiple modalities.\n\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["B1eiNiR54V", "HyxowhAc4N", "Syg8I6CcEE"], "comment_cdate": [1549622067214, 1549622371369, 1549622606110], "comment_tcdate": [1549622067214, 1549622371369, 1549622606110], "comment_tmdate": [1555946013758, 1555946013503, 1555946013251], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper43/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper43/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper43/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Rebuttal - related literature and further validation", "comment": "We would like to thank the reviewer for the comments and suggestions.\n\n- With respect to the related literature, the reviewer has a good point. We would like to address our contribution with respect to related literature here and in the final version, and we thank the reviewer for the suggestion of a similar paper. The main contribution of our work with respect to the HeMIS approach is the comparison with the standard (UNet) approach and the UNet with dropout on the input layer. By comparing the ensemble and fusion network with those, which are similar in all aspects besides the architecture, we are able to evaluate not only the added benefit of a multi-pathway architecture, but also the potential decrease in performance on the full dataset. \nAlthough the approaches of HeMIS and van Tulder et al. (https://doi.org/10.1109/TMI.2018.2868977) can be considered more sophisticated through their use of a specific cross-modality representation layer, \nthey do not evaluate the possible loss of information when combining modalities at a later stage of the network. \nThe most common (state-of-the-art) approach to multi-modal segmentation (in the BraTS 2018 challenge) is to combine the modalities as channels at the input, so it is not unlikely that a multi-pathway architecture induces some loss of performance. 
\nOur experiments show that this is indeed the case, although the loss in performance is minor. Also, the UNet Dropout network shows that the mere use of dropout already leads to an improved performance with missing sequences, although the multi-pathway approach does perform better.\n\n- Thank you for the suggestion to include experiments with multiple missing modalities. In response to this comment, we have performed the evaluation with multiple missing modalities. The results are shown below and will be included in the final version. The evaluation with a multi-label segmentation would also be interesting, as the different labels are also based on different modalities, but we would leave this evaluation for future work.\n\nFull cross-validation results for full dataset and single missing sequence:\nModel | Full | No T1 | No T2 | No T1Gd | No Flair||\nUNet | 0.87 | 0.77 | 0.69 | 0.81 | 0.53 ||\nUNet Dropout | 0.85 | 0.83 | 0.83 | 0.85 | 0.75 ||\nFusion | 0.85 | 0.85 | 0.82 | 0.85 | 0.78 ||\nEnsemble | 0.84 | 0.85 | 0.83 | 0.84 | 0.79 ||\n\nResults for two missing sequences (three folds, though a full cross-validation will also be performed for final version):\nModel | t1Gd flair | t1Gd t2 | t1 flair | t1 t1Gd | t1 t2 | t2 flair||\nUNet | 0.56 | 0.61 | 0.65 | 0.77 | 0.49 | 0.20 ||\nUNet Dropout | 0.74 | 0.79 | 0.74 | 0.84 | 0.65 | 0.55 ||\nFusion | 0.84 | 0.85 | 0.84 | 0.87 | 0.85 | 0.71 ||\nEnsemble | 0.84 | 0.86 | 0.84 | 0.88 | 0.86 | 0.80 ||\nResults for three missing sequences (three folds, though a full cross-validation will also be performed for final version):\nModel | t1Gd t2 flair | t1 t1Gd flair | t1 t1Gd t2 | t1 t2 flair||\nUNet | 0.03 | 0.57 | 0.37 | 0.18 ||\nUNet Dropout | 0.20 | 0.45 | 0.60 | 0.19 ||\nFusion | 0.64 | 0.81 | 0.84 | 0.73 ||\nEnsemble | 0.72 | 0.83 | 0.84 | 0.74 ||\n"}, {"title": "Rebuttal", "comment": "We thank the reviewer for the comments.\n\n- We would like to address the lack of innovation and discussion about existing works together, as we believe they are strongly related. The reviewer has a good point that we have not made our contribution with respect to existing literature clear, so we would like to elaborate on this here and in the final version.\n\nAlthough the concept of using dropout for missing data is not new, our contribution is in the systematic and gradual comparison of the baseline UNet, the addition of dropout and the multi-pathway architectures. \nTo our knowledge, such an assessment has not been published before. Similar approaches to the Ensemble architecture have been published (https://doi.org/10.1109/TMI.2018.2868977, https://arxiv.org/pdf/1607.05194.pdf), \nbut in these studies the effect of a multi-pathway approach is not compared to a standard single-pathway network, even though that is the most common (state of the art) approach to multimodal segmentation.\nOur experiments show not only the clear benefit of the multi-pathway networks with respect to the baseline and the mere addition of dropout, but also the (minor) loss of performance on a full dataset.\n\n- The experimental validation has been extended to multiple missing modalities, and the results provide an even stronger case for the multi-pathway architectures. These are included below and will be added to the final version for a full cross-validation. To address the fusion network: indeed, the sequences may not be removed completely by dropout during training. 
We would like to stress that they are removed completely at all times during evaluation, so the results are representative of the performance with missing data.\n\n- Indeed, the network performance is decreased slightly for the full dataset. We would like to argue that this is an important result, as it shows that the benefit of a good performance with missing sequences comes with a small price for full datasets. There is no free lunch here.\n\n- To address training times for the ensemble and late fusion network: yes, they do take a bit longer to train. However, training a single pathway requires less time and much less memory than training a full network, as the pathways are smaller and are trained for fewer epochs. \n\nFull cross-validation results (mean Dice score):\nModel | Full | No T1 | No T2 | No T1Gd | No Flair||\nUNet | 0.87 | 0.77 | 0.69 | 0.81 | 0.53 ||\nUNet Dropout | 0.85 | 0.83 | 0.83 | 0.85 | 0.75 ||\nFusion | 0.85 | 0.85 | 0.82 | 0.85 | 0.78 ||\nEnsemble | 0.84 | 0.85 | 0.83 | 0.84 | 0.79 ||\n\nResults for two missing sequences (three folds, though a full cross-validation will also be performed for the final version):\nModel | t1Gd flair | t1Gd t2 | t1 flair | t1 t1Gd | t1 t2 | t2 flair||\nUNet | 0.56 | 0.61 | 0.65 | 0.77 | 0.49 | 0.20 ||\nUNet Dropout | 0.74 | 0.79 | 0.74 | 0.84 | 0.65 | 0.55 ||\nFusion | 0.84 | 0.85 | 0.84 | 0.87 | 0.85 | 0.71 ||\nEnsemble | 0.84 | 0.86 | 0.84 | 0.88 | 0.86 | 0.80 ||\nResults for three missing sequences (three folds, though a full cross-validation will also be performed for the final version):\nModel | t1Gd t2 flair | t1 t1Gd flair | t1 t1Gd t2 | t1 t2 flair||\nUNet | 0.03 | 0.57 | 0.37 | 0.18 ||\nUNet Dropout | 0.20 | 0.45 | 0.60 | 0.19 ||\nFusion | 0.64 | 0.81 | 0.84 | 0.73 ||\nEnsemble | 0.72 | 0.83 | 0.84 | 0.74 ||\n"}, {"title": "Rebuttal", "comment": "Thank you for the helpful suggestions.\n\n- We agree with the reviewer that we should do a better job of stating the contribution of this work with respect to existing literature. The use of dropout in itself is indeed not innovative, \nbut we believe that the added benefit of this paper with respect to existing similar approaches (https://doi.org/10.1109/TMI.2018.2868977, https://arxiv.org/pdf/1607.05194.pdf) is in the comparison of a multi-pathway architecture to the standard approach (UNet) and the mere use of dropout. Our results show that although a big performance gain can already be achieved with only the use of dropout, the multi-pathway architectures do give an additional benefit when dealing with missing sequences. Also, we show that this performance gain comes with a small price of reduced performance on the full dataset.\n\n- Thank you for indicating that the diagram is unclear. We will improve the figure for a final version. \nWhereas the ensemble network concatenates the pathways at the classification layer, leading to two features per pathway (or sequence), the fusion network concatenates the pathways one step earlier, at the last convolutional layer. \nThis layer has 2c (32) feature maps per pathway, which are concatenated to 8c (128) feature maps. From these 8c feature maps, a 1x1x1 convolution leads directly to the two class probabilities per voxel. Applying dropout to the 8c features of the fusion layer, instead of the 8 class probabilities of the ensemble layer, means that some information from each sequence will probably always survive during training. \n\n- We agree that it would be interesting to compare this approach to data imputation. 
However, we also believe that, for a fair comparison, we would have to use a state-of-the-art synthesis method. This is not feasible for us to include in this paper, but we will keep it in mind for future work. Thank you for the suggestion.\n\n- Indeed, we have in the meantime finished the full cross-validation. Results are shown below, including the results for multiple missing sequences, which are not yet finished but will be completed for the final version.\n\n- The reviewer indicates that the added benefit of these methods is mostly in the case of missing FLAIR images. This is indeed the case, indicating that the UNet model learns mostly from the FLAIR image while the others learn to incorporate other sequences in their prediction. We would like to stress that the FLAIR sequence is very often missing in our experience, so this is a situation that occurs more often than (for example) a missing T1W image.\nMoreover, it is worth noting that in the case of more than one missing modality (experiments added in response to Reviewer 1; results shown below), there are several other scenarios where the Ensemble and Fusion approaches outperform UNet and UNet-Dropout. \n\nFull cross-validation results (mean Dice score across patients):\nModel | Full | No T1 | No T2 | No T1Gd | No Flair||\nUNet | 0.87 | 0.77 | 0.69 | 0.81 | 0.53 ||\nUNet Dropout | 0.85 | 0.83 | 0.83 | 0.85 | 0.75 ||\nFusion | 0.85 | 0.85 | 0.82 | 0.85 | 0.78 ||\nEnsemble | 0.84 | 0.85 | 0.83 | 0.84 | 0.79 ||\n\nResults for two missing sequences (three folds, though a full cross-validation will also be performed for the final version):\nModel | t1Gd flair | t1Gd t2 | t1 flair | t1 t1Gd | t1 t2 | t2 flair||\nUNet | 0.56 | 0.61 | 0.65 | 0.77 | 0.49 | 0.20 ||\nUNet Dropout | 0.74 | 0.79 | 0.74 | 0.84 | 0.65 | 0.55 ||\nFusion | 0.84 | 0.85 | 0.84 | 0.87 | 0.85 | 0.71 ||\nEnsemble | 0.84 | 0.86 | 0.84 | 0.88 | 0.86 | 0.80 ||\nResults for three missing sequences (three folds, though a full cross-validation will also be performed for the final version):\nModel | t1Gd t2 flair | t1 t1Gd flair | t1 t1Gd t2 | t1 t2 flair||\nUNet | 0.03 | 0.57 | 0.37 | 0.18 ||\nUNet Dropout | 0.20 | 0.45 | 0.60 | 0.19 ||\nFusion | 0.64 | 0.81 | 0.84 | 0.73 ||\nEnsemble | 0.72 | 0.83 | 0.84 | 0.74 ||\n"}], "comment_replyto": ["rJxY0gbpX4", "B1e2BeMjQN", "Skg2MB3OXN"], "comment_url": ["https://openreview.net/forum?id=S1lXP-nJxE&noteId=B1eiNiR54V", "https://openreview.net/forum?id=S1lXP-nJxE&noteId=HyxowhAc4N", "https://openreview.net/forum?id=S1lXP-nJxE&noteId=Syg8I6CcEE"], "meta_review_cdate": 1551356614027, "meta_review_tcdate": 1551356614027, "meta_review_tmdate": 1551703161826, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The paper addresses the problem of brain tumor segmentation when image modalities are missing during inference - a common problem when details of imaging protocols differ between centers. \n\nThe reviewers acknowledge the relevance of the problem, and that the paper is well written. They question the novelty of the work, though, and seem to expect a deeper empirical analysis to credit this study for its evaluation alone. The authors offer some of this added information in the rebuttal, presenting performances for scenarios where two or three modalities are missing. Two of the reviewers recommend rejecting the paper, while one is positive about it.\n\nI side with the first two. 
I feel the results presented in the rebuttal are remarkable (with T2, FLAIR, T1gad missing, the presented approach reaches almost perfect scores, while standard UNets fail), but I would agree with R1 that more than just one segmentation task should be looked at: the \"whole tumor\" segmentation the authors chose is fairly simple; it might have been insightful to also learn about the \"tumor core\" and \"active tumor\" tasks, or to see results for a different data set. Once the authors are able to show consistent results on a slightly larger set of tasks/applications and in comparison to a few other baseline methods, I feel this will be a very valid study. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=S1lXP-nJxE&noteId=Syx0TfIHIV"], "decision": "Reject"}
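
The discussion above repeatedly refers to training a UNet with "dropout on the input layer" so that whole MR modalities can be missing at inference. The following is a minimal sketch of that idea, not the authors' released code: the framework (PyTorch), the drop probability, the tensor layout and the safeguard against dropping every modality of a sample are all assumptions made for illustration.

```python
import torch


def modality_dropout(x: torch.Tensor, p_drop: float = 0.25, training: bool = True) -> torch.Tensor:
    """Randomly zero whole modality channels of a (batch, modality, D, H, W) tensor.

    Unlike element-wise dropout, one Bernoulli mask is drawn per modality and applied to
    the entire volume, mimicking a missing MR sequence. No rescaling is applied, so a
    truly missing modality at test time (a zeroed channel) matches what the network saw
    during training. The probability p_drop is a placeholder, not the paper's value.
    """
    if not training:
        return x
    b, m = x.shape[0], x.shape[1]
    # Keep-mask per (sample, modality), broadcast over the spatial dimensions.
    keep = (torch.rand(b, m, 1, 1, 1, device=x.device) > p_drop).to(x.dtype)
    # Simplistic safeguard: if every modality of a sample was dropped, keep them all.
    all_dropped = keep.sum(dim=1, keepdim=True) == 0
    keep = torch.where(all_dropped, torch.ones_like(keep), keep)
    return x * keep


if __name__ == "__main__":
    # Example: 2 subjects, 4 modalities (T1, T1Gd, T2, FLAIR), 32^3 patches.
    patch = torch.randn(2, 4, 32, 32, 32)
    print(modality_dropout(patch, p_drop=0.25, training=True).shape)  # torch.Size([2, 4, 32, 32, 32])
```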
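The third rebuttal describes the late fusion variant as follows: each modality-specific pathway ends in 2c (32) feature maps, the four pathways are concatenated to 8c (128) maps, dropout is applied to those fused features, and a 1x1x1 convolution maps them to the two class probabilities per voxel. The sketch below mirrors that description under stated assumptions (PyTorch, feature-map-wise Dropout3d, softmax output); the pathway networks themselves are omitted and all names are hypothetical.

```python
import torch
import torch.nn as nn


class LateFusionHead(nn.Module):
    """Fusion head as described in the rebuttal: concat pathway features, dropout, 1x1x1 conv."""

    def __init__(self, features_per_pathway: int = 32, n_pathways: int = 4,
                 n_classes: int = 2, p_drop: float = 0.25):
        super().__init__()
        # Dropout3d removes whole feature maps of the fused tensor, so some maps from
        # each modality pathway are likely to survive in every training step.
        self.drop = nn.Dropout3d(p_drop)
        self.classifier = nn.Conv3d(features_per_pathway * n_pathways, n_classes, kernel_size=1)

    def forward(self, pathway_features):
        # pathway_features: list of four (batch, 32, D, H, W) tensors, one per modality.
        fused = torch.cat(pathway_features, dim=1)            # (batch, 128, D, H, W)
        fused = self.drop(fused)
        return torch.softmax(self.classifier(fused), dim=1)   # per-voxel class probabilities


if __name__ == "__main__":
    feats = [torch.randn(1, 32, 32, 32, 32) for _ in range(4)]
    print(LateFusionHead()(feats).shape)  # torch.Size([1, 2, 32, 32, 32])
```

By contrast, the ensemble variant mentioned in the same reply would concatenate per-pathway class scores (two per pathway, eight in total) and apply dropout to those, so an entire modality's contribution can vanish during a training step.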