{"forum": "Byg-krBi14", "submission_url": "https://openreview.net/forum?id=Byg-krBi14", "submission_content": {"code of conduct": "I have read and accept the code of conduct.", "paperhash": "hashemi|exclusive_independent_probability_estimation_using_deep_3d_fully_convolutional_densenets_application_to_isointense_infant_brain_mri_segmentation", "title": "Exclusive Independent Probability Estimation using Deep 3D Fully Convolutional DenseNets: Application to IsoIntense Infant Brain MRI Segmentation", "abstract": "The most recent fast and accurate image segmentation methods are built upon fully convolutional  deep  neural  networks. In  particular,  densely  connected  convolutional  neural networks  (DenseNets)  have  shown  excellent  performance  in  detection  and  segmentation tasks.  In this paper,  we propose new deep learning strategies for DenseNets to improve segmenting images with subtle differences in intensity values and features.  In particular, we aim to segment brain tissue on infant brain MRI at about 6 months of age where white matter and gray matter of the developing brain show similar T1 and T2 relaxation times,thus appear to have similar intensity values on both T1- and T2-weighted MRI scans. Brain tissue segmentation at this age is, therefore, very challenging.  To this end, we propose an exclusive multi-label training strategy to segment the mutually exclusive brain tissues with similarity loss functions that automatically balance the training based on class prevalence. Using our proposed training strategy based on similarity loss functions and patch prediction fusion we decrease the number of parameters in the network, reduce the complexity of the training process focusing the attention on less number of tasks, while mitigating the effects of data imbalance between labels and inaccuracies near patch borders.  By taking advantage of these strategies we were able to perform fast image segmentation (less than 90 seconds per 3D volume), using a network with less parameters than many state-of-the-artnetworks (1.4 million parameters), overcoming issues such as 3D vs 2D training and large vs small patch size selection, while achieving the top performance in segmenting brain tissue among all methods tested in first and second round submissions of the isointense infant brain MRI segmentation  (iSeg)  challenge  according  to  the  official  challenge  test  results. Our proposed strategy improves the training process through balanced training and by reducing its complexity while providing a trained model that works for any size input image, and is fast and more accurate than many state-of-the-art methods.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "TL;DR": "Exclusive Independent Probability Estimation using Deep 3D Fully Convolutional DenseNets: Application to IsoIntense Infant Brain MRI Segmentation", "authorids": ["hashemi.s@husky.neu.edu", "sanjay.prabhu@childrens.harvard.edu", "simon.warfield@childrens.harvard.edu", "ali.gholipour@childrens.harvard.edu"], "authors": ["Seyed Raein Hashemi", "Sanjay P. Prabhu", "Simon K. 
Warfield", "Ali Gholipour"], "keywords": ["Deep learning", "Convolutional Neural Network", "FC-DenseNet", "Segmentation"], "pdf": "/pdf/db94cdf670bc716b7901c67e7f54338e01cff933.pdf", "_bibtex": "@inproceedings{hashemi:MIDLFull2019a,\ntitle={Exclusive Independent Probability Estimation using Deep 3D Fully Convolutional DenseNets: Application to IsoIntense Infant Brain {\\{}MRI{\\}} Segmentation},\nauthor={Hashemi, Seyed Raein and Prabhu, Sanjay P. and Warfield, Simon K. and Gholipour, Ali},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=Byg-krBi14},\nabstract={The most recent fast and accurate image segmentation methods are built upon fully convolutional  deep  neural  networks. In  particular,  densely  connected  convolutional  neural networks  (DenseNets)  have  shown  excellent  performance  in  detection  and  segmentation tasks.  In this paper,  we propose new deep learning strategies for DenseNets to improve segmenting images with subtle differences in intensity values and features.  In particular, we aim to segment brain tissue on infant brain MRI at about 6 months of age where white matter and gray matter of the developing brain show similar T1 and T2 relaxation times,thus appear to have similar intensity values on both T1- and T2-weighted MRI scans. Brain tissue segmentation at this age is, therefore, very challenging.  To this end, we propose an exclusive multi-label training strategy to segment the mutually exclusive brain tissues with similarity loss functions that automatically balance the training based on class prevalence. Using our proposed training strategy based on similarity loss functions and patch prediction fusion we decrease the number of parameters in the network, reduce the complexity of the training process focusing the attention on less number of tasks, while mitigating the effects of data imbalance between labels and inaccuracies near patch borders.  By taking advantage of these strategies we were able to perform fast image segmentation (less than 90 seconds per 3D volume), using a network with less parameters than many state-of-the-artnetworks (1.4 million parameters), overcoming issues such as 3D vs 2D training and large vs small patch size selection, while achieving the top performance in segmenting brain tissue among all methods tested in first and second round submissions of the isointense infant brain MRI segmentation  (iSeg)  challenge  according  to  the  official  challenge  test  results. 
Our proposed strategy improves the training process through balanced training and by reducing its complexity while providing a trained model that works for any size input image, and is fast and more accurate than many state-of-the-art methods.},\n}"}, "submission_cdate": 1544406232704, "submission_tcdate": 1544406232704, "submission_tmdate": 1561397318351, "submission_ddate": null, "review_id": ["HkgcMj-IQV", "B1ekIruoXN", "BkgKkaE5mE"], "review_url": ["https://openreview.net/forum?id=Byg-krBi14&noteId=HkgcMj-IQV", "https://openreview.net/forum?id=Byg-krBi14&noteId=B1ekIruoXN", "https://openreview.net/forum?id=Byg-krBi14&noteId=BkgKkaE5mE"], "review_cdate": [1548258066027, 1548612934529, 1548532961206], "review_tcdate": [1548258066027, 1548612934529, 1548532961206], "review_tmdate": [1550002901046, 1548856743936, 1548856736687], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper4/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper4/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper4/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Byg-krBi14", "Byg-krBi14", "Byg-krBi14"], "review_content": [{"pros": "This paper applies a DenseNet with fewer parameters for segmenting the infant brain at the isointense stage.\n\n1. This paper is easy to understand and well formulated.\n2. The proposed method is validated on a public dataset (iSeg).\n3. The proposed method achieves very good results.\n", "cons": "However, it needs to be improved in the following aspects:\n\n1. Though the authors claim they use fewer parameters, I cannot see the strategies that make this possible (using more hyper connections is actually quite trivial and cannot be used as novelty in my understanding; can excluding a label indeed reduce the number of parameters? I doubt it). The authors should list the number of parameters of all the compared networks, and the number of parameters with and without the proposed training strategy. Then we can see what happens.\n2. In addition, the number of parameters cannot represent how hard the network is to train, since we are not sure of the degrees of freedom of the network. Of course, this is only my personal understanding.\n3. I cannot learn much from this paper. The authors should point out what the contributions are.\n4. For the experimental part, I'd like to see some ablation study to validate whether the proposed training strategy indeed works or the good performance is coming from excellent hyper-parameter tuning.\n5. \"The GM labels were concluded from the complement of the already predicted CSF and WM labels.\" This one is useful, but not actually new; I have read papers that use similar strategies for infant segmentation, segmenting CSF vs. WM+GM and then WM vs. GM. More importantly, the proposed strategy is not quite general: you can take the complement to get the GM because the background for brain MRI is usually 0. In other applications, the background is usually not 0.\n\nIf the authors solve the above concerns, I'll choose to accept it.\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "The paper presents a dense 3D-FCNN for segmenting multi-modal infant brain MRI. 
The main contribution is a modified training strategy which optimizes the prediction of tissue classes separately with sigmoids, instead of employing a traditional softmax function. This enables finer control of the precision-recall tradeoff, via a custom F-beta loss. \n\n- The paper proposes a somewhat novel strategy for dealing with highly overlapping classes, for which the trade-off between precision and recall for each class has a significant impact on performance.\n\n- The authors report state-of-the-art performance on the challenging iSeg dataset, where different class regions exhibit low contrast. A statistically significant improvement is also obtained compared to the traditional single-label (i.e., softmax) approach.\n\n- The paper is well written and easy to follow. In particular, the authors did a good job motivating the problem and their proposed method. The method and experiments are clearly described and could be reproduced fairly easily.\n", "cons": "- Contributions w.r.t. existing work are not entirely clear. The proposed loss is similar in terms of goal to the Generalized Dice Loss (Sudre et al., 2017 -- see bottom ref.), where the precision/recall importance of each class is weighted by its size. Moreover, the strategy of processing 3D images in separate patches (both in training and testing) is actually implemented in several segmentation methods, for instance DeepMedic and HyperDenseNet. In fact, this random region crop strategy is fairly standard when training deep segmentation networks from large images.\n\n- The proposed method seems to be tailored to this specific dataset (iSeg), i.e., three classes, two of them having a large overlap. A stronger validation could have been achieved by testing the proposed method on other brain MRI segmentation datasets (e.g., MRBrainS), or on problems where class imbalance is more pronounced (e.g., brain lesion segmentation).\n\nMinor comments:\n\n- The proposed architecture merges modalities in the first layer; however, recent studies have shown that later fusion could lead to better performance (e.g., Dolz et al., 2018). Perhaps the authors could motivate this architecture choice.\n\n- \"calculating sigmoid is less computationally cumbersome for a processing unit compared to softmax especially for large number of labels.\": I doubt this makes a real difference in computation time.\n\n- \"... our 3D FC-DenseNet architecture which is deeper than previous DenseNets with more skip-layer connections and less number of parameters\": This is a bit misleading. For instance, HyperDenseNet introduces skip connections across all layers and all paths (one per modality), and therefore has the maximum number of skip connections for an equivalent number of layers.\n\nCarole H Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, and M Jorge Cardoso. Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 240\u2013248. Springer, 2017.", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "A nice, novel FCNN DenseNet strategy to improve segmentations in data with heterogeneous appearance and features, such as infant MRI data at the iso-intense stage (images that are particularly difficult to segment). 
The strategy employs a novel multi-label, multi-class classification layer, a novel similarity loss function (that allows balancing of precision vs. recall), and patch prediction fusion. This reduces complexity and increases speed. The new method achieves the highest performance on the challenge dataset, with surprisingly high Dice accuracy and low surface distances.", "cons": "Novelty is somewhat incremental. \nWhile computational efficiency/speed is improved, that's not really an issue for this type of segmentation. ", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "oral_presentation": ["Consider for oral presentation"]}], "comment_id": ["SkgTTRH5EE", "B1eNG18qVN", "rylJpkI5EE"], "comment_cdate": [1549586117002, 1549586188359, 1549586359106], "comment_tcdate": [1549586117002, 1549586188359, 1549586359106], "comment_tmdate": [1555946017087, 1555946016869, 1555946016649], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper4/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper4/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper4/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Thank you for your invaluable and insightful comments and suggestions.", "comment": "Regarding the contributions (last paragraph of page 2), we note that our main contribution here is the exclusive multi-label multi-class training strategy for segmenting classes with highly overlapping features; the loss function and patch fusion are our second and third contributions, which integrated well with our first, main contribution. We agree that the goal of our loss function is somewhat similar to that of the Generalized Dice Loss. However, while we balance the exact precision and recall based on each class weight, GDL balances the product (precision * recall) and sum (precision + recall) of precision and recall based on class weights. The formulations are different, but we agree that the two loss functions should be compared carefully in a multi-class segmentation scenario in an extension of this work. Regarding the use of large patches, our specific contributions involve patch augmentation and patch fusion based on a 2nd-order B-Spline kernel, which together result in weighted soft voting of 32 predictions per voxel.\n\nWe agree with our respected reviewer that validation of our loss function in other multi-label applications (e.g., MRBrainS) or highly imbalanced applications (brain lesion) is interesting and can show its comparative effectiveness, but as mentioned in our response to the first point, highlighted throughout the paper, and expressed by the title of the paper, the main focus and contribution of this work is a scenario where two or more classes are hardly distinguishable based on image features. In fact, our goal was to improve segmentation of the very challenging iSeg data with isointense GM and WM. Because MRBrainS and brain lesions do not fulfill one of our criteria, validating on them would be a deviation from the main goal of this work; but these are part of our ongoing research on improved training strategies.\n\nRegarding late fusion, in our experiments we did not see any gain in performance through late fusion. 
This could be because of our contributions that helped us achieve a high accuracy, beyond which any further improvements are more difficult as we get closer to hitting performance limits due to aleatoric uncertainty.\nWe agree that the computation time differences of sigmoid vs. softmax may not be substantial, especially in this scenario, because of the small number of class labels and GPU computing. However, as the number of isointense classes grows, the computation time difference will grow, and this can make a difference when dealing with big data in low-resource settings with CPUs.\n\nWe agree that our number of skip connections for an \u201cequivalent number of layers\u201d is not greater than that of HyperDenseNet; however, since our network is much deeper (a greater number of layers), the total number of skip connections ends up being greater than in HyperDenseNet. We agree that our sentence was a little confusing the way it was written, and we will revise it to clarify."}, {"title": "Thank you for your invaluable and insightful comments and suggestions.", "comment": "We thank the reviewer for the excellent summary of the work and its contributions. We agree that the efficiency and speed differences may not be substantial, especially in this scenario, because of the small number of class labels and the use of GPU training. While this was only part of the secondary outcomes or contributions of our approach, we note that as the number of isointense classes and modalities grows, the computational burden grows and may make a difference when dealing with very large data in low-resource settings."}, {"title": "Thank you for your invaluable and insightful comments and suggestions.", "comment": "1.\tIn the last page of our paper, in Appendix C, we have reported the depth and number of parameters of our networks (with and without the proposed training method) compared to 3D Unet, DenseVoxNet and DenseSeg, which is also a 3D implementation of DenseNet.\n\n2.\tWe absolutely agree that the number of parameters cannot solely represent the complexity of the training process; however, when there are fewer labels to train the network on, we expected to achieve improved training, and in fact this was confirmed by the results of our ablation study (Table S1 in Appendix C). We agree that from our sentences it may be inferred that we are attributing improved performance to the smaller number of parameters. This was not our intention and will be clarified in the final version.\n\n3.\tOur contributions, as mentioned in the last paragraph of page 2, are:\n\n   a.\tAn exclusive multi-label multi-class training approach (through independent probability estimation) using automatically adjusted similarity loss functions per class for classes that highly overlap in features (e.g., isointense gray matter and white matter in iSeg).\n   b.\tUtilizing a 3D FC-DenseNet architecture that is deeper, has more skip connections, and has fewer parameters than networks in previous studies.\n   c.\tTraining and testing on large overlapping 3D image patches with a 3D patch prediction fusion strategy that enabled intrinsic data augmentation and improved segmentation near patch borders through soft-weighted voting patch prediction fusion based on second-order B-Spline kernels.\n\n4.\tThe original purpose of this paper was to compare multi-label vs. single-label training approaches on an isointense dataset (as was mentioned in the last paragraph of page 2 as our first and main contribution). 
An ablation study for this contribution was indeed performed and the results are reported in Table S1 in Appendix C. This study, on validation sets, indicates that if we had not used our proposed training strategy, our network would not have been trained as effectively as it was to reach the 1st rank in the iSeg challenge. With a ~1.1% gain in average Dice score on WM, this ablation study shows that the gain in performance is not because of excellent hyper-parameter tuning.\n\n5.\tThe previous studies that we are aware of segment the CSF label first, remove it completely from the input, and then train on WM and GM separately. That is not what we did here. In simple terms, we excluded only one of the isointense labels (the GM ground truth label) from training (not part of the training data) and trained on the others (e.g., CSF+WM) together in a single process. As for the generalizability of the method, we agree with the reviewer that the complementing process was more feasible because of the background being labeled 0; however, while this was only a practical assumption that was true in our specific application, our training strategy can be used in other similar scenarios with similar assumptions. As for brain tissue segmentation, for example, there are highly accurate techniques now (some based on deep networks) that remove the skull, dura mater, etc., in addition to the background, to let segmentation networks focus on their challenges. Cascade or mixture-of-networks solutions have actually been widely proposed and used in the past and have shown quite competitive results; but we are not arguing that multi-task networks are necessarily less capable. We agree with the reviewer that these concerns and practical aspects remain to be studied in more detail. Based on our formulation and analysis, we argue that for hardly distinguishable classes (e.g., isointense MR images), exclusive independent probability estimation with a multi-label multi-class approach, along with our training strategy developed for a deep densely connected network, can lead to the best performance, and we achieved that on the open iSeg challenge."}], "comment_replyto": ["B1ekIruoXN", "BkgKkaE5mE", "HkgcMj-IQV"], "comment_url": ["https://openreview.net/forum?id=Byg-krBi14&noteId=SkgTTRH5EE", "https://openreview.net/forum?id=Byg-krBi14&noteId=B1eNG18qVN", "https://openreview.net/forum?id=Byg-krBi14&noteId=rylJpkI5EE"], "meta_review_cdate": 1551356585847, "meta_review_tcdate": 1551356585847, "meta_review_tmdate": 1551881976205, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The paper proposes an exclusive multi-label multi-class training approach (through independent probability estimation) using automatically adjusted similarity loss functions per class for classes that highly overlap in features. This overlap characterizes structural MRI with isointense white and gray matter tissues in 6-month-old infants.\nThe authors addressed the reviewers' major comments, and an extensive comparison with segmentation deep learning architectures was performed, including 3D Unet, DenseVoxNet and DenseSeg.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Byg-krBi14&noteId=BJef2MLSL4"], "decision": "Accept"}
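The record above centers on two technical ingredients: an F-beta similarity loss applied to independent per-class sigmoid outputs (with the excluded GM label later recovered as the complement of the predicted CSF and WM maps), and soft-weighted fusion of overlapping 3D patch predictions using a second-order B-Spline kernel. The two sketches below are only illustrations of how such components could be written; they assume PyTorch and NumPy, the function and variable names (f_beta_loss, exclusive_multilabel_loss, patch_weight, fuse, betas, corners) are hypothetical, and neither sketch is the authors' released implementation.

```python
# Minimal sketch (PyTorch assumed) of an F-beta loss over independent sigmoid
# outputs, in the spirit of the exclusive multi-label training described above.
# Tensor names and the choice of beta values are illustrative assumptions.
import torch


def f_beta_loss(logits, targets, beta, eps=1e-7):
    """Soft F-beta loss for one class.

    logits, targets: tensors of shape (N, D, H, W); targets in {0, 1}.
    beta > 1 weights recall more than precision, beta < 1 the opposite.
    """
    probs = torch.sigmoid(logits)
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    b2 = beta ** 2
    f_beta = ((1 + b2) * tp + eps) / ((1 + b2) * tp + b2 * fn + fp + eps)
    return 1.0 - f_beta


def exclusive_multilabel_loss(logits, targets, betas):
    """Sum of per-class F-beta losses over the trained (non-excluded) labels.

    logits, targets: (N, C, D, H, W), one channel per trained class (e.g. CSF
    and WM only; GM would later be taken as the complement of the thresholded
    CSF and WM predictions). betas: one value per channel, e.g. chosen from
    class prevalence.
    """
    return sum(
        f_beta_loss(logits[:, c], targets[:, c], betas[c])
        for c in range(logits.shape[1])
    )


if __name__ == "__main__":
    # Toy usage on random patches with 2 trained classes.
    logits = torch.randn(2, 2, 16, 16, 16, requires_grad=True)
    targets = (torch.rand(2, 2, 16, 16, 16) > 0.9).float()
    loss = exclusive_multilabel_loss(logits, targets, betas=[1.5, 1.0])
    loss.backward()
    print(float(loss))
```

For the patch prediction fusion mentioned in the responses, a separable weight map built from a second-order B-spline can down-weight predictions near patch borders before the overlapping patches are averaged; again, this is a sketch under assumptions, not the paper's code.

```python
# Sketch (NumPy assumed) of soft-weighted fusion of overlapping patch
# predictions with a second-order B-spline weight map. Patch geometry,
# stride handling, and variable names are illustrative.
import numpy as np


def bspline2(t):
    """Second-order (quadratic) B-spline, supported on |t| <= 1.5."""
    t = np.abs(t)
    out = np.zeros_like(t)
    out[t <= 0.5] = 0.75 - t[t <= 0.5] ** 2
    mid = (t > 0.5) & (t <= 1.5)
    out[mid] = 0.5 * (1.5 - t[mid]) ** 2
    return out


def patch_weight(shape):
    """Separable 3D weight map, largest at the patch centre, positive everywhere."""
    axes = [bspline2(np.linspace(-1.5, 1.5, n + 2)[1:-1]) for n in shape]
    return axes[0][:, None, None] * axes[1][None, :, None] * axes[2][None, None, :]


def fuse(patch_preds, corners, volume_shape, patch_shape):
    """Accumulate weighted patch predictions and normalise by the total weight.

    patch_preds: list of arrays of shape patch_shape; corners: list of (z, y, x)
    positions such that each patch lies fully inside the output volume.
    """
    acc = np.zeros(volume_shape)
    wsum = np.full(volume_shape, 1e-8)
    w = patch_weight(patch_shape)
    for pred, (z, y, x) in zip(patch_preds, corners):
        sl = np.s_[z:z + patch_shape[0], y:y + patch_shape[1], x:x + patch_shape[2]]
        acc[sl] += w * pred
        wsum[sl] += w
    return acc / wsum
```

With overlapping patches extracted at a stride smaller than the patch size, each voxel receives several weighted votes (the responses above mention 32 predictions per voxel in the paper's setting); the stride and kernel support used here are not taken from the paper.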