{"forum": "3i6X1618wi", "submission_url": "https://openreview.net/forum?id=3IetMYx3GG", "submission_content": {"keywords": ["Brain Tumor Segmentation", "Brain lesion segmentation", "Transfer Learning", "Variational Inference", "Bayesian Neural Networks", "Variational Autoencoder", "3D CNN"], "TL;DR": "Transfer learning for DNN based segmentation between illnesses by learning generative prior in conv-filter space is better than pretrain.", "track": "short paper", "authorids": ["a.kuzina@skoltech.ru", "egorov.evgenyy@ya.ru", "e.burnaev@skoltech.ru"], "title": "Bayesian Generative Models for Knowledge Transfer in MRI Semantic Segmentation Problems", "authors": ["Anna Kuzina", "Evgenii Egorov", "Evgeny Burnaev"], "paper_type": "methodological development", "abstract": "Automatic segmentation methods based on deep learning have recently demonstrated state-of-the-art performance, outperforming conventional methods. Nevertheless, these methods are inapplicable to small datasets, which are very common in medical problems. To this end, we propose a knowledge transfer method between diseases via the Generative Bayesian Prior network. 
Our approach is compared to a pre-train approach and random initialization and obtains the best results in terms of the Dice Similarity Coefficient metric for the small subsets of the Brain Tumor Segmentation 2018 database (BRATS2018).", "paperhash": "kuzina|bayesian_generative_models_for_knowledge_transfer_in_mri_semantic_segmentation_problems", "pdf": "/pdf/38c55fda20035f25f87faac6162350ff3043333d.pdf", "_bibtex": "@inproceedings{\nkuzina2020bayesian,\ntitle={Bayesian Generative Models for Knowledge Transfer in {\\{}MRI{\\}} Semantic Segmentation Problems},\nauthor={Anna Kuzina and Evgenii Egorov and Evgeny Burnaev},\nbooktitle={Medical Imaging with Deep Learning},\nyear={2020},\nurl={https://openreview.net/forum?id=3IetMYx3GG}\n}"}, "submission_cdate": 1579955756287, "submission_tcdate": 1579955756287, "submission_tmdate": 1587172213658, "submission_ddate": null, "review_id": ["TiVyJifRi", "az2mw1BsZH", "anKOcY2GdO", "INPiNBJw2"], "review_url": ["https://openreview.net/forum?id=3IetMYx3GG&noteId=TiVyJifRi", "https://openreview.net/forum?id=3IetMYx3GG&noteId=az2mw1BsZH", "https://openreview.net/forum?id=3IetMYx3GG&noteId=anKOcY2GdO", "https://openreview.net/forum?id=3IetMYx3GG&noteId=INPiNBJw2"], "review_cdate": [1584221833882, 1584200696526, 1584137987082, 1583755335442], "review_tcdate": [1584221833882, 1584200696526, 1584137987082, 1583755335442], "review_tmdate": [1585229333991, 1585229333472, 1585229332974, 1585229332469], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper256/AnonReviewer4"], ["MIDL.io/2020/Conference/Paper256/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper256/AnonReviewer3"], ["MIDL.io/2020/Conference/Paper256/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["3i6X1618wi", "3i6X1618wi", "3i6X1618wi", "3i6X1618wi"], "review_content": [{"title": "The paper proposes a way of training 
deep segmentation networks on small medical datasets by learning a prior distribution on convolution kernels", "review": "The idea of learning a prior distribution on convolution kernels is methodologically sound and appealing. This new way of transfer learning could potentially be more effective than fine-tuning and L2 regularization (which is basically a zero-mean Gaussian prior). The preliminary results are reasonable.\n\nNow the authors should think about how to further extend and validate this work in the following two aspects:\n\n1. How can the generative power of the VAE be used in the segmentation model? Can you learn a family of DNNs to improve segmentation or quantify uncertainty?\n\n2. How does the prior compare to other standard regularization approaches?\n", "rating": "4: Strong accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Learning prior distribution of CNN weights as a pretraining", "review": "The authors present a method to pre-train deep neural architectures for the purpose of medical image segmentation in the situation of small training datasets. This method is based on learning a prior distribution of the CNN weights using a generative model referred to as deep weight prior (DWP), proposed by Atanov et al. The authors propose to learn the kernel distribution from a source dataset consisting of MRI of multiple sclerosis (MS) patients and apply it to the task of segmenting brain tumors from the BRATS18 MRI dataset, considered as the target domain. 
UNet is used as the backbone architecture.\nThe proposed method is compared to three baseline methods, namely a model directly trained on the low-sample target data (BRATS18) based on standard random initialization (UNet-RI), a model whose weights are pre-trained on the MS dataset (UNet-PR), and the UNet-PR model fine-tuned on the BRATS18 dataset (UNet-PRf).\nResults based on the intersection over union metric indicate that the model performs better than UNet-PR and UNet-PRf but comparably with UNet-RI.\nI have one major concern regarding the validity of the hypothesis grounding the DWP method proposed by Atanov et al. The authors indeed assume that the source and target kernels (network weights) are drawn from the same distribution, so that the source kernel distribution that is learned can serve to perform Bayesian inference on the target data. I am not sure that this assumption holds for the source (MS) and target (BRATS) data. The diagnostic tasks are indeed very different, so that, I guess, the kernels are likely to differ. I am not sure that DWP is best adapted for this specific transfer learning task. This may explain why, as suggested by the authors, UNet-DWP does not perform much better than the random initialization (UNet-RI).\nThe description of the DWP method, as well as of the source (MS) and target (BRATS18) datasets, should be more detailed, drawing some more details from the recently published paper in Frontiers in Neuroscience.\n
\n\nTo evaluate, an MS dataset was selected as the source and small subsets of the BRATS18 dataset were selected as targets. The evaluation was performed on a fixed number of target images while having access to a varying number of labeled images from the target domain. ", "rating": "4: Strong accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "Transferring knowledge using Deep Weight Prior", "review": "The paper proposes to apply Deep Weight Prior to the problem of transfer learning in medical imaging. The authors learn a U-Net on MS lesion segmentation and evaluate transferability to the BRATS2018 dataset. The use of DWP is well motivated and the results indicate improved performance over regular transfer learning.\n\nFollowing the results of the paper, I believe the use of DWP can improve settings in medical image analysis with only limited available training data but availability of related datasets. However, the authors should improve the explanation of DWP and introduce the variables used. For example, I assume that $p, k$ in the equation on page 2 refer to the input and output channel dimensions of the convolutional kernels. It would be interesting to report Dice scores, which are more commonly used for the BRATS dataset. It would be beneficial for the authors to release code or add training details to the appendix, as the results seem to be irreproducible in their current form. Lastly, a longer study should test different freezing regimes for transfer learning, as freezing the middle seems like a rather arbitrary choice.\n\nMinor:\n- page 2, 1. 
dataset instead of dataest\n- page 2, the figure and the enumeration could use a little margin between them", "rating": "3: Weak accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1586203635467, "meta_review_tcdate": 1586203635467, "meta_review_tmdate": 1586203635467, "meta_review_ddate ": null, "meta_review_title": "MetaReview of Paper256 by AreaChair1", "meta_review_metareview": "All the reviewers recommended acceptance of this work. I agree with them that it is an interesting work and should be accepted as a short paper at MIDL 2020.\n\nThe reviewers have raised a few points that would be interesting to discuss in the final camera-ready version. Please, when submitting the final manuscript, try to address these points.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper256/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=3IetMYx3GG&noteId=i1Ai6KE_O9"], "decision": "reject"}