AMSR/conferences_raw/midl20/MIDL.io_2020_Conference_DuWrLOZ27k.json
{"forum": "DuWrLOZ27k", "submission_url": "https://openreview.net/forum?id=omJASJe9AZ", "submission_content": {"title": "Bayesian Learning of Probabilistic Dipole Inversion for Quantitative Susceptibility Mapping", "authors": ["Jinwei Zhang", "Hang Zhang", "Mert Sabuncu", "Pascal Spincemaille", "Thanh Nguyen", "Yi Wang"], "authorids": ["jz853@cornell.edu", "hz459@cornell.edu", "msabuncu@cornell.edu", "pas2018@med.cornell.edu", "tdn2001@med.cornell.edu", "yiwang@med.cornell.edu"], "keywords": ["Bayesian deep learning", "variational inference", "convolutional neural network", "quantitative susceptibility mapping"], "TL;DR": "Bayesian Learning of Probabilistic Dipole Inversion for Quantitative Susceptibility Mapping", "abstract": "A learning-based posterior distribution estimation method, Probabilistic Dipole Inversion (PDI), is proposed to solve the quantitative susceptibility mapping (QSM) inverse problem in MRI with uncertainty estimation. A deep convolutional neural network (CNN) is used to represent the multivariate Gaussian distribution as the approximated posterior distribution of susceptibility given the input measured field. In PDI, the CNN is first trained on healthy subjects' data with labels by maximizing the posterior Gaussian distribution loss function as used in Bayesian deep learning. When testing on each patient's data without any label, PDI updates the pre-trained CNN's weights in an unsupervised fashion by minimizing the Kullback\u2013Leibler divergence between the approximated posterior distribution represented by the CNN and the true posterior distribution given the likelihood distribution from the known physical model and a pre-defined prior distribution. 
Based on our experiments, PDI provides additional uncertainty estimation compared to the conventional MAP approach, while addressing the potential discrepancy issue of the CNN when test data deviate from the training dataset.", "track": "full conference paper", "paperhash": "zhang|bayesian_learning_of_probabilistic_dipole_inversion_for_quantitative_susceptibility_mapping", "paper_type": "methodological development", "pdf": "/pdf/ff31d4de510eb296edaa41184dd508026d200f63.pdf", "_bibtex": "@inproceedings{\nzhang2020bayesian,\ntitle={Bayesian Learning of Probabilistic Dipole Inversion for Quantitative Susceptibility Mapping},\nauthor={Jinwei Zhang and Hang Zhang and Mert Sabuncu and Pascal Spincemaille and Thanh Nguyen and Yi Wang},\nbooktitle={Medical Imaging with Deep Learning},\nyear={2020},\nurl={https://openreview.net/forum?id=omJASJe9AZ}\n}"}, "submission_cdate": 1579955621168, "submission_tcdate": 1579955621168, "submission_tmdate": 1587958481097, "submission_ddate": null, "review_id": ["9vs_bxr7rl", "Ia5LdupJx", "Y9tevPxoxO"], "review_url": ["https://openreview.net/forum?id=omJASJe9AZ&noteId=9vs_bxr7rl", "https://openreview.net/forum?id=omJASJe9AZ&noteId=Ia5LdupJx", "https://openreview.net/forum?id=omJASJe9AZ&noteId=Y9tevPxoxO"], "review_cdate": [1584646894948, 1584207675581, 1584063812041], "review_tcdate": [1584646894948, 1584207675581, 1584063812041], "review_tmdate": [1585229475192, 1585229474675, 1585229474105], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper6/AnonReviewer3"], ["MIDL.io/2020/Conference/Paper6/AnonReviewer4"], ["MIDL.io/2020/Conference/Paper6/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["DuWrLOZ27k", "DuWrLOZ27k", "DuWrLOZ27k"], "review_content": [{"title": "An adapted but not well validated method for solving quantitative susceptibility mapping", "paper_type": "methodological development", "summary": "This 
work introduces a Bayesian deep learning approach for solving Quantitative Susceptibility Mapping. Given a local field, the method generates a susceptibility map. Supervised and unsupervised learning are combined in order to generalise from healthy data to data with haemorrhage. The method is principled, fits with the underlying optimisation problem and achieves promising performance.", "strengths": "- the manuscript is well-written. I would like to congratulate the authors for their clarity.\n- the proposed method is principled and relevant regarding the underlying optimisation problem.\n- the solution seems easy to implement and efficient in practice.", "weaknesses": "Method\n- The authors argue that, given the \"intrinsic ill-posedness\" of the problem, \"a prior term is needed\". However, the term with the prior introduced in Eq 8 is removed in the final formulation (eq 12). This means that you assume that the network inherently induces some constraints that are beneficial for your problem. I don't see why it would be the case. Moreover, I suspect, as explained later, that better results are obtained without the regularisation because your model overfits.\n- How do you estimate $\\Sigma_{b'|\\chi}$ in eq. 12?\n\nExperiments\n- it would seem that the unsupervised learning component via VI was trained and tested on the same data. \nIf so, this is for me a major weakness of this work. \nThe network may overfit on the testing data. Moreover, although you don't use any annotation for this task, this can be seen as an optimisation per subject. In this case, there is no clear advantage of using a deep learning approach compared to other optimisation methods.", "rating": "3: Weak accept", "justification_of_rating": "The paper is clear and the approach is principled. The authors exploit Variational Inference with neural networks to solve the optimisation problem, which seems to be novel and a good idea given the formulation. 
However, I have concerns regarding the validation.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "recommendation": ["Poster"], "special_issue": "no"}, {"title": "An interesting application of Bayesian machine learning, while some details are unclear.", "paper_type": "validation/application paper", "summary": "This paper proposes a supervised Bayesian learning approach, namely Probabilistic Dipole Inversion (PDI), to model data uncertainties for the quantitative susceptibility mapping (QSM) inverse problem in MRI. This paper employs the dual-decoder network architecture to represent the approximated posterior distribution and uses the MAP loss function to train the approximate distribution when the labels exist. When new pathologies appear in test data, the proposed method minimizes the KL divergence between the approximated posterior distribution and the true posterior distribution based on the variational inference principle to correct the outputs. Experiments show that the proposed method can capture uncertainty compared to other methods.", "strengths": "1. This paper proposes a supervised Bayesian learning approach, namely Probabilistic Dipole Inversion (PDI), to model data uncertainties for the quantitative susceptibility mapping (QSM) inverse problem in MRI. The motivation is good.\n2. Experiments show that the proposed method can capture uncertainty compared to other methods.", "weaknesses": "1.\tThe authors point out that they use the forward model in Eq.5 for computation in this paper. However, PDI-VI1 and PDI-VI2 represented by Eq. 11 and Eq. 12, respectively, use the forward model in Eq.4. 
The authors should unify the expression form according to the actual situation.\n2.\tFor the unsupervised variational inference case, it is not clear whether the Fourier matrix $F$, the dipole kernel $D$, and the noise covariance matrix $\\Sigma_{b|\\chi}$ in the likelihood term are parameters that need to be optimized or have been determined before training.\n3.\tThis paper uses the dual-decoder network architecture to represent the approximated posterior distribution. It\u2019s better to provide the specific network architecture adopted in the experiment, rather than just a simple schematic.\n4.\tThis paper uses three quantitative metrics, namely RMSE, SSIM, and HFEN, to measure the reconstruction quality. Please give the full names of the three quantitative metrics. Other papers (Yoon et al., 2018; Zhang et al., 2020) also use the peak signal-to-noise ratio (pSNR) to measure the reconstruction quality. The performance on the pSNR is better to show in the experimental results. And please further explain why the performance of MEDI is better than PDI and QSMnet on the HFEN metric.\n5.\tThis paper states that the experimental results show the proposed method yields optimal results compared to two types of benchmark methods: deep learning QSM (Yoon et al., 2018; Zhang et al., 2020) and maximum a posteriori (MAP) QSM with convex optimization (Liu et al., 2012; Kee et al., 2017; Milovic et al., 2018). And they compare PDI with MEDI (Liu et al., 2012) and QSMnet (Yoon et al., 2018). It is better to add experiments that compare with other more advanced benchmark methods, such as FINE (Zhang et al., 2020).\n6.\tFigure 3 shows the reconstructions and standard deviation maps of two ICH patients. Please explain what the red rectangle highlights for a better understanding.\n7.\tThe presentation should be improved. 
For example, \u201cIn this paper, we come up with a framework by combining Bayesian deep learning to model data uncertainties and VI with deep learning to approximate true posterior distribution\u201d , and \u201cwe developed a Bayesian dipole inversion framework for quantitative susceptibility mapping by combining variational inference and Bayesian deep learning. \u201c, VI is just an inference method, which could be included in Bayesian ML or Bayesian DL.\n", "questions_to_address_in_the_rebuttal": "1.\tThis paper uses three quantitative metrics, namely RMSE, SSIM, and HFEN, to measure the reconstruction quality. Please give the full names of the three quantitative metrics. Other papers (Yoon et al., 2018; Zhang et al., 2020) also use the peak signal-to-noise ratio (pSNR) to measure the reconstruction quality. The performance on the pSNR is better to show in the experimental results. And please further explain why the performance of MEDI is better than PDI and QSMnet on the HFEN metric.\n2.\tThis paper states that the experimental results show the proposed method yields optimal results compared to two types of benchmark methods: deep learning QSM (Yoon et al., 2018; Zhang et al., 2020) and maximum a posteriori (MAP) QSM with convex optimization (Liu et al., 2012; Kee et al., 2017; Milovic et al., 2018). And they compare PDI with MEDI (Liu et al., 2012) and QSMnet (Yoon et al., 2018). It is better to add experiments that compare with other more advanced benchmark methods, such as FINE (Zhang et al., 2020).", "detailed_comments": "See above", "rating": "3: Weak accept", "justification_of_rating": "This paper proposes a supervised Bayesian learning approach, namely Probabilistic Dipole Inversion (PDI), to model data uncertainties for the quantitative susceptibility mapping (QSM) inverse problem in MRI. The motivation is good and this is an interesting application of Bayesian learning. The results look good, while some details are unclear. 
The presentation can be further improved.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "recommendation": ["Poster"], "special_issue": "no"}, {"title": "A very good contribution to MIDL", "paper_type": "methodological development", "summary": "A Bayesian approach to solving quantitative susceptibility mapping (QSM) inverse problem in MRI is proposed. The authors propose to approximate the posterior distribution of tissue susceptibility using a diagonal-covariance Gaussian with mean, variance predicted by a neural network.\n\nThe overall framework is reminiscent of VAEs, except with a known generative model given by the physics of the problem. From that analogy the inverse problem is approached as that of learning an optimal encoder. The approximating network is pretrained in a supervised manner on healthy subject data with known susceptibility, local field pairs; and fine-tuned on test subjects using KL-divergence minimization.\n\nThe experimental validation is still preliminary, conducted on the order of 20(?) subjects. ", "strengths": "The paper is very well written, the introduction and description of the method are of high quality. The approach is sound. The experimental validation seems limited but the results that are shown are again, well presented and interesting.\n\nI have limited knowledge on the application itself and cannot fully judge, but the paper provides the necessary material to understand the task and challenges at a high level.\n\nOverall the paper is very pleasant to read and the content of the paper / validation is well aligned with the original claims.", "weaknesses": "I do not really have important weaknesses to point out. I have a few minor questions that come to mind about choices made in the paper, but they are not essential to address in the rebuttal (see comments).\n\n", "detailed_comments": "I would be curious about additional insight into the \"two stage\" supervised/unsupervised procedure. 
What guides this choice? Is there very limited data available? One may think of fine-tuning the network parameters using all cases with unknown local field together (rather than fine-tuned weights per test case). Does it perform worse? Same question when training jointly on all available data, with known or unknown field (using Eq. 9 or 11 as appropriate).\n\nIs the architecture of the decoders (following the paper's terminology) limiting? In terms of formulation, the approach boils down to an approximate inverse problem (approximate solution to Eq. 6 or Eq. 7-8). It would be interesting to know if there are significant gains, in run time (even when using VI for fine-tuning) and/or beyond run time gains. \n\nIs there a trade-off in the number of iterations for which single-test case fine-tuning is performed?\n\nFinally, it would be interesting to extend the approach with a partly learnable prior, so that the approach can fully leverage training data.", "rating": "4: Strong accept", "justification_of_rating": "The paper is very well written, the introduction and description of the method are of high quality. The approach is sound. 
The experimental validation seems limited but the results that are shown are again, well presented and interesting.\n\nI have limited knowledge on the application itself and cannot fully judge, but the paper provides the necessary material to understand the task and challenges at a high level.\n\nOverall the paper is very pleasant to read and the content of the paper / validation is well aligned with the original claims.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": ["Oral"], "special_issue": "yes"}], "comment_id": ["07UPNT_RdPq", "MKoNM9QETBU", "LnlEPnWtRVM"], "comment_cdate": [1585323385041, 1585324561749, 1585324216688], "comment_tcdate": [1585323385041, 1585324561749, 1585324216688], "comment_tmdate": [1585324799383, 1585324709768, 1585324216688], "comment_readers": [["everyone", "MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper6/Area_Chairs", "MIDL.io/2020/Conference/Paper6/Reviewers/Submitted", "MIDL.io/2020/Conference/Paper6/Authors"], ["everyone", "MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper6/Area_Chairs", "MIDL.io/2020/Conference/Paper6/Reviewers/Submitted", "MIDL.io/2020/Conference/Paper6/Authors"], ["everyone", "MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper6/Area_Chairs", "MIDL.io/2020/Conference/Paper6/Reviewers/Submitted", "MIDL.io/2020/Conference/Paper6/Authors"]], "comment_writers": [["MIDL.io/2020/Conference/Paper6/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper6/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper6/Authors", "MIDL.io/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Addressing the major weakness by fine-tuning on a hemorrhage dataset in the unsupervised VI step and then testing", "comment": "\nWe thank the reviewer for the helpful advice. 
The detailed responses are as follows:\n\n1. Method - The authors argue that, given the \"intrinsic ill-posedness\" of the problem, \"a prior term is needed\". However, the term with the prior introduced in Eq 8 is removed in the final formulation (eq 12). This means that you assume that the network inherently induces some constraints that are beneficial for your problem. I don't see why it would be the case. Moreover, I suspect, as explained later, that better results are obtained without the regularisation because your model overfits. \n\nReply: We are sorry for this confusion. Eq. 12 corresponds to the \u2018non-informative prior\u2019 $ p(\\chi) \\propto c $ as described in the paragraph above Eq. 12, while Eq. 11 corresponds to the prior in Eq. 8. The reviewer was correct that when applying Eq. 12, we are essentially relying on the implicit prior induced by the network - something that has been shown to be effective in prior work*. Based on our experiment, shown in Fig. 3, this implicit network prior works well for our problem.\n*Ulyanov et al. \"Deep image prior.\" CVPR. 2018.\n\n\n2. How do you estimate $\\Sigma_{b'|\\chi}$ in eq. 12? \n\nReply: $\\Sigma_{b'|\\chi}$ was estimated when fitting the local field $b$ from multiple time-delayed MR signals, voxelwise (Wang and Liu, 2015). Specifically, $b$ is the linear coefficient of such temporal fitting, and $\\Sigma_{b'|\\chi}$ is the fitting error of each voxel. So $\\Sigma_{b'|\\chi}$ is treated as a known parameter for the QSM dipole inversion problem.\n\n\n3. Experiments - it would seem that the unsupervised learning component via VI was trained and tested on the same data. If so, this is for me a major weakness of this work. The network may overfit on the testing data. Moreover, although you don't use any annotation for this task, this can be seen as an optimisation per subject. 
In this case, there is no clear advantage of using a deep learning approaches compared to other optimisation methods.\n\nReply: The reviewer is correct that the unsupervised VI step in which training and testing was deployed on each hemorrhage patient case can be considered as a weakness, since its run-time is expensive. However, we note that this strategy only needs to be adopted when we encounter a new type of case, e.g. a small number of hemorrhage cases unlike anything in the training data the model has seen before. Our point is that in such a scenario, one can use the unsupervised VI strategy (via Eq. 11 or 12 that do not rely on expensive COSMOS data) to further fine-tune/train the network. We expect that this fine-tuning needs to be done occasionally (on new types of cases - e.g. in a new patient population or after a scanner upgrade). After unsupervised fine-tuning, one can run a single forward pass for inference on future cases, which will be substantially more efficient than the MEDI benchmark. We have conducted some additional experiments to highlight this point. Please see new Fig. 3 results here: http://gdurl.com/YTFy , where PDI-VI1 or PDI-VI2 refers to the above mentioned strategy training (fine-tuning)/validating on 5/1 hemorrhage cases and testing on 2 cases shown in new Fig. 3. Under-estimation issues inside hemorrhage in PDI are also reduced in PDI-VI1 and PDI-VI2 here. (We will also add a new benchmark FINE (Zhang et al., 2020) for comparison as requested by anonReviewer4). Please also see anonReviewer2\u2019s detailed comments regarding the relevant questions and our response 1 for details."}, {"title": "Modifying the unsupervised VI step by fine-tuning the network parameters on a hemorrhage dataset", "comment": "\nWe thank the reviewer for the appreciation of our work. The corresponding responses are listed below:\n\n1. I would be curious about additional insight into the \"two stage\" supervised/unsupervised procedure. 
What guides this choice? Is there very limited data available? One may think of fine-tuning the network parameters using all cases with unknown local field together (rather than fine-tuned weights per test case). Does it perform worse? Same question when training jointly on all available data, with known or unknown field (using Eq. 9 or 11 as appropriate). \n\nReply: Since the hemorrhage dataset on which we deployed VI with Eq. 11 or 12 was very limited (with only 5 cases by the time we submitted the manuscript), a natural way of applying VI we first came up with was to do it one by one, i.e., training and testing on the same data, and the computational cost of doing so for all 5 cases was manageable. \nThanks to the reviewer\u2019s suggestion and 3 more acquired hemorrhage patient cases, we will make the change to deploy the VI step using Eq. 11 or 12 to train our model on 5 cases, validate on 1 case, and test on 2 independent cases, shown in Fig. 3. Comparable results are achieved now and these new results can be found in our updated Fig. 3: http://gdurl.com/YTFy (We also add new benchmark FINE (Zhang et al., 2020) for comparison as requested by anonReviewer4).\nFuture work will include exploring a semi-supervised learning strategy that uses all available data together with Eq. 9 or Eq. 11/12 as loss functions. \n\n\n2. Is the architecture of the decoders (following the paper's terminology) limiting? In terms of formulation, the approach boils down to an approximate inverse problem (approximate solution to Eq. 6 or Eq. 7-8). It would be interesting to know if there are significant gains, in run time (even when using VI for fine-tuning) and/or beyond run time gains.\n\nReply: In our experience, the adopted U-Net decoder architecture performed very well for image-to-image tasks. Other deep learning QSM methods (Yoon et al., 2018; Zhang et al., 2020) also use a similar architecture. 
The run time of forward pass in this architecture was quite fast (less than one second, see new Table 1: http://gdurl.com/iR-c ). However, the run time of VI fine-tuning was slow (~ 5 mins for each hemorrhage case). Uncertainty estimation serves as another gain.\n\n\n3. Is there a trade-off in the number of iterations for which single-test case fine-tuning is performed?\n\nReply: In our experiments, at least 100 iterations were needed to \u2018correct\u2019 the maps. But too many iterations might impair the maps.\n\n\n4.Finally, it would be interesting to extend the approach with a partly learnable prior, so that the approach can fully leverage training data.\n\nReply: Thank you for this valuable advice. Possible partly learnable prior could be some additional density estimation network trained by variational autoencoder or adversarial autoencoder on COSMOS data, and then use ELBO($\\chi$) evaluated by such density network to approximate the prior distribution $ \\log p(\\chi)$. We will explore this in the future work.\n"}, {"title": "Adding pSNR metric and FINE benchmark", "comment": "\nWe thank the reviewer for pointing out the unclearness of this paper. The detailed responses are as follows:\n\n1.This paper uses three quantitative metrics, namely RMSE, SSIM, and HFEN, to measure the reconstruction quality. Please give the full names of the three quantitative metrics. Other papers (Yoon et al., 2018; Zhang et al., 2020) also use the peak signal-to-noise ratio (pSNR) to measure the reconstruction quality. The performance on the pSNR is better to show in the experimental results. And please further explain why the performance of MEDI is better than PDI and QSMnet on the HFEN metric. \n\nReply: We will add PSNR values for further comparison. Please check here for new results of Table 1: http://gdurl.com/iR-c . 
The reason why the baseline method FINE gives the best reconstruction results is that FINE overfits to every test case by minimizing the fidelity loss, which has the major drawback of significantly increased computational time. \nWe will include more details on the definitions of the other metrics. HFEN (high-frequency error norm) is calculated as the L2 difference between Laplacian of a Gaussian (LoG) filtered reference and input volume, where LoG filter extracts the edges of the smoothed objects. In the MEDI formulation Eq. 6, gradient mask $M$ was obtained as the region outside tissue boundaries/edges, while gradient operator smoothed the region in $M$. This way, the brain was smoothed while the tissue edges were retained, which favored the HFEN metric defined above.\n\n\n2. This paper states that the experimental results show the proposed method yields optimal results compared to two types of benchmark methods: deep learning QSM (Yoon et al., 2018; Zhang et al., 2020) and maximum a posteriori (MAP) QSM with convex optimization (Liu et al., 2012; Kee et al., 2017; Milovic et al., 2018). And they compare PDI with MEDI (Liu et al., 2012) and QSMnet (Yoon et al., 2018). It is better to add experiments that compare with other more advanced benchmark methods, such as FINE (Zhang et al., 2020).\n\nReply: Thanks for the suggestion. We will add FINE results for comparison. Please check here for new results of Fig. 2: http://gdurl.com/zW03 .\nAnother change we will make is to fine-tune the pre-trained network using Eq. 11 or 12 on a training set of hemorrhage cases (we now have an additional set of cases), instead of fine-tuning for every single hemorrhage patient case separately. Please see our response 3 to anonReviewer3 and response 1 to anonReviewer2 for details. The corresponding new results of Fig. 
3 is: http://gdurl.com/YTFy ."}], "comment_replyto": ["9vs_bxr7rl", "Y9tevPxoxO", "Ia5LdupJx"], "comment_url": ["https://openreview.net/forum?id=omJASJe9AZ&noteId=07UPNT_RdPq", "https://openreview.net/forum?id=omJASJe9AZ&noteId=MKoNM9QETBU", "https://openreview.net/forum?id=omJASJe9AZ&noteId=LnlEPnWtRVM"], "meta_review_cdate": 1586084604012, "meta_review_tcdate": 1586084604012, "meta_review_tmdate": 1586084604012, "meta_review_ddate ": null, "meta_review_title": "MetaReview of Paper6 by AreaChair1", "meta_review_metareview": "This paper proposes a Bayesian deep learning approach for solving Quantitative Susceptibility Mapping. \n\nAll reviewers agree that the paper is well written and the ideas and experiments are novel and interesting.\n\nThe validation is limited but enough in the opinion of the reviewers.\n\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper6/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=omJASJe9AZ&noteId=hxlpTh-qOU-"], "decision": "accept"}