{"forum": "Bke-CJtel4", "submission_url": "https://openreview.net/forum?id=Bke-CJtel4", "submission_content": {"title": "Dynamic MRI Reconstruction with Motion-Guided Network", "authors": ["Qiaoying Huang", "Dong Yang", "Hui Qu", "Jingru Yi", "Pengxiang Wu", "Dimitris N. Metaxas"], "authorids": ["charwinghuang@gmail.com", "don.yang.mech@gmail.com", "hui.qu@cs.rutgers.edu", "jy486@cs.rutgers.edu", "pw241@cs.rutgers.edu", "dnm@cs.rutgers.edu"], "keywords": ["Dynamic MRI reconstruction", "Motion estimation and compensation", "Optical flow"], "abstract": "Temporal correlation in dynamic magnetic resonance imaging (MRI), such as cardiac MRI, is informative and important to understand motion mechanisms of body regions. Modeling such information into the MRI reconstruction process produces temporally coherent image sequence and reduces imaging artifacts and blurring. However, existing deep learning based approaches neglect motion information during the reconstruction procedure, while traditional motion-guided methods are hindered by heuristic parameter tuning and long inference time. We propose a novel dynamic MRI reconstruction approach called MODRN that unitizes deep neural networks with motion information to improve reconstruction quality. The central idea is to decompose the motion-guided optimization problem of dynamic MRI reconstruction into three components: dynamic reconstruction, motion estimation and motion compensation. Extensive experiments have demonstrated the effectiveness of our proposed approach compared to other state-of-the-art approaches.", "pdf": "/pdf/f47247efb4b09e2f87bf373c72909a10772492ee.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "huang|dynamic_mri_reconstruction_with_motionguided_network", "_bibtex": "@inproceedings{huang:MIDLFull2019a,\ntitle={Dynamic {\\{}MRI{\\}} Reconstruction with Motion-Guided Network},\nauthor={Huang, Qiaoying and Yang, Dong and Qu, Hui and Yi, Jingru and Wu, Pengxiang and Metaxas, Dimitris N.},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=Bke-CJtel4},\nabstract={Temporal correlation in dynamic magnetic resonance imaging (MRI), such as cardiac MRI, is informative and important to understand motion mechanisms of body regions. Modeling such information into the MRI reconstruction process produces temporally coherent image sequence and reduces imaging artifacts and blurring. However, existing deep learning based approaches neglect motion information during the reconstruction procedure, while traditional motion-guided methods are hindered by heuristic parameter tuning and long inference time. We propose a novel dynamic MRI reconstruction approach called MODRN that unitizes deep neural networks with motion information to improve reconstruction quality. The central idea is to decompose the motion-guided optimization problem of dynamic MRI reconstruction into three components: dynamic reconstruction, motion estimation and motion compensation. 
Extensive experiments have demonstrated the effectiveness of our proposed approach compared to other state-of-the-art approaches.},\n}"}, "submission_cdate": 1544749000748, "submission_tcdate": 1544749000748, "submission_tmdate": 1561397658212, "submission_ddate": null, "review_id": ["S1xDD7kgm4", "ryxtTDTPfV", "H1eXb6h6XN"], "review_url": ["https://openreview.net/forum?id=Bke-CJtel4&noteId=S1xDD7kgm4", "https://openreview.net/forum?id=Bke-CJtel4&noteId=ryxtTDTPfV", "https://openreview.net/forum?id=Bke-CJtel4&noteId=H1eXb6h6XN"], "review_cdate": [1547854686776, 1547323329337, 1548762363306], "review_tcdate": [1547854686776, 1547323329337, 1548762363306], "review_tmdate": [1548856711336, 1548856709818, 1548856683420], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper122/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper122/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper122/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Bke-CJtel4", "Bke-CJtel4", "Bke-CJtel4"], "review_content": [{"pros": "The paper introduces a new approach for deep learning-based reconstruction of spatio-temporal MR image sequences from undersampled k-space data. The novelty of the approach lies in the explicit use of motion information (displacements on a voxel level) during the joint reconstruction of all images of the dynamic sequence. It is assumed that this motion information provides useful temporal information and better exploits the dynamics of the underlying physiological process to finally improve reconstruction results.\n\nFrom a methodological point of view, the paper introduces a new objective function for the sequence reconstruction task that includes not only a standard image reconstruction term but also explicit motion estimation and compensation components. In this work, the objective function is minimized by using deep learning. The solution consists of three parts, which are sequentially applied to the results of the preceding part: (1) initial image reconstruction using a recurrent network, (2) motion estimation using a FlowNet variant, (3) motion compensation using a residual net. (2) and (3) are fused into one network. In the evaluation, the new approach is extensively compared to state-of-the-art reconstruction approaches in a simulation study based on cardiac image data. The results demonstrate the method\u2019s superior properties in terms of image quality and temporal coherence.\n\nGeneral opinion:\n\nIn my mind, the approach presented in this paper is novel and interesting. One might argue that the approach is (at least partially) a combination of different pre-existing methods/papers (U-Net, data sharing layer, FlowNet, \u2026). However, I think all choices are reasonable, and according to the results of the extensive and convincing evaluation, this combination leads to excellent results.
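To make the three-part solution summarized above concrete, here is a schematic sketch of the pipeline in PyTorch. The module names, signatures, and tensor shapes are illustrative assumptions, not the paper's exact architectures (the authors list those in their replies below); the sketch only encodes the structure the review describes, with motion estimation and compensation fused into one network:

```python
import torch.nn as nn

class MODRNPipeline(nn.Module):
    """Schematic sketch: (1) recurrent reconstruction, then (2)+(3) fused
    motion estimation/compensation. Submodules are stand-ins."""

    def __init__(self, drn: nn.Module, me_mc_net: nn.Module):
        super().__init__()
        self.drn = drn              # (1) dynamic reconstruction network (recurrent)
        self.me_mc_net = me_mc_net  # (2) FlowNet-like ME + (3) residual-net MC, fused

    def forward(self, undersampled_seq, ref_first, ref_last):
        # (1) Initial reconstruction of the whole image sequence.
        z = self.drn(undersampled_seq)            # e.g. (B, T, H, W)
        # (2)+(3) Estimate motion w.r.t. the reference frames and refine.
        return self.me_mc_net(z, ref_first, ref_last)
```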
\n\nFurther comments:\n\n- I think the discussion of the state-of-the-art approaches most relevant to this work should be extended. In my mind, especially Schlemper et al., 2017 (why wasn\u2019t their 2018 TMI paper used instead?) should be discussed in much greater detail, as it is the key competitor in the evaluation. In this context, it also remains unclear why only Schlemper et al., 2017 was used in the evaluation as a representative of deep learning-based reconstruction methods (why not Qin et al., 2018 and Huang et al., 2018). This choice should be discussed in the paper.\n\n- In Fig. 3, the location of the axis captions seems odd. They should be placed directly adjacent to their axis to improve readability.\n\n- What is so special about frame #9 that all approaches struggle to reconstruct this image? I assume it is one of the two extrema (end-diastolic phase or end-systolic phase) of the cardiac cycle. Couldn\u2019t this problem for MODRN be alleviated by choosing z_1=end-diastolic phase and z_T=end-systolic phase or vice versa?\n\n- Are the differences between MODRN and all other approaches statistically significant? Please provide results of statistical tests, if possible.\n\n- The original FlowNet paper should be cited.\n\n\nPros:\n- Novel method for learning-based reconstruction of spatio-temporal MR image sequences\n- Explicit inclusion of motion information in the reconstruction process by using a FlowNet-like motion estimation approach\n- Extensive evaluation with very good quantitative and qualitative results", "cons": "Cons:\n- Reasons for the selection of the competing state-of-the-art approaches in the evaluation are unclear\n- Paper is sometimes hard to follow", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "The paper introduces a novel MEMC (motion estimation & compensation) refinement block to improve deep learning based dynamic reconstruction. The paper is a good contribution to the recon field, as it opens up a new avenue of research for better understanding motion in reconstruction. The idea is simple, yet the result seems quite impressive. However, many details & comprehensive analyses are missing for one to appreciate the contribution of the proposed components. Overall, I feel that there are too many remaining questions for the paper to be published in its current form. However, on the condition that the concerns below are addressed, I believe that the benefits can outweigh the shortcomings, making it acceptable for this conference.", "cons": "Although the authors introduce an interesting idea, my main criticism is the lack of comprehensive details. Completing these details will greatly improve the quality of the paper.\n\n1. Reference: The paper is well-referenced for MR reconstruction & optical flow; however, similar motion modelling has already been considered in video super-resolution. For example:\n\na. Caballero, Jose, et al. \"Real-time video super-resolution with spatio-temporal networks and motion compensation.\" IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.\nb. Makansi, Osama, Eddy Ilg, and Thomas Brox. \"End-to-end learning of video super-resolution with motion compensation.\" German Conference on Pattern Recognition. Springer, Cham, 2017.\n\nIndeed, while this paper is the first to apply an MEMC framework to DL recon, it is not a new idea for general dynamic inverse problem settings. I would suggest the authors acknowledge these works.\n\n2. Network detail: While the overall architecture is well-described, many details are lacking: for example, what are the details & capacities of f_enc, f_dec and f_dec2? What are the resolution scales of these components, and how important is tuning them? How do you balance beta and lambda in Eq. 6? Also, have you considered adding a DC component at the end of the motion compensation block? Wouldn't that further improve the result?\n\n3. Training: Are the DRN and the MEMC components trained end-to-end? From what I gathered, these were trained separately. 
Is it possible to train both at the same time?\n\n4. Choice of reference images for motion compensation: Why z_1 and z_T and not the neighbouring frames? How sensitive is the network to this choice? If we assume that the CINE sequence is cyclic, wouldn't z_1 and z_T look similar? Isn't it better to consider, for example, the end-systolic and end-diastolic frames as references?\n\n5. Experiments:\n\n(a) The manuscript gave me the impression that only the magnitude component is considered for the motion compensation part. For the experiments, were the complex images used for all components?\n\n(b) It seems that the MEMC component applies in general. For example, it can be applied to DC-CNN, or to DRN w/o recurrence too. I wonder what the performance would be for them. The key question is: how sensitive is the MEMC component to the initial reconstruction? How much can the MEMC component compensate for, or be affected by, an imperfect reconstruction?\n\n(c) It is more informative to see the motion profile of sample pixel(s), rather than the frame-wise RMSE, for Fig. 3. In this way, we can understand whether certain methods under/overestimate the motion.\n\n(d) It concerns me that the method is only evaluated on three subjects. Can cross-validation be performed?\n\n6. MEMC Analysis: It seems that the difference between the performance of U-FlowNet-A & B is very small. I wonder if this is statistically significant? If not, I suggest the authors remove this component, unless there is a good reason to keep it.", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "The paper presents a novel MR image reconstruction method that can exploit temporal redundancy in an elegant way. The authors propose to extract optical flow between consecutive time frames to gather complementary information for reconstruction. The flow information is later used in compressed sensing to obtain the final result.\n\nThe paper is overall well written and the approach is clearly motivated. The experimental results obtained on the clinical data demonstrate the benefits of temporal analysis. ", "cons": "- Some literature on MR image reconstruction is missing:\n1) Adversarial loss based DL-Recon approaches. Even if they are not used in benchmarking the proposed algorithm, it would be good to mention those in the introduction. \n2) Similarly, dictionary learning & sparsity based techniques can be added as well. There were examples of such approaches to exploit spatio-temporal information in k-space data: \n\"Dictionary learning and time sparsity for dynamic MR data reconstruction\", TMI 2014. \n\n- A reference is required (page 2):\n\"Different from traditional motion estimation algorithms which may fail in low resolution and weak contrast\"\n\n- I am not sure that the model illustrated in Figure 2 actually optimises equation (2). In other words, the connection between equation (2) and equations (6-7) is not well defined. \n\n- MC(z_t, v_t) does actually depend on z_1 and z_T; this is not captured in the presented formulation. \n\n- Fig. 3: why does the DC-CNN model perform significantly worse in the ES phase? Given that the DRN-w/o-GRU model has no recurrent unit, I would expect DC-CNN (3D kernels and DC components) to display similar performance. How many consecutive frames are used as input in the DC-CNN model?\n\nMinor comments\n--\n\nCould you specify/display what the y-axis corresponds to in Figure 3 (RMSE)?\n\n\"The number of iterations is set to be N = 3\" -> the number of recurrent (or RNN) iterations is set to be N=3.\n\nPlease include the reference for the structural similarity index measure (SSIM).", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}],
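The reviews above repeatedly refer to RMSE, PSNR, and SSIM. As a minimal sketch of how these metrics are commonly computed for magnitude images — an assumption, since this record does not state the authors' implementation — one could use scikit-image:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_frame(reference, reconstruction):
    """Compare two magnitude images, both normalized to [0, 1]."""
    rmse = np.sqrt(np.mean((reference - reconstruction) ** 2))
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
    ssim = structural_similarity(reference, reconstruction, data_range=1.0)
    return rmse, psnr, ssim

# Per-sequence scores would then be averaged over the T frames, e.g.:
# scores = np.mean([evaluate_frame(g, r) for g, r in zip(gt_seq, rec_seq)], axis=0)
```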
.\n\n\"The number of iterations is set to be N = 3\" -> the number of recurrent (or RNN) iterations is set to be N=3.\n\nPlease include the reference for structural similarity index measure (SSIM).", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["SJg4TBhnN4", "Bygq_aohV4", "B1xT3sTxSN", "H1gD4njhE4"], "comment_cdate": [1549743547794, 1549741426092, 1550011316924, 1549741102869], "comment_tcdate": [1549743547794, 1549741426092, 1550011316924, 1549741102869], "comment_tmdate": [1555945992369, 1555945991881, 1555945962090, 1555945954414], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper122/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper122/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper122/AnonReviewer3", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper122/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Reply to Reviewer2", "comment": "We appreciate your effort in reviewing our paper. We clarify the issues as follows:\n\nQ1. Reference: cite other papers\nA1: Thanks for your suggestion, we will cite the paper in the future version.\n\nQ2. Network detail: \nresolution scale of each component?\nHere are details of different architecture:\n\nDRN\nConv7-32\nConv3-64\nConv3-128\nConvGru3_128\nDeconv3_64\nConvGru3_128\nDeconv7_2\n\nU-FlowNet\nConv3_ 64\nConv3_ 128\nConv3_256\nConv3_512\nConv3_1024\nDeconv3_512\nDeconv3_256\nDeconv3_128\nDeconv3_64\nConv3_3\n\nResidual UNet\nConv3_ 64\nConv3_ 128\nConv3_256\nConv3_512\nDeconv3_256\nDeconv3_128\nDeconv3_64\nConv3_3\n\nWhat\u2019s the value of Eq.6?\nSetting parameters beta and gamma of equation 6 by simple grid search.\nAccording to their performance , we set beta = 5 and gamma = 0.8106\nbeta gamma Dice\n500 1 0.712\n50 1 0.7971\n5 1 0.7987\n500 0.1 0.7446\n50 0.1 0.771\n5 0.1 0.8106\n\nHow about adding DC at the end of the whole network?\nThis could be a good idea. However, the input of DC layer is complex data while our network outputs 1-channel magnitude data. It is not trivial to add DC layer at the end of our network, but worth an attempt in the future.\n\nQ3. Training: is it possible to train end-to-end?\nA3. Yes, it\u2019s possible. According to our initial result. End-to-end training still gets a promising performance. We consider extending this in our future work.\n\nQ4. Choice of reference images for motion compensation:\nA4. Generally speaking, if z_1 and z_T are close to each other(e.g. T=3,4), the learned motion between them are more accurate. However, choosing z_1 and z_T as neighboring frames, we need more reference frames. And reference frames are not easy to obtain in general scenarios. So we believe selecting z_1 and z_T as the end-systolic and end-diastolic frames make sense. In this way, we only need two reference frames that capture the motion of the whole cardiac cycle.\n\nQ5. Experiments:\nWere complex images used for all components?\nThanks for your suggestion. In this work, we consider it\u2019s easier to capture the motion between magnitude images. But using complex images as input the ME net is an interesting idea. We will explore this possibility in our future work.\n\nCan MEMC component be applied to other Networks? 
\n\nCan the MEMC component be applied to other networks?\nYes, the MEMC component is a general technique and we believe it can be applied to different reconstruction networks.\n\nHow sensitive is MEMC to the initial reconstruction?\nWe conducted the reconstruction task at two sampling rates, which means we have considered at least two different initial reconstructions. According to our experiments, the motion-guided reconstruction results are better than the initial ones, which shows the robustness and effectiveness of our approach.\n\nHow much can MEMC compensate for an imperfect reconstruction?\nYou can compare the results of DRN and MODRN: DRN does not utilize motion information, while MODRN does. The results reflect how the MEMC in MODRN further improves the performance.\n\nMotion profile of sample pixels?\nDue to the page limit, we will add the motion profiles in the future version of the paper.\n\nCross-validation\nWe applied three-fold cross-validation on the whole dataset. Here are the results.\n\nAverage performance of three-fold cross-validation:\n\n| Methods | RMSE (5x) | PSNR (5x) | SSIM (5x) | RMSE (8x) | PSNR (8x) | SSIM (8x) |\n| --- | --- | --- | --- | --- | --- | --- |\n| DC-CNN(3D) | 0.0323 | 30.1516 | 0.8688 | 0.0479 | 26.6298 | 0.7727 |\n| DRN w/o GRU | 0.0376 | 28.7578 | 0.8272 | 0.0503 | 26.1798 | 0.7514 |\n| DRN | 0.0327 | 30.0193 | 0.8627 | 0.0460 | 26.9922 | 0.7805 |\n| MODRN | 0.0269 | 31.6432 | 0.9146 | 0.0357 | 29.1712 | 0.8753 |\n\nStandard deviation of three-fold cross-validation:\n\n| Methods | RMSE (5x) | PSNR (5x) | SSIM (5x) | RMSE (8x) | PSNR (8x) | SSIM (8x) |\n| --- | --- | --- | --- | --- | --- | --- |\n| DC-CNN(3D) | 0.0024 | 0.7254 | 0.0108 | 0.0044 | 0.8143 | 0.0196 |\n| DRN w/o GRU | 0.0025 | 0.6220 | 0.0104 | 0.0042 | 0.7337 | 0.0185 |\n| DRN | 0.0025 | 0.7081 | 0.0095 | 0.0035 | 0.6812 | 0.0141 |\n| MODRN | 0.0016 | 0.5242 | 0.0050 | 0.0023 | 0.5519 | 0.0087 |\n\nRemove U-FlowNet-B, using only U-FlowNet-A?\nFor the table shown in our paper, we only calculated the Dice value between neighboring frames. Now, we calculate Dice values between z_1 and the other frames, between z_T and the other frames, and between neighboring frames, and perform three-fold cross-validation. Here are the results:\n\n| | U-FlowNet-A | U-FlowNet-B |\n| --- | --- | --- |\n| fold-1 | 0.8105 / 1.9122 | 0.8106 / 1.9346 |\n| fold-2 | 0.8474 / 1.8701 | 0.8464 / 1.8041 |\n| fold-3 | 0.8313 / 1.8443 | 0.8349 / 1.8365 |\n| Average | 0.8297 / 1.8755 | 0.8306 / 1.8584 |\n\nWe still consider that U-FlowNet-B performs better than U-FlowNet-A."}, {"title": "Reply to Reviewer1", "comment": "We appreciate your effort in reviewing our paper. We clarify the issues as follows:\n\nQ1. The connection between equation (2) and equations (6-7)\nA1. According to equation (2), we could solve for the motion fields v_t in an optimization manner. With deep learning, the motion field between different frames can instead be estimated by a deep neural network, and equations (6-7) define the loss function that the network optimizes.\n\nQ2. MC(z_t, v_t) does actually depend on z_1 and z_T; this is not captured in the presented formulation.\nA2. Yes, the formulation should be MC(z_t, z_1, z_T). Thank you for the correction.\n\nQ3. Why does the DC-CNN model perform significantly worse in the ES phase? Given that the DRN-w/o-GRU model has no recurrent unit, I would expect DC-CNN (3D kernels and DC components) to display similar performance. How many consecutive frames are used as input in the DC-CNN model?\nA3. Frame 9 is an outlier. We ran different settings and architectures and found that frame 9 consistently gets the worst performance; after removing frame 9, the problem disappears. For DC-CNN, we use 10 frames as input, the same as for DRN and MODRN.\n\nQ4. Minor comments\nA4. Thanks, we will revise in the future version.\n"},
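For readers following the exchange above about equation (2) and the losses (6-7): a schematic sketch of how a reconstruction term, a photometric motion-estimation term, and a flow regularizer might be combined, reusing the beta and gamma values from the grid search earlier in this thread. The individual term definitions here are generic placeholders, not the paper's exact Eq. (6-7):

```python
import torch
import torch.nn.functional as F

def combined_loss(recon, target, flow, warped_ref, beta=5.0, gamma=0.1):
    """Schematic combined loss for motion-guided reconstruction.

    recon, target: (B, 1, H, W) reconstructed / fully sampled frames
    flow:          (B, 2, H, W) estimated motion field
    warped_ref:    (B, 1, H, W) reference frame warped toward the target
    """
    l_rec = F.mse_loss(recon, target)     # data-fidelity term
    l_me = F.l1_loss(warped_ref, target)  # photometric motion term
    # Total-variation-style smoothness penalty on the flow field.
    l_smooth = (flow[..., :, 1:] - flow[..., :, :-1]).abs().mean() + \
               (flow[..., 1:, :] - flow[..., :-1, :]).abs().mean()
    return l_rec + beta * l_me + gamma * l_smooth
```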
{"title": "Statistical tests", "comment": "Thank you for replying to my review and the clarifications. However, my comment regarding the statistical tests/statistical significance of the differences between the approaches compared in the evaluation might have been misunderstood. With the term statistical tests I was referring to hypothesis tests like the t-test, which allow one to assess whether the observed differences are 'meaningful' or due to random effects. "}, {"title": "Reply to Reviewer3", "comment": "We appreciate your effort in reviewing our paper. We clarify the issues as follows:\n\nQ1. Why choose these SOTA methods?\nA1. Schlemper et al. (TMI 2018) and Qin et al. (TMI 2018) have not released their code yet; in the future version of this paper, we may implement their approaches for comparison. Huang et al. (2018) is a static MRI reconstruction method, which differs from our dynamic reconstruction setting.\n\nQ2. The axis captions in Fig. 3 should be updated.\nA2. Thanks for your suggestion; we will update the graph in the future version.\n\nQ3. What is so special about frame #9 that all approaches struggle to reconstruct this image?\nA3. Frame 9 is an outlier. We ran different settings and architectures and found that frame 9 consistently gets the worst performance; after removing frame 9, the problem disappears.\n\nQ4. Are the differences between MODRN and all other approaches statistically significant? Statistical tests?\nA4. We applied three-fold cross-validation on the whole dataset. Here are the results.\n\nAverage performance of three-fold cross-validation:\n\n| Methods | RMSE (5x) | PSNR (5x) | SSIM (5x) | RMSE (8x) | PSNR (8x) | SSIM (8x) |\n| --- | --- | --- | --- | --- | --- | --- |\n| DC-CNN(3D) | 0.0323 | 30.1516 | 0.8688 | 0.0479 | 26.6298 | 0.7727 |\n| DRN w/o GRU | 0.0376 | 28.7578 | 0.8272 | 0.0503 | 26.1798 | 0.7514 |\n| DRN | 0.0327 | 30.0193 | 0.8627 | 0.0460 | 26.9922 | 0.7805 |\n| MODRN | 0.0269 | 31.6432 | 0.9146 | 0.0357 | 29.1712 | 0.8753 |\n\nStandard deviation of three-fold cross-validation:\n\n| Methods | RMSE (5x) | PSNR (5x) | SSIM (5x) | RMSE (8x) | PSNR (8x) | SSIM (8x) |\n| --- | --- | --- | --- | --- | --- | --- |\n| DC-CNN(3D) | 0.0024 | 0.7254 | 0.0108 | 0.0044 | 0.8143 | 0.0196 |\n| DRN w/o GRU | 0.0025 | 0.6220 | 0.0104 | 0.0042 | 0.7337 | 0.0185 |\n| DRN | 0.0025 | 0.7081 | 0.0095 | 0.0035 | 0.6812 | 0.0141 |\n| MODRN | 0.0016 | 0.5242 | 0.0050 | 0.0023 | 0.5519 | 0.0087 |\n\nNew answer:\nWe thank the reviewer for clarifying what was meant by statistical tests. We computed paired t-tests between our method and each of the other methods and found that the p-values are much less than 0.05. This indicates that the observed differences are statistically significant rather than due to random effects.\n\nQ5. Cite the original FlowNet paper\nA5. Thanks for your suggestion; we will cite the paper in the future version."}],
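Regarding the paired t-tests reported in the reply above: a minimal sketch of such a test, assuming per-case metric arrays for two methods on the same test cases (the values below are hypothetical stand-ins, not the authors' data):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-case RMSE values for two methods on the same test cases.
rmse_modrn = np.array([0.026, 0.028, 0.027, 0.025, 0.029])
rmse_drn = np.array([0.032, 0.034, 0.031, 0.033, 0.030])

# Paired t-test: is the mean per-case difference significantly nonzero?
t_stat, p_value = ttest_rel(rmse_modrn, rmse_drn)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```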
"comment_replyto": ["ryxtTDTPfV", "H1eXb6h6XN", "H1gD4njhE4", "S1xDD7kgm4"], "comment_url": ["https://openreview.net/forum?id=Bke-CJtel4&noteId=SJg4TBhnN4", "https://openreview.net/forum?id=Bke-CJtel4&noteId=Bygq_aohV4", "https://openreview.net/forum?id=Bke-CJtel4&noteId=B1xT3sTxSN", "https://openreview.net/forum?id=Bke-CJtel4&noteId=H1gD4njhE4"], "meta_review_cdate": 1551356590509, "meta_review_tcdate": 1551356590509, "meta_review_tmdate": 1551881976910, "meta_review_ddate": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "In response to the reviewers' comments, the authors have provided further experimental and architectural details, and reported cross-validation results. \n\nAdditionally, the discussion between the authors and R#3 highlights some possible extensions of the proposed framework, such as (I) use of a DC block, (II) training the DRN and MEMC components end-to-end, and (III) use of complex images. \n\nBased on the comments below, I recommend acceptance of the manuscript. \n\nIf the authors consider submitting an extension of their work, then I suggest that they include the references suggested by R#3 and R#1 (e.g. motion estimation in inverse problems (CVPR)).", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Bke-CJtel4&noteId=rkeI3f8rUV"], "decision": "Accept"}