{"forum": "Skxq9YqJg4", "submission_url": "https://openreview.net/forum?id=Skxq9YqJg4", "submission_content": {"title": "Learning beamforming in ultrasound imaging", "authors": ["Sanketh Vedula", "Ortal Senouf", "Grigoriy Zurakhov", "Alex Bronstein", "Oleg Michailovich", "Michael Zibulevsky"], "authorids": ["sanketh@cs.technion.ac.il", "senouf@campus.technion.ac.il", "grishaz@campus.technion.ac.il", "bron@cs.technion.ac.il", "olegm@uwaterloo.ca", "mzib@cs.technion.ac.il"], "keywords": ["Ultrasound Imaging", "Deep Learning", "Beamforming"], "abstract": "Medical ultrasound (US) is a widespread imaging modality owing its popularity to cost efficiency, portability, speed, and lack of harmful ionizing radiation. In this paper, we demonstrate that replacing the traditional ultrasound processing pipeline with a data-driven, learnable counterpart leads to significant improvement in image quality. Moreover, we demonstrate that greater improvement can be achieved through a learning-based design of the transmitted beam patterns simultaneously with learning an image reconstruction pipeline. We evaluate our method on an in-vivo first-harmonic cardiac ultrasound dataset acquired from volunteers and demonstrate the significance of the learned pipeline and transmit beam patterns on the image quality when compared to standard transmit and receive beamformers used in high frame-rate US imaging. We believe that the presented methodology provides a fundamentally different perspective on the classical problem of ultrasound beam pattern design", "pdf": "/pdf/2005e8c3b1b586dd0443b1d3bd1f2cc6290700e4.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "vedula|learning_beamforming_in_ultrasound_imaging", "_bibtex": "@inproceedings{vedula:MIDLFull2019a,\ntitle={Learning beamforming in ultrasound imaging},\nauthor={Vedula, Sanketh and Senouf, Ortal and Zurakhov, Grigoriy and Bronstein, Alex and Michailovich, Oleg and Zibulevsky, Michael},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=Skxq9YqJg4},\nabstract={Medical ultrasound (US) is a widespread imaging modality owing its popularity to cost efficiency, portability, speed, and lack of harmful ionizing radiation. In this paper, we demonstrate that replacing the traditional ultrasound processing pipeline with a data-driven, learnable counterpart leads to significant improvement in image quality. Moreover, we demonstrate that greater improvement can be achieved through a learning-based design of the transmitted beam patterns simultaneously with learning an image reconstruction pipeline. We evaluate our method on an in-vivo first-harmonic cardiac ultrasound dataset acquired from volunteers and demonstrate the significance of the learned pipeline and transmit beam patterns on the image quality when compared to standard transmit and receive beamformers used in high frame-rate US imaging. 
We believe that the presented methodology provides a fundamentally different perspective on the classical problem of ultrasound beam pattern design},\n}"}, "submission_cdate": 1544690066325, "submission_tcdate": 1544690066325, "submission_tmdate": 1561398085687, "submission_ddate": null, "review_id": ["SyeSiFuJVN", "H1lMnYzqXE", "H1ld_Nud7V"], "review_url": ["https://openreview.net/forum?id=Skxq9YqJg4&noteId=SyeSiFuJVN", "https://openreview.net/forum?id=Skxq9YqJg4&noteId=H1lMnYzqXE", "https://openreview.net/forum?id=Skxq9YqJg4&noteId=H1ld_Nud7V"], "review_cdate": [1548876189371, 1548523946099, 1548416112361], "review_tcdate": [1548876189371, 1548523946099, 1548416112361], "review_tmdate": [1548882375329, 1548856734665, 1548856728812], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper36/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper36/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper36/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Skxq9YqJg4", "Skxq9YqJg4", "Skxq9YqJg4"], "review_content": [{"pros": "The paper presents a novel methodology to reconstruct ultrasound images from raw echo data. In particular, a neural network model is trained to learn optimal transmitter (Tx) and receiver (Rx) beamforming patterns for fast (high-resolution) ultrasound image acquisition. \n\nI think the idea of designing an end-to-end learning system from Tx signal generation to Rx image reconstruction is an interesting approach, and the authors formulated this in a very nice way. \n\nAdditionally, the paper is very well written. In particular, the fundamental concepts of ultrasound imaging are presented in a clear way, such that a wider audience can easily follow the content of the paper. \n\n", "cons": "Some minor points \n--\n\n1) I and Q components should be explicitly specified: in-phase (I) and quadrature (Q) components of the echo signal.\n\n2) Page 3, please specify parameter t.\n\n3) Page 5 - Tx BF formulation. There is no dependency on the j index on the right-hand side. Please correct the formula. \n\n4) Page 6: What is the momentum optimiser? E.g., the Adam optimiser can have a momentum term in order to provide smoother parameter convergence (fewer oscillations in the search space). Is that what is meant? \n\n5) Typo - Page 9 intitlization \n\nMajor comments \n--\n\n1) The presented solution is based on a neural network architecture. However, the paper does not specify anywhere how this model and its results can be reproduced, nor what the building blocks of this architecture are \n(convolution parameterisation? recurrent models?).\n\n2) Please specify that the proposed BFtransform does not have any trainable parameter, but it is introduced to allow gradient flow from Rx BF to Tx BF. \n\n3) Figure 2 - Please specify why the Tx BF layer parameters are shared across both I and Q components. Transducer or hardware limitations? \n\n4) Figure 2 - What are the parameters of the reconstruction network (\\theta)? \nIt is described as an autoencoder, but how is it formulated? \n\n5) The presented approach is evaluated on only a single clinical dataset. Why not use leave-one-out cross-validation?\n\n6) I think the title (\"Learning Beamforming in ultrasound imaging\") is very assertive given that the approach is evaluated on a single cardiac ultrasound scan. I would recommend that the authors reconsider updating the title of the paper to better reflect the content. 
\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "The paper presents what appears to be an interesting approach for beamforming in US imaging. The work proposes a novel way to construct ultrasound images by learning both transmission (Tx) and reception (Rx) parameters of a US imaging system, where the parameters are modelled as a two-path (Tx and Rx) neural network which is trained end-to-end. The manuscript is well written and more or less well structured.", "cons": "Although an interesting approach, the manuscript really lacks in the way of validation. The very small data set used in this work contained only 6 patients with 4-5 cine loops per patient (the total number of cine loops is not mentioned), and each sequence contains 32 frames. To validate the methodology, a single sequence from a patient was left out, which to this reviewer's understanding means there are still 3 or 4 sequences belonging to the test subject in the training data. This is of great concern for two reasons. First, the network could simply be learning the anatomy of the patient visible in the other sequences. Second, even assuming that no sequences of the same patient were left in the training data set, a single test sequence is not nearly a sufficient means of validation, and hence the efficacy of the methodology cannot really be established. \n\nOn a different note, the manuscript is missing key implementation details, e.g. the network used to simulate and train the Rx and Tx parameters is never discussed and is left only as boxes in a diagram. Images that are (presumably) meant to highlight the results can be hard to interpret, e.g. some ovals are marked in the \"ground truth\" images; however, they are never mentioned or indicated at the corresponding locations in the result images, which again leaves the reader struggling to judge the proposed methodology.", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "This paper presents a method for learning ultrasound transmission patterns together with the corresponding image reconstruction pipeline. The authors performed experiments on a small echocardiographic dataset. The results show an improvement in image reconstruction using the learned settings, compared to the standard procedure.\n\nThe method and application are interesting and definitely relevant to MIDL. The methodology could also be useful to other imaging modalities.\nThe results are good for a proof-of-concept.\n\nThe paper is very well written, except for the \u201cConvergence\u201d paragraph in Section 3.3, which should be revised.", "cons": "The evaluation is relatively limited. It could be improved on the following points:\n- The method is evaluated by testing on a single cine-loop (32 frames). At least a leave-one-out cross-validation should be performed.\n- The low-resolution acquisitions are simulated from the ground truth single-line acquisition images. Is this equivalent to actually performing an acquisition with the corresponding transmission profile? This should be included in the discussion, and ideally, one could perform a test by acquiring images with the learned parameters.\n- It is not clear why the reconstruction part of the network is pre-trained before training the transmission part, instead of training everything from scratch. 
This is probably why the network converges to locally optimal solutions near the initial beam profiles.\n- Differences in performance are described as \u201csignificant\u201d, which would need to be backed up with a more thorough evaluation (using cross-validation).\n- Why is the L1-error for the DAS methods missing? It should be reported for completeness.\n- The image quality metrics used (PSNR, contrast\u2026) are quite basic, and the authors do not explain how they relate to image interpretability.\n\nThe following papers seem relevant, and could be included in the literature review:\n[A] El-Zehiry, Noha, et al. \"Learning the manifold of quality ultrasound acquisition.\" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Berlin, Heidelberg, 2013.\n[B] Abdi, Amir H., et al. \"Automatic quality assessment of echocardiograms using convolutional neural networks: Feasibility on the apical four-chamber view.\" IEEE transactions on medical imaging 36.6 (2017): 1221-1230.\n\nFinally, is there any potential application of the method beyond ultrasound imaging (other types of sensors)?\n\nMinor comments:\n\nPage 3:\n- \u201c4-MLA\u201d is not defined where first used \u2013 it is only defined on page 6\n- \u201cBFTransform\u201d is not used later; \u201cRx beamformer\u201d is used instead. Layer names should be harmonised throughout the text and with Figure 2.\n- The notation for the raw signal should be phi without a hat.\n- The standard symbol for seconds is \u201cs\u201d, not \u201csec\u201d\n\nPage 4:\n- Figure 1 (right) is Figure 2\n- \u201cfinite-sample approximations\u201d\n\nPage 5: \u201cground truth\u201d\n\nPage 7:\n- Text in Figure 3 should be in a bigger font\n- In Table 1 the results for \u201c10-MLA\u201d are reported twice. This could be reformatted to save space and include more results.\n\nPage 8:\n- Table 4 is Table 2 or 3\n- Y-axis labels are missing in Figure 4 (bottom)\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "oral_presentation": ["Consider for oral presentation"]}], "comment_id": ["HkehPYS_4V", "HkgayxUdNV", "ryxjGBUOEN", "B1gZzmHdNV"], "comment_cdate": [1549453667633, 1549455332541, 1549456658787, 1549452041197], "comment_tcdate": [1549453667633, 1549455332541, 1549456658787, 1549452041197], "comment_tmdate": [1555945953615, 1555945953379, 1555945953063, 1555945952751], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper36/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper36/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper36/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper36/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "No Title", "comment": "We thank the reviewer for the thoughtful review and comments.\nWe will address all the raised minor issues in the next version of the manuscript. As for the major comments, we address them below as they were numbered in the review: \n\nReply to 1, 4:\nWe used fully convolutional neural networks with symmetric skip connections, consisting of a downsampling track followed by an upsampling track -- exactly the same as the ones we previously used in [1] & [2]. 
While in our previous works we trained the I & Q networks separately, in this work we train them jointly by optimizing them for the loss defined on the target envelope. We thank the reviewer for bringing the omission to our notice; we will add these details to the next version of the manuscript.\nFor reproducibility, we plan to release the code, and we might be able to release the cardiac dataset as well. \n\n[1] Senouf, Ortal, et al. \"High frame-rate cardiac ultrasound imaging with deep learning.\" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2018.\n[2] Vedula, Sanketh, et al. \"High quality ultrasonic multi-line transmission through deep learning.\" International Workshop on Machine Learning for Medical Image Reconstruction. Springer, Cham, 2018.\n\nReply to 2:\nYes, the BFtransform has two parameters, \u201ct\u201d and \u201ctheta\u201d, which were kept fixed all along while still passing the gradients through to the Tx BF. We mention this at the end of Section 2.1 and in the conclusion; we will rephrase it to make it clearer.\n\nReply to 3:\nWhile we tied the Tx BF layer for I and Q, one can/should indeed optimize them separately to obtain optimal transmission patterns along with beam-steering. This shouldn\u2019t be limited by the hardware and could be a potential future direction.\n\nReply to 5-6:\nPlease refer to the general reply regarding the additional experiments we have performed to support our claims. In the time frame we had, we could not perform a leave-one-out cross-validation test. However, we evaluated the trained networks on additional cine-loops from the out-of-sample patient and on a newly collected phantom dataset, to verify generalization and to rule out the possible memorization of shapes by the networks. \n"}, {"title": "No Title", "comment": "We want to thank the reviewer for the instructive review. We address below all the raised comments.\n\nTrain/Test split:\nWe indeed left the whole data from one patient out of the training set, and not just one of the cine-loops.\n\nInsufficient validation: \nWe thank the reviewer for pointing out this aspect that was lacking in our manuscript. We extended our test dataset to include 5 cine-loops (each cine-loop consisting of 32 frames, 160 frames in total) from the patient that was left out of the training set. We report results for each cine-loop for different initializations and decimation rates as reported previously, and we observe results similar to those reported before. Additionally, in order to test the generalization capability of our model, we evaluate all the pre-trained networks (that were originally trained on the cardiac dataset) on a new dataset of 46 frames collected from a phantom, to rule out the possible memorization of shapes by the networks. We observe that our models were able to generalize well to phantoms without any fine-tuning, and that learning the transmit patterns jointly with the receive beamformer (Learned Tx-Rx) consistently outperforms the baseline methods and the learning of the receive beamformer alone. \n\nQuantitative results for the cardiac test set (per cine-loop) and the phantom test set are provided here: https://vista.cs.technion.ac.il/wp-content/uploads/2019/02/Rebuttal_MIDL2019.pdf. We will add these additional evaluations to the next version of the manuscript.\n\n\nMissing key implementation details:\nThe BFTransform module (which appears in Fig. 2 as the Rx beamformer) is implemented as a spatial transformer layer, as detailed in [1]. 
\nRegarding the reconstruction network architecture (at the Rx stage) and its parameters: the reviewer might be right; we haven't explicitly mentioned it in this paper, though we did refer to the implementation in our previous related papers [2], [3]. The architecture is a multi-resolution encoder-decoder network with symmetric skip connections, similar to the U-net [4]. We will make it clearer in the next version of the manuscript.\nRegarding the Tx parameters, the Tx BF is merely a linear mixing layer that implements the equation presented at the end of Section 2.2. \nFor reproducibility, we plan to release the code, and we might be able to release the cardiac dataset as well. \n\n[1] Jaderberg, Max, Karen Simonyan, and Andrew Zisserman. \"Spatial transformer networks.\" Advances in neural information processing systems. 2015.\n[2] Senouf, Ortal, et al. \"High frame-rate cardiac ultrasound imaging with deep learning.\" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2018.\n[3] Vedula, Sanketh, et al. \"High quality ultrasonic multi-line transmission through deep learning.\" International Workshop on Machine Learning for Medical Image Reconstruction. Springer, Cham, 2018.\n[4] Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. \"U-net: Convolutional networks for biomedical image segmentation.\" International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015.\n\nThe marked ovals: \nWe mention the circles in Table 2. The circles mark the areas that were selected in order to calculate the contrast and CNR metrics. \n"}, {"title": "No Title", "comment": "We thank the reviewer for the encouraging review! We will incorporate all the minor comments in the next version of the manuscript. We address the major comments below.\n\nLimited evaluation: \nPlease refer to the general reply regarding the additional experiments we have performed to support our claims. In the time frame we had, we could not perform a leave-one-out cross-validation test. However, we evaluated the trained networks on additional cine-loops from the out-of-sample patient and on a newly collected phantom dataset, to verify generalization and to rule out the possible memorization of shapes by the networks. \n\nValidity of low-resolution acquisition emulations: \nSince we referred only to first-harmonic imaging, we can assume linearity, meaning that applying a manipulation on Tx is equivalent to applying it on Rx. This has also been shown empirically in [1], which we mention in the paper. Emulating wider beam patterns this way is a common practice in ultrasound imaging research. This is a proof-of-concept work to find whether there are any \u201ctheoretically\u201d viable transmissions -- we do not have a research machine that allows us to program the transmit; however, once we have one in hand, we will definitely want to configure our patterns and networks on such machines. These fully configurable research machines are already available on the market, and some of them even come with a built-in GPU (http://us4us.eu/products/). \n[1] Prieur, Fabrice, et al. \"Correspondence-Multi-line transmission in medical imaging using the second-harmonic signal.\" IEEE transactions on ultrasonics, ferroelectrics, and frequency control 60.12 (2013): 2682-2692.\n\nJustification for two-stage training:\nWe agree with the reviewer on this. 
Simultaneous learning of forward and inverse operators is a challenging and, to the best of our knowledge, unsolved optimization problem. To allow for controlled experiments and to validate the exact contribution of Tx beamforming, we first pretrain the Rx beamformer and then jointly train both Tx-Rx beamforming. Finding better optimization strategies for such problems is indeed an open area of research; once solved, it will allow reaching a global optimum by jointly training Tx-Rx from scratch.\n\nMissing L1 errors for the DAS method:\nWe will fill them in in the next version of the manuscript.\n\n\nImage quality metrics and relevant references:\nWe thank the reviewer for the interesting references. While we perform Tx-Rx optimization for high-fidelity reconstruction, one can indeed apply similar methodologies optimized for image interpretability metrics [1,2], or directly for analysis tasks such as segmentation, detection, diagnosis, etc. We will refer to them appropriately in the next version of the manuscript. \n\nIn this work, however, we used the PSNR, contrast, CNR, and SSIM metrics, since they are the standard measures used in the ultrasound imaging community. \n\n[1] El-Zehiry, Noha, et al. \"Learning the manifold of quality ultrasound acquisition.\" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Berlin, Heidelberg, 2013.\n[2] Abdi, Amir H., et al. \"Automatic quality assessment of echocardiograms using convolutional neural networks: Feasibility on the apical four-chamber view.\" IEEE transactions on medical imaging 36.6 (2017): 1221-1230.\n\n\nPotential application beyond ultrasound imaging:\nWe proposed a general-purpose Rx beamforming method that can be readily adopted for any acoustic beamforming with microphone arrays, whereas the idea of simultaneous learning of forward and inverse operators could be applied to any \u201cactive\u201d imaging system. Similar ideas have previously been corroborated in computational imaging (for joint learning of optics + the image-signal processing pipeline (ISP) [1] and for real-time compressed tomography [2]), demonstrating promising results on real systems. We believe that the proposed methodology holds great promise for rethinking existing imaging (medical or otherwise) system designs.\n\n[1] H. Haim, S. Elmalem, R. Giryes, A. M. Bronstein, and E. Marom. Depth estimation from a single image using deep learned phase coded mask. IEEE Transactions on Computational Imaging, 4(3):298\u2013310, Sept 2018.\n[2] O. Menashe and A. Bronstein. Real-time compressed imaging of scattering volumes. In 2014 IEEE International Conference on Image Processing (ICIP), pages 1322\u20131326, Oct 2014. \n"}, {"title": "Extended evaluation of our method", "comment": "We would like to thank the reviewers for acknowledging the novelty and potential of our suggested approach of jointly learning the optimal acquisition forward model and the inverse reconstruction model in ultrasound imaging, and its possible applications to other imaging modalities beyond ultrasound. \n\nFollowing the common suggestion from all the reviewers, we extended our test dataset to include 5 cine-loops (each cine-loop consisting of 32 frames, 160 frames in total) from the patient that was left out of the training set. We report results for each cine-loop for different initializations and decimation rates as reported previously, and we observe results similar to those reported before. 
Additionally, in order to test the generalization capability of our model, we evaluate all the pre-trained networks (that were originally trained on the cardiac dataset) on a new dataset of 46 frames collected from a phantom. We observe that our models were able to generalize well to phantoms without any fine-tuning, and that learning the transmit patterns jointly with the receive beamformer (Learned Tx-Rx) consistently outperforms the baseline methods and the learning of the receive beamformer alone. \n\nQuantitative results for the cardiac test set (per cine-loop) and the phantom test set are provided here: https://vista.cs.technion.ac.il/wp-content/uploads/2019/02/Rebuttal_MIDL2019.pdf. We will add these additional evaluations to the next version of the manuscript.\n"}], "comment_replyto": ["SyeSiFuJVN", "H1lMnYzqXE", "H1ld_Nud7V", "Skxq9YqJg4"], "comment_url": ["https://openreview.net/forum?id=Skxq9YqJg4&noteId=HkehPYS_4V", "https://openreview.net/forum?id=Skxq9YqJg4&noteId=HkgayxUdNV", "https://openreview.net/forum?id=Skxq9YqJg4&noteId=ryxjGBUOEN", "https://openreview.net/forum?id=Skxq9YqJg4&noteId=B1gZzmHdNV"], "meta_review_cdate": 1551356593172, "meta_review_tcdate": 1551356593172, "meta_review_tmdate": 1551881981572, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The paper provides an important contribution to the field, and the majority of the reviewers would like this work to be accepted. The authors provide reasonable justifications for the questions raised by the reviewers. They also mention that, in order to provide reproducible research, the code will be released, which in turn will increase the impact of the work. In the revised version the authors should include all the new information they provide in their rebuttal. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Skxq9YqJg4&noteId=B1ethMLH8N"], "decision": "Accept"}