{"forum": "SkeI0-QelE", "submission_url": "https://openreview.net/forum?id=SkeI0-QelE", "submission_content": {"title": "Learning interpretable multi-modal features for alignment with supervised iterative descent", "authors": ["Maximilian Blendowski", "Mattias P. Heinrich"], "authorids": ["blendowski@imi.uni-luebeck.de", "heinrich@imi.uni-luebeck.de"], "keywords": ["Multi-Modal Features", "Image Registration", "Machine Learning"], "TL;DR": "We propose a supervised learning framework for multi-modal image features readily employable for image registration.", "abstract": "Methods for deep learning based medical image registration have only recently approached the quality of classical model-based image alignment. The dual challenge of both a very large trainable parameter space and often insufficient availability of expert supervised correspondence annotations has led to slower progress compared to other domains such as image segmentation. Yet, image registration could also more directly benefit from an iterative solution than segmentation. We therefore believe that significant improvements, in particular for multi-modal registration, can be achieved by disentangling appearance-based feature learning and deformation estimation. In contrast to most previous approaches, our model does not require full deformation fields as supervision but rather only small incremental descent targets generated from organ labels during training. By mapping the complex appearance to a common feature space in which update steps of a first-order Taylor approximation (akin to a regularised Demons iteration) match the supervised descent direction, we can train a CNN-model that learns interpretable modality invariant features. Our experimental results demonstrate that these features can be plugged into conventional iterative optimisers and are more robust than state-of-the-art hand-crafted features for aligning MRI and CT images.", "pdf": "/pdf/d6ad672c7d60909e528f435fb4dcfd3faddcaab1.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "blendowski|learning_interpretable_multimodal_features_for_alignment_with_supervised_iterative_descent", "_bibtex": "@inproceedings{blendowski:MIDLFull2019a,\ntitle={Learning interpretable multi-modal features for alignment with supervised iterative descent},\nauthor={Blendowski, Maximilian and Heinrich, Mattias P.},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=SkeI0-QelE},\nabstract={Methods for deep learning based medical image registration have only recently approached the quality of classical model-based image alignment. The dual challenge of both a very large trainable parameter space and often insufficient availability of expert supervised correspondence annotations has led to slower progress compared to other domains such as image segmentation. Yet, image registration could also more directly benefit from an iterative solution than segmentation. We therefore believe that significant improvements, in particular for multi-modal registration, can be achieved by disentangling appearance-based feature learning and deformation estimation. 
In contrast to most previous approaches, our model does not require full deformation fields as supervision but rather only small incremental descent targets generated from organ labels during training. By mapping the complex appearance to a common feature space in which update steps of a first-order Taylor approximation (akin to a regularised Demons iteration) match the supervised descent direction, we can train a CNN-model that learns interpretable modality invariant features. Our experimental results demonstrate that these features can be plugged into conventional iterative optimisers and are more robust than state-of-the-art hand-crafted features for aligning MRI and CT images.},\n}"}, "submission_cdate": 1544724941959, "submission_tcdate": 1544724941959, "submission_tmdate": 1561399439424, "submission_ddate": null, "review_id": ["rJx792m1EN", "ByeXeCm2mN", "ryet7gT2XN"], "review_url": ["https://openreview.net/forum?id=SkeI0-QelE&noteId=rJx792m1EN", "https://openreview.net/forum?id=SkeI0-QelE&noteId=ByeXeCm2mN", "https://openreview.net/forum?id=SkeI0-QelE&noteId=ryet7gT2XN"], "review_cdate": [1548856458735, 1548660202967, 1548697633113], "review_tcdate": [1548856458735, 1548660202967, 1548697633113], "review_tmdate": [1550021588931, 1548856749794, 1548856699045], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper84/AnonReviewer4"], ["MIDL.io/2019/Conference/Paper84/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper84/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["SkeI0-QelE", "SkeI0-QelE", "SkeI0-QelE"], "review_content": [{"pros": "*** Score revised in response to comments, see discussion below ***\n\n- Cross-modality registration is a relevant and challenging application.\n\n- Learning a shared modality-agnostic feature space and using segmentations to derive a weak supervisory signal for registration seems like an interesting and promising approach.\n\n- Although I cannot recommend acceptance at this stage, I feel the central idea has merit and should be pursued further.", "cons": "This work feels very preliminary and I believe the manuscript can be made much stronger by addressing the following issues for a future submission:\n\n==== Method ====\n- Fundamental assumption for the approximation in Eq. (1) is that displacements are small, e.g. sub-pixel scale. While it may be reasonable for computing (potentially multi-scale) optical flow between consecutive video frames, this condition seems unjustified for the sorts of deformations expected in intra-subject registration (e.g. Fig. 3)---even assuming feature brightness consistency. If this linearisation approach is in fact \"widely used\" in this context, please cite the relevant references backing the claim.\n\n- Otherwise, is the approximation applied iteratively as the moving image is deformed and resampled with small incremental displacements? If this is the case instead, I strongly suggest the authors clarify Section 2.1.\n\n- What is Delta(u,v)? Although it is a crucial element of the proposed pipeline, the actual output of the B-Spline Descent module is never properly defined. The paper could greatly benefit from a clear algorithmic description of all the steps involved.\n\n- What is the dimensionality of the feature maps fed into B-Spline Descent (M and F, and also the corresponding SDMs)? If it is greater than one, the equations as they are written are incorrect (see next point). 
The exposition can be made much clearer by defining the dimensions of all the variables.\n\n- Improper mathematical notation: x is undefined; some terms are missing x as an argument; unconventional partial derivative notation; missing energy summation over the coordinates (and feature dimensions?). I also suggest the authors switch to matrix notation for a clearer and more general formulation.\n\n- Careful with claims of \"disentanglement\", as it can lead to mischaracterisation of the present contribution. The authors can say the pipeline *decouples* a feature learning step from a deformation estimation step, but this is not a representation learning method (nothing wrong with that), so I'd also be wary of relating it to Shu et al. (2018), for example.\n\n==== Evaluation ====\n- The dataset description needs more information. How many distinct subjects are there and how many scans of each? Are the scans paired across modalities? Are they healthy or pathological cases?\n\n- Unclear why the authors used only 10 scans per modality, when the dataset seems to provide many more annotated scans for the chosen structures (http://www.visceral.eu/assets/Uploads/Anatomy-3-Segmentations.pdf).\n\n- Please clarify the 3D pre-registration step with deeds-SSC. Is it rigid/affine or deformable? What is used as the alignment target?\n\n- Especially with such small sample size, the averages in Table 1 mean very little without error estimates. I suggest the authors tone down the claims of \"significant improvements\" until more rigorous experimental analysis can be performed.\n\n- Missing baselines: How does the full method compare to the purely unsupervised B-Spline Descent? This experimental comparison would make the argument for external supervision much stronger, and would be a fairer competitor to the MIND descriptor. Furthermore, comparison to a traditional pairwise iterative registration method with multi-modal cost function (e.g. mutual information-based) would be greatly informative.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "- the paper is well written and tackles an important topic of feature interpretability for the purpose of image registration\n- the proposed SUITS approach builds on previous methods and is novel enough to excite the attention of the community", "cons": "- the description of the method should be replaced by an algorithm environment detailing the different steps. The way it is presented in the paper makes it a bit difficult to follow\n- as an additional comparison the authors should consider presenting the results of a CNN-based method\n- although not directly related to feature interpretability, the work \"Deformable medical image registration using generative adversarial networks\", Mahapatra et al., ISBI 2018 could be relevant as it uses GANs for multimodal image registration", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "oral_presentation": ["Consider for oral presentation"]}, {"pros": "The article is well written and describes a complex idea with reasonable clarity. \nThe topic is very relevant (multi modal registration using deep neural networks) and the idea is novel.\n\nThe idea is interesting but requires quite some further development and improved experimentation to prove its merit. I would suggest provisional acceptance in this case. 
", "cons": "The authors do not mention from the outset that their method requires (in this case) organ segmentations to guide the registration process. This is a key fact and should be mentioned from the abstract onwards. \nThe experiments carried out are minimal and constitute little more than a \"proof of concept\" as stated by the authors themselves.\nThe method is compared with an alternative which does not make any use of organ segmentations to achieve the registration results. Any gain in accuracy should be considered in this context. ", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["SketVjGuEN", "H1lgD8QdE4", "HyxXme7O4N", "HkgwEL7uNE", "S1eb47ebr4"], "comment_cdate": [1549441840774, 1549444696040, 1549443098566, 1549444655229, 1550021417511], "comment_tcdate": [1549441840774, 1549444696040, 1549443098566, 1549444655229, 1550021417511], "comment_tmdate": [1555946025459, 1555946025237, 1555946024985, 1555946024760, 1555945960728], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper84/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper84/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper84/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper84/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper84/AnonReviewer4", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Summary Response to All Reviewers", "comment": "We would like to thank all reviewers for their insightful and detailed evaluations of our submission. All reviewers highlight the novelty of the proposed iterative supervised descent and the relevance for multi-modal medical imaging. Nevertheless, there are some points of constructive criticism, which we would like to address, namely: 1) adding state-of-the-art comparisons and 2) clarifying certain unclear details. \nWe have conducted two small additional experiments (see Details in the Comments to Reviewer #4), provided direct answers to all individual reviewer questions and also give details of our revision for the final version of the paper.\n"}, {"title": "Response to Reviewer #4", "comment": "We\u2019d like to thank the reviewer for her/his comprehensive evaluation of our submission. In the following, we will address all bullet points (Method: M1-6 and Evaluation: E1-5).\n\nM1&2: We will clarify the iterative nature of our algorithm and include a reference to another seminal work on iterative optical flow [1] achieving very good results. In our method, once the features of the images have been generated by the CNNs, we iteratively warp the moving representation and update the displacement fields incrementally using gradients of the warped features.\n\nWe used the popular Demon\u2019s framework [2] to derive the iterative update equations, which has been extended to multichannel images in [3] and to include B-spline transformations in [4]. We will add these references and rephrase the section \u201cIterative Image Alignment\u201d.\nTo demonstrate that our implementation of the BSpline Descent module is suitable for general purposes, we conducted an additional unsupervised experiment for monomodal CT registration on image intensities. 
We achieved the following Dice scores for the same labels as in our submission:\nNo registration: \t\t[0.41, 0.34, 0.33, 0.65, 0.34, 0.59] \tMean: \t0.44\nImage Intensities: \t[0.79, 0.65, 0.59, 0.80, 0.63, 0.69] \tMean: \t0.69\nThis result shows the general applicability of this module in the monomodal case, before addressing the difficult problem of learning a shared feature space for different modalities. Including these results further substantiates our claims in the final submission.\n\nM3-5: Every control point (u,v) holds a two-dimensional displacement vector and Delta(u,v) contains the gradients to update the displacement field. After several Delta(u,v) update steps, (u,v)_final is aggregated within the Adam optimiser and used to warp the moving image towards the fixed one. \nThe input feature dimensionalities are 8 channels for the CNNs and 6 for the MIND descriptor per pixel. The equations hold when written per pixel and feature map and then summed along this dimension, a missing aspect that we will clarify in the revised paper. With the mathematical notation also in need of improvement, we agree to add a clear algorithmic description as an appendix. However, switching to matrix notation is likely to be more confusing in this case. \n\nM6: We agree that our method decouples feature learning and deformation estimation, differing from Shu et al. 2018, but has a similar objective. It is our aim to learn a shared modality-invariant representation and treat estimating geometric deformations as a separate task. These representations enable us to use the B-spline Descent Module during inference without any further trainable steps. In contrast to previous work on deep learning based registration, e.g. [5], the two tasks are thus clearly not entangled. \nHence, the central idea of this paper is to use a CNN exclusively to learn a shared feature space and not deformations. Therefore, it fits the definition of representation learning given in Bengio et al. (https://doi.org/10.1109/TPAMI.2013.50): \u201c...representation learning, i.e., learning representations of the data that make it easier to extract useful information when building classifiers or other predictors.\u201d \n\nLiterature:\n[1] Papenberg et al. https://doi.org/10.1007/s11263-005-3960-y\n[2] Vercauteren et al. https://doi.org/10.1016/j.neuroimage.2008.10.040\n[3] Guimond et al. https://doi.org/10.1109/ISBI.2002.1029369\n[4] Tustison, Nicholas James. https://doi.org/10.3389/fninf.2013.00039\n[5] Hu et al. https://doi.org/10.1016/j.media.2018.07.002\n\nE1-2: We restricted ourselves to 10 unpaired, abdominal scans (some with pathologies) per modality that provide similar slice thickness, in contrast e.g. to the whole-body scans of the VISCERAL dataset.\n\nE3: We compensate for most through-plane deformations (with a nonlinear transformation) using this deformable pre-registration step to effectively create a 2D registration task. \n\nE4: We will remove the statement about statistical significance for now.\n\nE5: Directly applying an unsupervised B-Spline descent and registering MR and CT images based on their intensity values would violate the brightness consistency assumption and yield no meaningful results. 
The MIND descriptor, as a state-of-the-art multimodal feature descriptor, has been successfully applied to a large variety of image registration tasks and is therefore a valid and fair competitor.\nAs suggested, we conducted additional experiments with a state-of-the-art algorithm (SimpleElastix) for multi-modal images (metric: mutual information, 4-level multi-resolution alignment with affine preregistration) and achieved similar Dice scores: \nElastix \t\t[0.75, 0.68, 0.58, 0.72, 0.68, 0.76] \tMean: 0.70\nComparing the results of this additional experiment with our proposed approach underpins the expressiveness of our learned features (mean 0.72), and this comparison will therefore be added as another baseline in our final submission."}, {"title": "Response to Reviewer #2", "comment": "We thank the reviewer for her/his sound evaluation of our work.\nWe agree that we need to clarify that organ segmentations are required to guide the registration process during the training phase, and we will be happy to include this in a revised version of our manuscript. \n\nWith regard to the concern that we compare our approach to a method which does not make any use of organ segmentations, we will state more clearly in the final submission that the segmentations are only used during training. During inference, our approach only relies on the given scans and we therefore compare it to registration methods processing the same input data. The result discussion in the final version will more clearly differentiate between supervised and unsupervised methods.\n\nIn addition to the MIND baseline, we conducted another experiment (see Comments to Reviewer #4) with the SimpleElastix toolkit, which implements a classical multi-level, multi-modal registration pipeline with Mutual Information as metric. Its results will be incorporated in our revised submission. \n"}, {"title": "Response to Reviewer #1", "comment": "We appreciate the reviewer's informative remarks on our submission.\n\nWe will follow the advice to present our method by detailing the different steps in an algorithmic environment. Thus, we will revise our submission by adding an algorithmic environment as an appendix and additionally elaborate our schematic figure so that it can be followed step by step.\n\nBecause we have performed proof-of-concept experiments on a 2D task for this submission, a direct comparison to public 3D CNN methods is not possible. We do plan to extend our idea to 3D data and thus enable a comparison to other supervised methods, e.g. Hu et al. \u201cWeakly-supervised convolutional neural networks for multimodal image registration\u201d, Medical Image Analysis 2018.\n\nWe thank the reviewer for pointing out the reference to Mahapatra et al. and we will include it in our final submission.\n"}, {"title": "Welcome clarifications and extra experiments; still some issues", "comment": "Thank you for responding to most of my comments and apologies for the late reply. I have revised the score to 3, expecting the authors to further address the following:\n\n- M1&2: Thank you for reporting an additional experiment and clarifying the incremental warping; this needs to be explained transparently in the text, along with the mentioned references. Also, consider adding a statement on why the proposed approach is superior to computing exact automatic gradients without the linearization (e.g. speed? memory?).\n\n- M3-5: Considering the implicit sum over feature channels, eq. (3) is no longer a solution to (2). 
Although a similar expression is used heuristically in Guimond et al. (2002, eq. 4), the average of individual solutions for each channel is not the same as the optimum of the joint problem over all channels. It may be necessary to revise the derivation or the explanatory text.\n\n- M6: I meant representation learning as encoding an entire image as a vector, as opposed to learning local convolutional features. Moreover, in that context 'disentanglement' usually refers to separating explanatory factors via unsupervised learning, rather than by manual design. Again: nothing wrong with that, but it's better to be mindful of established terminology.\n\n- E1-2: Thank you for the details; please make sure they appear in the final version.\n\n- E3: As the pre-registration is deformable, the text needs to explain how it avoids making the 2D registration task trivial (e.g. regularization?).\n\n- E5: The Elastix baseline is a great addition for comparison. I did not imply that MIND is a weak baseline, but that it does not use supervision (as also noted by Reviewer #2). The analogous unsupervised baseline in the proposed framework would be to train the CNN features with the registration loss itself. Even (especially) if it completely fails, these results would strengthen the contribution."}], "comment_replyto": ["SkeI0-QelE", "rJx792m1EN", "ryet7gT2XN", "ByeXeCm2mN", "H1lgD8QdE4"], "comment_url": ["https://openreview.net/forum?id=SkeI0-QelE&noteId=SketVjGuEN", "https://openreview.net/forum?id=SkeI0-QelE&noteId=H1lgD8QdE4", "https://openreview.net/forum?id=SkeI0-QelE&noteId=HyxXme7O4N", "https://openreview.net/forum?id=SkeI0-QelE&noteId=HkgwEL7uNE", "https://openreview.net/forum?id=SkeI0-QelE&noteId=S1eb47ebr4"], "meta_review_cdate": 1551356574563, "meta_review_tcdate": 1551356574563, "meta_review_tmdate": 1551881974412, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "This paper develops a cross-modality registration framework that decouples the feature learning and deformation estimation steps. All three reviewers agree that the paper presents a unique and interesting idea with application to medical imaging datasets. There was some concern that the method and results were preliminary. The authors did a good job of responding to the reviewer concerns by running additional experiments and clarifying the statements in their paper. They also plan to incorporate the reviewer comments into their final submission.\n\nI would recommend this paper for acceptance to MIDL. Based on the comment by Reviewer 1, I would also suggest that this paper be accepted for an oral presentation. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=SkeI0-QelE&noteId=BkewifLSLN"], "decision": "Accept"}
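
Editor's note: to make the "B-Spline Descent" building block discussed in the responses above (coarse control points (u,v), incremental updates Delta(u,v), aggregation with an Adam optimiser, warping of multi-channel feature maps) easier to picture, here is a minimal sketch. It is an illustration under stated assumptions, not the authors' implementation: it assumes PyTorch >= 1.10, replaces the closed-form first-order Taylor/Demons update (and the supervised descent targets derived from organ labels) with autograd gradients of a plain sum-of-squared-differences feature loss, and uses bilinear upsampling of the control grid in place of true cubic B-spline interpolation. All names (`bspline_descent`, `grid_spacing`, `n_steps`) are illustrative and not taken from the paper.

```python
# Minimal sketch of an iterative, coarse-grid descent on feature maps.
# Assumption: PyTorch >= 1.10; autograd stands in for the paper's
# closed-form Taylor/Demons update; bilinear upsampling stands in for
# cubic B-spline interpolation of the control-point displacements.
import torch
import torch.nn.functional as F


def make_base_grid(h, w, device):
    """Identity sampling grid in grid_sample's normalized [-1, 1] coordinates."""
    ys = torch.linspace(-1.0, 1.0, h, device=device)
    xs = torch.linspace(-1.0, 1.0, w, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack((gx, gy), dim=-1).unsqueeze(0)  # (1, H, W, 2), (x, y) order


def bspline_descent(feat_fixed, feat_moving, grid_spacing=8, n_steps=50, lr=0.05):
    """Iteratively align feat_moving to feat_fixed.

    feat_fixed, feat_moving: (1, C, H, W) modality-invariant feature maps,
    e.g. 8-channel CNN outputs or 6-channel MIND maps as mentioned in the
    responses. H and W are assumed divisible by grid_spacing. Returns the
    dense displacement field in normalized coordinates.
    """
    _, _, h, w = feat_fixed.shape
    device = feat_fixed.device
    base_grid = make_base_grid(h, w, device)

    # Coarse control-point grid of 2D displacements (the "(u, v)" of the discussion).
    ctrl = torch.zeros(1, 2, h // grid_spacing, w // grid_spacing,
                       device=device, requires_grad=True)
    optim = torch.optim.Adam([ctrl], lr=lr)

    for _ in range(n_steps):
        optim.zero_grad()
        # Upsample control points to a dense displacement field.
        disp = F.interpolate(ctrl, size=(h, w), mode="bilinear", align_corners=True)
        grid = base_grid + disp.permute(0, 2, 3, 1)            # (1, H, W, 2)
        warped = F.grid_sample(feat_moving, grid, mode="bilinear",
                               padding_mode="border", align_corners=True)
        # Sum-of-squared-differences over all feature channels and pixels.
        loss = ((warped - feat_fixed) ** 2).mean()
        loss.backward()   # incremental update direction for the control points
        optim.step()      # aggregated within Adam, as described in M3-5

    with torch.no_grad():
        return F.interpolate(ctrl, size=(h, w), mode="bilinear", align_corners=True)
```

As a shape check, `bspline_descent(torch.rand(1, 8, 128, 128), torch.rand(1, 8, 128, 128))` returns a (1, 2, 128, 128) displacement field that can be applied to the moving features (or image) with the same `grid_sample` call. The sketch only covers the unsupervised descent module; the paper's contribution of learning the shared feature space from supervised descent targets sits upstream of this step.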