{"forum": "Hkg0j9sA1V", "submission_url": "https://openreview.net/forum?id=Hkg0j9sA1V", "submission_content": {"title": "Diffeomorphic Autoencoders for LDDMM Atlas Building", "authors": ["Jacob Hinkle", "David Womble", "Hong-Jun Yoon"], "authorids": ["hinklejd@ornl.gov", "womblede@ornl.gov", "yoonh@ornl.gov"], "keywords": ["image registration", "atlas-building", "deep learning", "autoencoder", "LDDMM"], "TL;DR": "We train deep neural networks to estimate image registration simultaneously with LDDMM atlas building", "abstract": "In this work, we present an example of the integration of conventional global and diffeomorphic image registration methods with deep learning. Our method employs a form of autoencoder in which the encoder network maps an image to a transformation and the decoder interpolates a deformable template to reconstruct the input. This enables image-based registration to occur simultaneously with training of deep neural networks, as opposed to current sequential optimization methods. We apply this approach to atlas creation, showing that a system that jointly estimates an atlas image while training the registration encoder network results in a high quality atlas despite drastic dimension reduction. In addition, the shared parametrization for deformations offered by the neural network enables training the atlas with stochastic gradient descent using minibatches on a single GPU. We demonstrate this approach using affine transformations and diffeomorphisms in the LDDMM vector momentum geodesic shooting formulation using the OASIS-3 dataset.", "pdf": "/pdf/756e66abce80871fd057a688ba570d41b84ef611.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "hinkle|diffeomorphic_autoencoders_for_lddmm_atlas_building"}, "submission_cdate": 1544628902046, "submission_tcdate": 1544628902046, "submission_tmdate": 1545069824672, "submission_ddate": null, "review_id": ["SyWiNc1XN", "Syek2RP3QE", "SyxA2TOzXN"], "review_url": ["https://openreview.net/forum?id=Hkg0j9sA1V&noteId=SyWiNc1XN", "https://openreview.net/forum?id=Hkg0j9sA1V&noteId=Syek2RP3QE", "https://openreview.net/forum?id=Hkg0j9sA1V&noteId=SyxA2TOzXN"], "review_cdate": [1547834520516, 1548676774944, 1548025269545], "review_tcdate": [1547834520516, 1548676774944, 1548025269545], "review_tmdate": [1550017805238, 1550015984935, 1548856748453], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper19/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper19/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper19/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Hkg0j9sA1V", "Hkg0j9sA1V", "Hkg0j9sA1V"], "review_content": [{"pros": "The paper introduces an autoencoder-like network architecture to be used for atlas building/application purposes in a LDDMM setting. Essentially, a deep architecture is defined that:\n(1) allows to estimate an unbiased atlas/template of an image population during network training in an unsupervised fashion\n(2) when trained can be used to estimate mappings between formerly unseen images and the atlas\n\nTo do so, an approximation of the conventional LDDMM atlas building objective is proposed, which is solved by the network/training process presented in the paper. The architecture itself consists of an encoder and a decoder part. 
The encoder maps an input image to a low-dimensional latent space while the decoder maps a point of the latent space to a deformed version of the atlas that is most similar to the input image. It is important to note that the decoder is composed of three different components (1. latent space to momentum field mapper, 2. EPDiff solver, and 3. atlas image warper) and only the first component is actually learned.\n\nOverall, the paper addresses two important problems (diffeomorphic image registration and atlas building) that have not gained much attention from the deep learning community so far. I, therefore, agree with the statement made in the paper that it is the first to introduce a deep learning-based atlas building method using LDDMM (DL-supported LDDMM registration itself was also used by Yang et al./Quicksilver). The approach presented is quite interesting and well within the scope of MIDL as it allows the integration of components of the well-known LDDMM framework (i.e. EPDiff integration) directly into network architectures to facilitate, for example, deep learning-based computational anatomy methods where the use/estimation of diffeomorphic mappings is crucial. Furthermore, I also like the fact that the authors actively support the idea of open and reproducible research by making their source code available on github (I took a look, but did not review the code) including a PyTorch-module for EPDiff. The evaluation presented can be characterized as somewhat preliminary with only limited experiments (only conventional vs. new atlas building methods for affine and non-linear atlas building are compared) and no real quantitative results. However, the main problem I see with this paper is related to its clarity about a key part of the method presented. \n\nAs I see it, the key part of the paper in terms of novelty is Sec. 2.2, which describes the new atlas building method and how it is solved by using diffeomorphic autoencoders. The introduction of Eq. 8 is easy to follow and the basic description of the autoencoder approach (bottom part of p. 4) is also intelligible. However, at least to me it is somewhat unclear how \\overline{I} (the atlas image) is actually computed during the training process. Is it directly learned on a voxel/pixel basis, generated by applying the estimated transformations to the input images, or ...? Maybe I am missing something but this detail is crucial and needs to be added to the paper or clarified if present. I also recommend describing the network architecture in more detail in Sec. 2.2 as it is hard to understand which parts of the decoder are actually learned and which are static without referring to Sec. 3. This could, for example, be done by moving parts of Sec. 3.2/3.3 to this Section and by improving Fig. 
1.\n\nTo sum up, I like the paper and I think it should be presented at MIDL, but the part describing the training-based atlas building should be revised prior to publication.\n\n\nPros:\n\n- LDDMM-based deep learning approach for image registration\n- Atlas building problem solved by training an autoencoder network\n- PyTorch code publicly available", "cons": "Cons:\n\n- Description of the novel parts of the method (partially) unsatisfactory\n- Preliminary evaluation", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "This paper presents a deep learning based approach to unbiased atlas construction based on LDDMM, which directly learns a diffeomorphic atlas deformation predictor from a set of images (alongside the deformable template) instead of regressing pre-computed momenta fields as in Yang et al. (2017). For this, the authors present a deep learning approach whose training replicates the unbiased atlas construction approach of Joshi et al. (2004). The maths are sound and the article is written well. The provided open source implementations are a good reference for other researchers.", "cons": "My main criticism is regarding the motivation of the approach. Despite the expectation set by title and abstract, the integration of atlas-building with deep learning does not actually produce a machine learning model for atlas formation. On the face of it, the authors utilise deep learning methodology to minimize an objective function for atlas creation during the training procedure. The learned model cannot be applied to create an (unbiased!) atlas from new images, though this is suggested by at least the title of this paper. What is learned, however, is a predictor of the initial momenta that maps an image to the deformable template derived during training, but the authors do not motivate such use and do not evaluate the performance of this template registration for images not used during training. With focus on the latter, the learned model should be directly compared to Quicksilver from Yang et al. (2017). In contrast to Yang et al. (2017), the proposed method does not require momenta that have been pre-computed by another algorithm (e.g., conventional LDDMM). In the discussion of closely related work, the authors do not discuss the method of Yang et al. (2017), but only in the conclusion draw a direct comparison to that method.\n\nInstead, the authors could motivate their approach through computational anatomy, where after training their method provides a deformable template that is representative of a given population, and a model that can predict the momenta of the diffeomorphisms that deform this template to new study images. In fact, the authors point this out in the conclusion: \u201cIntegration of deep learning into the atlas creation methodology promises to enable creative new approaches to statistical shape analysis in neuroimaging and other fields\u201d. I would recommend reformulating the abstract and introduction to motivate their approach in this context right from the beginning.\n\nBesides this critique, this paper is in my opinion of interest to the conference attendees and may spark some useful discussions. Following are minor remarks.\n\nThe authors write in the abstract that \u201cthe encoder network maps an image to a transformation\u201d and \u201cthe decoder interpolates a deformable template\u201d. I disagree with these statements. 
Both encoder and decoder together map an image to a transformation, not only the encoder. The authors themselves write later in the method description \u201cnotice that a diffeomorphic autoencoder amounts to a regular image encoder along with a decoder that maps from the latent space to a momentum vector field that is integrated via EPDiff to produce a diffeomorphism\u201d. This contradicts the statements made in the abstract.\n\nWhy have the authors only used 25 out of 990 available brain images for evaluation?", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "oral_presentation": ["Consider for oral presentation"]}, {"pros": "The authors present a method for constructing LDDMM atlases by combining convolutional neural networks and LDDMM using the momentum parametrisation. Whilst previous works have shown how to combine LDDMM registration and deep neural networks (Yang et al. 2017, \"Quicksilver\"), to my knowledge this is the first paper to demonstrate using this method to construct an atlas. The solution presented is elegant and has nice theoretical guarantees on the smoothness of the registration as a result of using the LDDMM framework. This is therefore a very important contribution to the literature, and may be of great practical importance. Overall I think the ideas in this paper give it the potential to be amongst the most interesting at the conference.\n\nIn general the paper is very well written and clear. It has well-chosen comparisons with alternative methods that demonstrate convincingly that the proposed method achieves results that are at least as good as the conventional LDDMM method (with the important caveat highlighted below), whilst requiring just a single forward pass of the model at test time.\n\nI am also very pleased that the authors have made their code publicly available, and that they have separated this into a re-usable library and code to reproduce the specific experiments detailed in the paper.\n\nOverall I feel this is a novel and important methodological contribution but I have some serious concerns that would need to be addressed before the paper can be accepted. I think it is plausible that this could be achieved in the brief rebuttal period, and if so I would be happy to recommend this paper for acceptance.", "cons": "\nI have two very serious concerns about this paper and a number of other suggestions for improvement. \n\nMajor Concern 1: Lack of Results Demonstrating Generalisation to Novel Images\n----------------------------------------------------------------------------------------------------------------\n\nIn the discussion of the dataset, no mention is made of a held-out test set used to evaluate the model. The only numerical results in the paper are in Figure 2, which relate to the value of the objective function *during training*. I am therefore led to conclude that the images presented in Figures 3 and 4 may well also come from the set of 25 images used for training, as there is no indication otherwise (there is a possibility that this is a misunderstanding). It is unreasonable to evaluate the performance of the model on the dataset that it was trained on, as the neural network may have overfit to the very small number of cases in the training set and the momentum fields it produces for novel images may not be meaningful. The entire point of the diffeomorphic autoencoder model is that it can rapidly register novel images to the learnt atlas. 
Therefore it is of utmost importance that the paper include numerical results demonstrating the performance of the model on unseen images to show that the model has not overfit, and that all figures showing registrations are clearly from images that were not used to train the model so that they reflect the expected quality of the registrations on novel images. I believe that the paper cannot be published without this; however, it also seems likely that the authors should be able to rectify this issue quite quickly, especially as there is plenty of unseen data available in the OASIS dataset. For this reason I am advising \"reject\" at this point in time; however, I hope that I will be able to change this in the future if satisfactory changes are made.\n\nMajor Concern 2: Insufficient Discussion Of Relationship to Previous Work and Omitted Citations\n-------------------------------------------------------------------------------------------------------------------------------------\n\nThe second serious concern I have is that whilst I believe there is considerable novelty in the proposed method the authors need to take far greater care to highlight their contributions relative to existing work, including some that is not cited in the submitted manuscript.\n\nFirstly, the authors should discuss the relationship between their paper and Yang et al. 2017 (\"Quicksilver...\") in greater depth. There are important similarities between that paper and the submitted manuscript that are not discussed. Yang et al. also use a neural network to predict the momentum parametrisation of a geodesic shooting LDDMM method directly from the input image, but this is not sufficiently acknowledged in the literature review, where this is presented as being entirely novel. From my understanding the key differences between Yang et al. 2017 and the submitted manuscript are that \n\na) Yang et al. rely on precomputed momentum estimates to supervise training of their CNN model whereas the submitted manuscript actually trains its model by backpropagating through the differentiable EPDiff method and directly optimises the registration loss function\nb) The submitted manuscript also learns an atlas along with the momentum encoder whereas Yang et al. are only concerned with registering a single moving image to a single, known target image.\n\nThese are both very important contributions but I would very much like to see the differences more clearly delineated in the manuscript.\n\nFurthermore, and more importantly, the authors do not discuss the relationship to the following paper:\n\nUnsupervised Learning for Fast Probabilistic Diffeomorphic Registration\nAdrian V. Dalca, Guha Balakrishnan, John Guttag, and Mert R. Sabuncu\nMICCAI 2018, pages 729-738\nhttps://link.springer.com/chapter/10.1007%2F978-3-030-00928-1_82\nor\nhttps://arxiv.org/pdf/1805.04605.pdf\n\nThis work is very close to the current paper in that both use neural networks to speed up diffeomorphic registration, though Dalca et al. do not include atlas learning as part of their framework. Furthermore there appear to be differences in the way that the two papers parametrise the diffeomorphic deformation and implement the solution of the resulting differential equation but unfortunately my knowledge of the underlying mathematics here is not sufficient to comment on this with authority in limited time. What is clear, however, is that the current paper needs to discuss its relationship to Dalca et al. 
in some technical detail.\n\nAssorted Minor Comments and Suggestions:\n--------------------------------------------------------------\n\nNow on to some other suggestions that would improve the paper but should not prevent acceptance.\n\nTo start with, a very easy fix - the Figure references have somehow become mixed up. The text frequently refers to Figure 3.2 and 3.3 but there are no such figures!\n\nIn my opinion there is room for improvement in the schematic diagram in Figure 1. It would be clearer if the encoder network were explicitly drawn and the process by which the atlas is involved in the learning process made clearer. It is important that this figure is clear to give readers the best chance of understanding the novel and complex method quickly.\n\nAnother concern is that the authors used only a very very small subset of the available OASIS dataset -- 25 out of nearly 2000 volumes -- to construct their atlas, but no justification for this is offered. It seems like their proposed method should be trivially scalable to any number of volumes, so why not use them? It occurs to me that this may be because the authors wanted a fair comparison with the standard LDDMM technique, and it would take a long time to use the standard LDDMM method on nearly 2000 images. This would be a good justification but the authors should state it explicitly.\n\nThe authors claim that they \"did not observe need for [...] heuristics such as batch normalization\" (section 3.2, first paragraph). However using batch normalisation (or occasionally other sorts of normalisation) has become a de facto standard in neural networks because it is consistently observed to speed up training considerably and also improve generalisation performance. I would suggest the authors try using batch norm in future experiments, unless they are using very small batch sizes.\n\nThe authors claim that one potential reason that the proposed neural network method may reach a lower value of the loss function on the training data than the standard LDDMM method is that mini-batch updates can be used in the optimisation process (section 3.2, second paragraph). It is difficult to assess this however, as the authors do not state what size of minibatch they used in their experiments.\n\nThe primary justification for using the Diffeomorphic Autoencoder to predict the momentum initialisation for EPDiff is that it should be considerably faster than having to perform an iterative optimisation method to register a new image to the atlas. This is alluded to in section 2.2, but there are no results that actually investigate this. The paper would be far more compelling if results were given for how much faster a novel image can be registered to the atlas using the diffeomorphic auto-encoder versus using the standard iterative method. To be clear: I expect that their method is much faster, it has just not been demonstrated. Without this, the authors have not really demonstrated that their method has any advantage over the standard method.\n\nIn the explanation of the momentum encoder model, it is quite unclear how the 64D latent code is transformed into the momentum vector field. The paper simply states the latent code is \"followed by the output vector field of the network of size 3 \u00d7 83 \u00d7 118 \u00d7 110\". More detail is needed here. Are there some upsampling or transposed convolutional layers here or just a fully connected layer followed by a reshaping? I briefly looked at the source code but was not immediately able to figure it out. 
The reader should be able to understand this without looking at the source code.\n\nThis leads me to a comment on the encoder model. It seems to me that the purpose of including the bottleneck (the 64D layer) in the model is dubious. On the one hand, it allows you to do some nice interpolations in the latent space as shown in Figure 4, but is this really practically very useful? Maybe it is if you are looking to do certain types of shape modelling, but this is only briefly alluded to in the conclusion and not in much detail. It also provides a degree of regularisation, but it's not clear that this is necessary. On the other hand, it will likely reduce the ability of the model to match fine details of the two images. Why not instead use a U-Net-like model (or any image-to-image model without a bottleneck) to output the momentum vector field directly? This would enable the network to consider both global and local information in the input image when creating the momentum image, and would therefore probably be more able to match fine details and give a better registration. Dalca et al. (see above) use this approach. The paper would benefit from some justification of the authors' choice here, and the authors may like to consider this carefully in future work.\n", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["Bkl_oh-aV4", "r1eJMpWpNE", "HkgUcw0erV", "rJgq6vAgHN", "ryxW1RAgSE", "SkgoWBkWHE"], "comment_cdate": [1549765791723, 1549765894969, 1550014349787, 1550014402016, 1550015960681, 1550017794897], "comment_tcdate": [1549765791723, 1549765894969, 1550014349787, 1550014402016, 1550015960681, 1550017794897], "comment_tmdate": [1555945990073, 1555945989794, 1555945961868, 1555945961649, 1555945961393, 1555945961166], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper19/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper19/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper19/AnonReviewer1", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper19/AnonReviewer1", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper19/AnonReviewer2", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper19/AnonReviewer3", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to reviewer comments", "comment": "Firstly, we greatly appreciate the thoughtful reviews we received. We feel that our approach is promising, but we regret that the results we included in the submission are limited and that we did not sufficiently distinguish our work from the existing literature.\n\nGeneralization results and sample size:\n\nWe originally ran the study with 25 images due to the long training time involved and because we had not tested the distributed version of our code in order to use multiple GPUs. Following the submission we focused on scaling these methods and now have fully distributed implementations for all the methods we presented. 
Running on 16 nodes (96 GPUs) of the Summit supercomputer, we are now able to train either a conventional LDDMM atlas or a diffeomorphic autoencoder in under two hours using all 1983 images in the OASIS-3 dataset (downscaled by a factor of two in each dimension), and we are working toward achieving full- and super-resolution results by further improving computational performance.\n\nWith respect to generalization, we have now investigated the 25-sample experiment further and indeed found overfitting, with test loss for the autoencoder significantly above the directly minimized loss on the test set. We have added dropout layers to the fully connected portion of our network and are experimenting with other regularization methods like weight decay to try and overcome this.\n\nOur parallel implementation allows us to use many more datapoints to alleviate overfitting, so we are now experimenting using up to about 1600 training examples. So far, in the large-sample regime we find underfitting, wherein the conventional atlas achieves a lower training objective function than the diffeomorphic autoencoder and the autoencoder atlas image is less crisp than the conventional atlas. We are adjusting the network architecture to increase flexibility and experimenting with parameter choices to try and overcome this.\n\nComparison to other methods:\n\nAs R1 and R2 pointed out, we did not adequately contrast our work with the Quicksilver and VoxelMorph approaches of Yang et al. (2017) and Dalca et al. (2018), respectively.\n\nThe Quicksilver approach applies a fully convolutional neural network architecture to pairs of images (each image a separate channel of the input), in order to predict an initial momentum vector field used for LDDMM image matching. This network is trained using momenta that were precomputed in an initial image registration or atlas building step. This approach differs from ours in two important ways: a) it uses conventional LDDMM geodesic shooting as a preprocessing step and b) their predictive deformation model directly uses both the fixed and moving images as inputs.\n\nIn contrast to (a), in our method, LDDMM is incorporated directly into the neural network itself, and the resulting deformed images are used in an MSE loss function. This allows our method to couple the estimation of these momenta across the entire population in order to essentially do dimension reduction in momentum space without an initial LDDMM step, something that is not possible if one starts by estimating each momentum vector field independently.\n\nThe VoxelMorph approach of Dalca et al. is similar to Quicksilver in that it is primarily used for pairwise image registration. It differs in its use of the stationary vector field flow model of Ashburner (2007), and the iterated scaling and squaring methodology for integration of the flow, as opposed to LDDMM. This difference is important to atlas building, since the resulting model does not constitute a distance metric, which is necessary for the atlas formulation as a Frechet mean on the diffeomorphism group.\n\nIn the manuscript, we will more clearly describe these differences.\n\nOther issues:\n\nAll three reviewers rightly commented that the clarity could be improved with respect to the motivation of the work, as well as the approach. 
We should have been clearer that, as R1 mentioned, our primary motivation is not to build new atlases with the trained model, but rather we view the atlas-building methodology as a form of dimension-reduction in the space of diffeomorphisms without a precomputed image registration step. From this perspective, generalization means mapping novel images into a low-dimensional space of diffeomorphisms, which approximately registers the images to the atlas. \n\nWe will rework Fig. 1 and the methods section to do a better job of communicating the architecture we used. We will clearly state that the encoder portion of our network is of the usual kind, a CNN mapping each image to a low-dimensional latent space. The decoder portion of our network maps that latent space to a reconstructed image, but unlike the usual method that uses up-convolutions to directly form the decoded image, our decoder is split into three pieces: a \"momentum decoder\" with a conventional neural architecture that produces a vector field, an EPDiff integration layer, and an \"atlas interpolation\" layer."}, {"title": "Atlas estimation method", "comment": "We did not specify clearly enough that we use fixed step size gradient descent on a voxel-by-voxel basis to optimize the atlas image, accumulating image gradients across iterations and taking one gradient step per epoch. Although, in the continuum, LDDMM atlas building admits a closed-form solution for the atlas image, namely the average of the transformed input images weighted by the Jacobian determinant, on a finite grid this is no longer the case. A closed-form approximation to the continuum solution is available in lagomorph, as is a Jacobi method, but in our experiments we have found that simple gradient descent reliably provides the best convergence. We will note this in the paper.\n"}, {"title": "Final Thoughts From Reviewer 1", "comment": "Conclusion: reject due to inadequate results, but I'm very excited to see where this work goes next\n\nI think everyone is in agreement here that the authors propose a very interesting method and that this will become an excellent paper. It is novel, elegant, and of great practical importance. Unlike the other reviewers, I am however still of the opinion that the work is not yet ready and that the authors should wait for a future conference or journal. In my view this will be to their advantage because they will have a *much* more convincing paper with a little more work.\n\nThe sticking point for me is that I strongly feel that the paper is inadequate without results showing the generalisation performance on unseen data. Especially since the authors state that registering unseen images to the atlas quickly is the key motivation for their work. The authors state in their response that \n\n\"we have now investigated the 25-sample experiment further and indeed found overfitting, with test loss for the autoencoder significantly above the directly minimized loss on the test set\"\n\nThis is completely unsurprising to me; I would be amazed if a CNN trained from scratch on 25 images for any task did not demonstrate very significant overfitting. 
This essentially means that the paper completely fails to demonstrate that it meets its main objective.\n\nIn their response the authors describe larger-scale experiments that they have done in the interim; it is unclear from their response, but it does not appear that they consider these ready to publish in an updated manuscript if accepted.\n\nI am very satisfied with the authors' responses to my questions about the relationship to previous works. They have clearly thought about this in great detail and raise many important points. I also think their suggestions with regard to renaming the parts of the model would greatly help to clarify the paper."}, {"title": "Some extra advice to authors", "comment": "What follows is intended as friendly advice that the authors might like to consider in their future work, whether or not this manuscript is accepted. On their own these points should not hold this manuscript back if the other reviewers and area chair feel it should be accepted.\n\nI would say the authors are wasting their time trying to fix the generalisation performance on the small dataset with dropout and regularisation - they just need a much bigger dataset to demonstrate that the method works. They have one, so this should not hold them back.\n\nIn the authors' small comment below entitled \"Atlas estimation method\" they say that they take one gradient step per epoch. Furthermore in their response to reviewers, they describe requiring an extremely large number of GPUs in order to use a large number of training images. I may be reading too much into these comments, apologies if this is the case, but to me this suggests that the authors have a fundamental misunderstanding of how batch stochastic gradient descent works in neural network training. It is, in my opinion, folly to try to accumulate gradients across every image in the dataset before making a parameter update, especially when the dataset numbers in the thousands. This will require a huge amount of GPU memory and very complex distribution schemes. Furthermore, using batch sizes that are too large can hurt optimisation performance, and this may well be part of the reason for the underfitting that they are observing. Neural networks are typically trained using random batches of around 16, 32, or 64 images, and this can be achieved with only a handful of GPUs. Consequently an epoch will consist of many batches, with one parameter update occurring after each batch. I strongly suggest that the authors experiment with different, much smaller, batch sizes - I think they will find this makes their lives much easier (although their models will train more slowly of course).\n\nI am still confused by the motivation for performing dimensionality reduction in the encoder-decoder. Though I alluded to it in my initial review, the authors did not respond to this part, but I think it's very important so I'm going to try and explain myself better. It seems to me that the authors need to decouple two motivations that seem to have become unhelpfully conflated in the paper: 1. provide a dimensionality reduction in the space of diffeomorphisms and 2. give quick and high-quality diffeomorphic registrations of unseen images to the learnt atlas by predicting an image's momentum parametrisation in a single shot with a neural network (thereby avoiding very slow iterative optimisation). In my opinion the second of these two motivations is of far greater practical importance, though the first may have some niche applications. 
Maybe the dimensionality reduction is part of a larger scheme of work of which I am unaware, but that is my opinion taking the paper at face value.\n\nTo achieve the second aim, all that is needed fundamentally is a network to map from the image to the momentum parametrisation. In my mind, from the view of a neural network this is just like any other image-to-image mapping problem such as segmentation, for example. Fixating on the dimensionality reduction is, if I had to guess, the most important reason for the underfitting observed with the large dataset that the authors mention in their response. This is like trying to perform segmentation through a bottleneck - the bottleneck will reduce the ability of the network to use the fine details of the input image. This is why successful segmentation models like the UNet and FCN have skip connections, precisely to avoid this information bottleneck and allow the later layers access to the local, low-level details. In the case of this work, I would expect the presence of the bottleneck to lead to very blurred atlases due to a lack of ability to match fine details when doing the registration. The authors state in their comment that this is exactly what they have been observing. Why not just map straight from the image to the momentum parametrisation with some sort of image-to-image convolutional network (e.g. a UNet) and avoid the bottleneck entirely? I strongly suspect the authors would see the underfitting problem significantly reduce if they remove the bottleneck."}, {"title": "Final thoughts from Reviewer 2", "comment": "Overall I agree with the concerns raised by R1. Unlike R1, I believe that if the authors manage to reformulate their introduction and method description appropriately to emphasise that the atlas (mean intensity and shape image) is rather a by-product of the U-net-based group-wise registration learning procedure, this work still merits discussion at a conference such as MIDL. Also, the authors recognize that their work needs to be more directly compared to related pair-wise registration methods. In case of acceptance, and based on the comments of all reviewers, the authors should feel very encouraged to build on this work, and submit an article with a more thoroughly presented and better validated method to a suitable journal.\n\nBecause acceptance demands major revision of the manuscript, I am however reducing my score to better reflect this."}, {"title": "Final thoughts from Reviewer 3", "comment": "After reading the rebuttal and the other reviews, I still think that the paper introduces a very interesting method that is in principle of high interest to the MIDL audience. However, especially the comments from the authors on their overfitting issues (with 25 data sets)/underfitting issues (with 1600 sets) caused me to change my rating to reject. I think in its current form the work/paper is too premature to be published. 
The overfitting/underfitting issues need to be solved and the paper definitely needs some kind of quantitative comparison to other DL-based registration approaches.\n\nHaving said that, I still encourage the authors to submit their (updated) work to an upcoming conference or a journal."}], "comment_replyto": ["Hkg0j9sA1V", "SyWiNc1XN", "Hkg0j9sA1V", "HkgUcw0erV", "Syek2RP3QE", "SyWiNc1XN"], "comment_url": ["https://openreview.net/forum?id=Hkg0j9sA1V&noteId=Bkl_oh-aV4", "https://openreview.net/forum?id=Hkg0j9sA1V&noteId=r1eJMpWpNE", "https://openreview.net/forum?id=Hkg0j9sA1V&noteId=HkgUcw0erV", "https://openreview.net/forum?id=Hkg0j9sA1V&noteId=rJgq6vAgHN", "https://openreview.net/forum?id=Hkg0j9sA1V&noteId=ryxW1RAgSE", "https://openreview.net/forum?id=Hkg0j9sA1V&noteId=SkgoWBkWHE"], "meta_review_cdate": 1551356609014, "meta_review_tcdate": 1551356609014, "meta_review_tmdate": 1551703153820, "meta_review_ddate": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "Reviewers positively highlighted that the authors made their implementation publicly available. All reviewers agree that the work is promising, interesting, and would likely create good discussions. However, they also raised serious concerns about the current state of the presented experiments. In particular, regarding issues of overfitting and the separation of training and testing data. In the submitted manuscript the approach was tested with 25 samples only. While the authors' rebuttal highlights that more experiments with a much larger number of samples (and presumably also with a clear train/test split) are in the works, it appears unclear at this point what the results will show. There is also relatively limited discussion or comparison to related approaches (e.g., VoxelMorph or Quicksilver), though such a discussion could be easily added in a final version of the paper. Based on these concerns and the rebuttal by the authors two reviewers now recommend rejection and only one recommends acceptance. Hence, the work may not be ready for inclusion in MIDL at its current stage and may greatly benefit from future inclusion and discussion of the experiments currently being conducted. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Hkg0j9sA1V&noteId=ryxYazLrLV"], "decision": "Reject"}
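
The pieces described across this thread (a CNN encoder producing a 64-D code, a "momentum decoder" producing a vector field, EPDiff geodesic shooting, warping of a learnable atlas image, and voxel-wise gradient descent on the atlas) assemble into a single training loop. The continuum closed form referenced in the "Atlas estimation method" comment is, in one common Joshi-style convention with input images $I_k$ and estimated deformations $\varphi_k$,

$$\bar{I}(x) \;=\; \frac{\sum_k |D\varphi_k(x)|\, I_k(\varphi_k(x))}{\sum_k |D\varphi_k(x)|},$$

i.e. the Jacobian-determinant-weighted average of the deformed inputs. Below is a minimal PyTorch sketch of the training loop, assembled only from what the thread states. It is an illustration, not the authors' implementation: every layer size and name is invented, and the EPDiff shooting step, handled in the real system by the authors' lagomorph library, is replaced here by a trivial one-step placeholder.

```python
# Hypothetical sketch of the diffeomorphic autoencoder described in this thread:
# encoder -> 64-D code -> momentum vector field -> deformation -> warped atlas.
# All names and shapes are illustrative; the EPDiff integration is NOT implemented
# here (a faithful version would integrate EPDiff, e.g. via lagomorph).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffeoAutoencoder(nn.Module):
    def __init__(self, atlas_shape=(32, 32, 32), latent_dim=64):
        super().__init__()
        D, H, W = atlas_shape
        self.atlas_shape = atlas_shape
        # Learnable atlas image: optimized voxel-by-voxel alongside the network.
        self.atlas = nn.Parameter(torch.zeros(1, 1, D, H, W))
        # Encoder: image -> low-dimensional latent code.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * (D // 4) * (H // 4) * (W // 4), latent_dim),
        )
        # Momentum decoder: latent code -> 3-channel momentum vector field.
        # (A single linear layer + reshape; the paper leaves this choice open.)
        self.momentum_decoder = nn.Linear(latent_dim, 3 * D * H * W)

    def shoot(self, m):
        # PLACEHOLDER for EPDiff geodesic shooting: the momentum is simply
        # treated as a small displacement added to the identity sampling grid.
        B = m.shape[0]
        identity = F.affine_grid(
            torch.eye(3, 4).unsqueeze(0).expand(B, -1, -1),
            list(m.shape), align_corners=False,
        )  # identity grid, shape (B, D, H, W, 3) in normalized coordinates
        return identity + m.permute(0, 2, 3, 4, 1)

    def forward(self, x):
        z = self.encoder(x)
        m = self.momentum_decoder(z).view(-1, 3, *self.atlas_shape)
        phi = self.shoot(m)
        # "Atlas interpolation" layer: warp the template by the deformation.
        atlas = self.atlas.expand(x.shape[0], -1, -1, -1, -1)
        return F.grid_sample(atlas, phi, align_corners=False)

# Minibatch training: one optimizer step per batch updates both the
# registration network and the atlas voxels under an MSE reconstruction loss.
model = DiffeoAutoencoder()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
batch = torch.randn(4, 1, 32, 32, 32)  # stand-in for OASIS-3 volumes
loss = F.mse_loss(model(batch), batch)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the atlas voxels are ordinary parameters of the module, every minibatch step updates the template together with the encoder, which is the property the authors cite as enabling stochastic minibatch training; R1's advice about smaller batches amounts to taking such steps per batch rather than accumulating gradients over the full dataset per epoch.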