AMSR/conferences_raw/midl19/MIDL.io_2019_Conference_BkltUK71xV.json
{"forum": "BkltUK71xV", "submission_url": "https://openreview.net/forum?id=BkltUK71xV", "submission_content": {"title": "AnatomyGen: Deep Anatomy Generation From Dense Representation With Applications in Mandible Synthesis", "authors": ["Amir H. Abdi", "Heather Borgard", "Purang Abolmaesumi", "Sidney Fels"], "authorids": ["amirabdi@ece.ubc.ca", "heather.borgard@ubc.ca", "purang@ece.ubc.ca", "ssfels@ece.ubc.ca"], "keywords": ["Deep generative model", "3D convolutional neural network", "Shape generation", "Geometric morphometrics", "Shape interpolation"], "TL;DR": "A deep architecture to generate 3D anatomies from an abstract representation", "abstract": "This work is an effort in human anatomy synthesis using deep models. Here, we introduce a deterministic deep convolutional architecture to generate human anatomies represented as 3D binarized occupancy maps (voxel-grids). The shape generation process is constrained by the 3D coordinates of a small set of landmarks selected on the surface of the anatomy. The proposed learning framework is empirically tested on the mandible bone where it was able to reconstruct the anatomies from landmark coordinates with the average landmark-to-surface error of 1.42 mm. Moreover, the model was able to linearly interpolate in the Z-space and smoothly morph a given 3D anatomy to another. The proposed approach can potentially be used in semi-automated segmentation with manual landmark selection as well as biomechanical modeling. 
Our main contribution is to demonstrate that deep convolutional architectures can generate high fidelity complex human anatomies from abstract representations.", "pdf": "/pdf/3bc0f0b0c199b4f3acc2f6b6e7d4c7a22c99b3d1.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "abdi|anatomygen_deep_anatomy_generation_from_dense_representation_with_applications_in_mandible_synthesis", "_bibtex": "@inproceedings{abdi:MIDLFull2019a,\ntitle={AnatomyGen: Deep Anatomy Generation From Dense Representation With Applications in Mandible Synthesis},\nauthor={Abdi, Amir H. and Borgard, Heather and Abolmaesumi, Purang and Fels, Sidney},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=BkltUK71xV},\nabstract={This work is an effort in human anatomy synthesis using deep models. Here, we introduce a deterministic deep convolutional architecture to generate human anatomies represented as 3D binarized occupancy maps (voxel-grids). The shape generation process is constrained by the 3D coordinates of a small set of landmarks selected on the surface of the anatomy. The proposed learning framework is empirically tested on the mandible bone where it was able to reconstruct the anatomies from landmark coordinates with the average landmark-to-surface error of 1.42 mm. Moreover, the model was able to linearly interpolate in the Z-space and smoothly morph a given 3D anatomy to another. The proposed approach can potentially be used in semi-automated segmentation with manual landmark selection as well as biomechanical modeling. 
Our main contribution is to demonstrate that deep convolutional architectures can generate high fidelity complex human anatomies from abstract representations.},\n}"}, "submission_cdate": 1544661329174, "submission_tcdate": 1544661329174, "submission_tmdate": 1561397365561, "submission_ddate": null, "review_id": ["rkeCMAHDmN", "Bkg8cN2i7V", "Skxp77YiXV"], "review_url": ["https://openreview.net/forum?id=BkltUK71xV&noteId=rkeCMAHDmN", "https://openreview.net/forum?id=BkltUK71xV&noteId=Bkg8cN2i7V", "https://openreview.net/forum?id=BkltUK71xV&noteId=Skxp77YiXV"], "review_cdate": [1548340757538, 1548629133898, 1548616485503], "review_tcdate": [1548340757538, 1548629133898, 1548616485503], "review_tmdate": [1549896443189, 1548856746763, 1548856744439], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper31/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper31/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper31/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BkltUK71xV", "BkltUK71xV", "BkltUK71xV"], "review_content": [{"pros": "The paper is well and clearly written. It is somewhat original since I have not yet seen networks reconstructing voxel-based shapes from landmarks and vice versa at that resolution. The resolution is impressive. The abstract and introduction are well written and motivated. The paper is slightly above the page limit but I think that is adequate. The paper is reproducible, especially since both code and data are or will be made available.\n\n\nedit February 11th:\nI changed my review from reject (tending to strong reject) to accept.\nThe authors did a lot of work to actually address my concerns. My major concerns were addressed and I think the manuscript should be in much better shape now. 
\nThe things that are still not intuitive to me are:\n- \"We further clarified that the main clinical application of this method is in \u201cmandibular shape reconstruction for surgical planning\u201d where the normal pre-morbid mandibular form is unknown.\" - Why do you have landmarks available for that task? From the task I would expect a full skull model estimating the mandible from the full skull.\n- \"Lastly, readers will be instructed on the fact that shape generation from incomplete observation is, intrinsically, \"an ill-posed problem\" and, theoretically, there cannot be a unique solution, which results in a one-to-many mapping.\" - I agree on that, but this paper does not model it as a one-to-many mapping (like other works do by modeling the posterior distribution).\n", "cons": "Coming from the shape modeling community, I find that the paper makes some odd design choices, and especially the validation of the approach should be improved.\nMy main criticism is the choice of landmarks as the latent representation. This leads to a one-to-many mapping to shapes. The shape modeling community tends to model a posterior distribution in such a case. Usually, the aim is to learn this latent representation and I only see drawbacks in this explicit choice. Compared to other approaches, the task is fully supervised, and therefore the spatial resolution is less impressive than for an unsupervised method learning the latent representation. The statement that this resolution was not reached before should at least be put in the context of \"Octree Generating Networks: Efficient Convolutional Architectures for High-Resolution 3D Outputs\" at ICCV 2017, which presents a convolutional decoder reaching a resolution of 512^3.\nThe task of reconstructing a shape from landmarks is well studied, e.g. by the modeling of posterior distributions. \nThe weakest part of the paper is the experiments. 
It is hard to estimate the performance of the approach based on those experiments, since the experiments and visualizations that seem obvious to me (see later) are missing. I honestly expect its performance is pretty bad.\nI would suggest further work on the paper, especially improving the experiments.\n\n\nHere are some detailed comments and suggestions:\n\nIntroduction: \n- The stated limitations of classical SSMs are not fair - not all models are limited to variation by principal modes; a lot of models allow some additional deformations that are regularized not by the statistics of the training data (e.g. Gaussian processes).\n- The statement that the mandible is one of the most complicated and variable anatomies of the human body is weak - it is not obvious to me why that is the case. I would also not agree that cars, chairs and tables are well-formed shapes. The challenges are different; chairs or teeth, for example, have the challenge of adding or removing legs/roots.\n- The choice of the network architecture looks arbitrary. Choices in the methods part are not motivated and it basically only contains the architecture and the loss functions. The sentence \"we experimented with many deep neural architectures, one of which is depicted in Figure 1\" is perhaps honest but suggests a trial-and-error approach instead of a deeper idea behind the architecture.\n- Section 3.3 is named Experiments but contains a description of the chosen latent space. Since the learning is fully supervised I would expect that to be part of the methods section. The actual experiments performed are described in the results section.\n- The results section contains two different tasks, landmark estimation and reconstruction. I think additional structure with titles would improve readability.\n- Table 1 shows the reconstruction performance given landmarks. The presented values appear extremely high to me. 
Instead of comparing it to a proper baseline it is compared to the task of segmentation, which does not make sense in my eyes. As a simple baseline I would propose to add the thresholded average of the original voxel maps or a reconstruction based on the average landmarks. This would indicate if and how much better the reconstruction is than just taking the average of the data.\n- The average fiducial-to-surface distance measured was 1.89 mm - this again feels quite big. Other publications working on mandible landmarks show the performance per landmark - this could perhaps be added to Figure 2 (since some landmarks are not well defined, like the one on the tip of the teeth in front, which is not available in the full dataset).\n- For the average surface distance, 1.2 mm should also be set in the context of the average as prediction.\n- The shape modelling community came up with some measurements of modeling quality (generalization, specificity, compactness). A full loop would therefore be interesting: New unseen shape -> estimate landmarks -> reconstruct shape.\n- Figure 3 is missing the landmarks - without the given landmarks those reconstructions don't help in estimating the quality of the reconstruction.\n- The landmark reconstruction performance of 3.84 is again hard to evaluate without comparison or context. Since no values to compare are given I had to search for one - so I don't know if the comparison is fair, but \"Deep Geodesic Learning for Segmentation and Anatomical Landmarking\" (TMI 2018) estimated landmarks from images with a segmentation step before - they reach ~ 1mm. Adding a figure to allow a qualitative estimation of the quality could help here. 
Again I would propose to add a landmark-wise number for this to Figure 2.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "* Deep model to generate high-resolution (140^3) mandible images from the set of surface landmarks (29 landmarks)", "cons": "* It is not clear why the f(V) network (auxiliary network) is required in this image generation task. For example, because the input Z is the coordinates of the surface landmarks, the f and g models can be formed as a cycle model for cycle consistency. \n* More training samples are desirable. Currently the number of training samples is just 87. \n* It is desirable to show the input landmarks together in Figure 4. It may be better to understand the mapping between the landmarks and output images (mandible shapes) by the g model.\n* Comparison with the segmentation methods is very confusing. It is recommended to measure the surface distances between the generated model and the surface mesh from which the input landmarks are extracted, for the accuracy evaluation in the surface generation. \n", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "\nSummary: \nAuthors present AnatomyGen, a CNN-based approach for mapping from low-dimensional anatomical landmark coordinates to a dense voxel representation and back, via separately trained decoder and encoder networks. The decoder network is made possible by a newly proposed architecture that is based on inception-like transpose convolutional blocks.\nThe paper is written clearly. Methods, materials and validation are of a sufficient quality. 
There are certain original aspects in this work (latent en-/decoding, inception-based decoder network, latent space interpolation, generalization to previously unseen shapes etc.), but the work may not be as original as the authors suggest, since they may not be aware of a very similar work (see Cons), where some of the discussed concepts have already been proposed and explored.", "cons": "\n- Authors explicitly state that the work is not intended for segmentation, but many previous shape modeling works (including SSMs) were used as regularization in segmentation. Authors could comment on how their model could be incorporated into (e.g. deep) segmentation approaches, because I do not see an immediate way to do that without requiring the (precise) image-based localization of mandible landmarks in a test volume.\n- I would recommend weakening or at least toning down certain \"marketing\" claims like \"3 times finer than the highest resolution ever investigated in the domain of voxel-based shape generation\", or \"the finest resolution ever achieved among\nvoxel-based models in computer graphics\". First, it is not fully clear where this number 3 comes from, and second, the quality of the work speaks for itself. Further, there is always the chance that authors are not aware of every piece of related literature (in all of computer graphics), as might be the case here.\n- Authors claim to introduce many concepts for the first time, such as the \"first demonstration that a deep generative architecture can generate high fidelity complex human anatomies in a [...] voxel space [from low-dimensional latents]\". However, I am aware of at least one work where such concepts have been proposed and explored already. CNN-based shape modeling and latent space discovery was realized for heart ventricle shapes with an auto-encoder, and integrated into Anatomically Constrained Neural Networks (ACNNs) [1]. 
Their voxel resolution is only slightly smaller than in this work (120x120x40), with a similar latent dimensionality (64D, here: 3*29=87). Smooth shape interpolation by traversal of the latent space was also demonstrated, and some of their latents also corresponded to reasonable variations in anatomical shape, without being \"restricted\" to statistical modes of variation as discussed here. \n- Compared to the proposed work, where latents represent clinically relevant mandible landmarks, an auto-encoder approach as in ACNN is more general: relevant landmarks as in the mandible cannot be identified for arbitrary anatomies, and a separate training of encoder and decoder as proposed here crucially depends on a semantically meaningful latent space with a supervised mapping to the dense representation (e.g. hand-labeled landmarks vs. voxel labelmaps). In contrast, ACNN auto-encoders train their encoder and decoder in conjunction. How do the authors suggest applying their approach to anatomies where it is impossible (in terms of feasibility and manual effort) to place a sufficiently large number of unique landmarks on the anatomy (e.g. smooth shapes, such as the left ventricle in ACNN)?\n- Authors suggest that their solution \"is not constrained by statistical modes of variation\", as e.g. by PCA-based SSM methods. While I agree that the linear latent space assumption of PCA is too simplistic and the global effect of PCA latents on the whole shape often undesirable, the ordering of latents according to \"percent of variance explained\" is actually desirable in terms of interpretability. \n\n[1] Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, et al. Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation. IEEE Trans Med Imaging. 2018;37(2):384\u201395. 
", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["SJlNOd43NN", "HJxv-0EnNN", "H1lNUtNnVE", "rygDyo-1B4"], "comment_cdate": [1549711467685, 1549712895093, 1549711692508, 1549896414613], "comment_tcdate": [1549711467685, 1549712895093, 1549711692508, 1549896414613], "comment_tmdate": [1555945999434, 1555945999172, 1555945992848, 1555945967281], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper31/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper31/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper31/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper31/AnonReviewer3", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Figures updated (see links), Dataset size increased; Metrics clarified", "comment": "Summary: The following changes are applied in the manuscript to address all of the reviewer's comments:\n- size of training set is increased, \n- landmarks are visualized both figures (links to figures: https://bit.ly/2GmEUst https://bit.ly/2SaHjO6), \n- metrics are clarified and explained in more details,\n- two surface distance metrics (\"landmarks to surface\" and \"surface to surface\"), which were included in the manuscript and further highlighted by the reviewer, are explained in more details to increase readability,\n- the F(V) model is removed from the manuscript and included as the future paths.\n\nThe details of the applied changes and responses to the reviewer are provided below.\n\n\nReviewer\u2019s comment: It is not clear why f(V) network (auxiliary network) is required in this image generation task. 
For example, because the input Z is the coordinates of the surface landmarks, the f and g models can be formed as a cycle model for cycle consistency.\n \nAnswer: Good point. As stated in the last paragraph of \u201cIntroduction\u201d, the f network is a supplementary contribution of our work only to demonstrate that there can be a reciprocal (bidirectional) mapping between the two data spaces. In fact, we also examined the possibility of a cycle-consistent training; however, due to the under-determined nature of this problem, the cycle-consistent loss did not improve the performance.\nFollowing the reviewer\u2019s advice, we will remove the f(V) network from the manuscript, as the f(V) (auxiliary) network is independent of the main contribution (mesh generation).\n \nReviewer\u2019s comment: More training samples are desirable. Currently the number of training samples is just 87.\n \nAnswer: We acquired more samples and increased the dataset size to 103. The manuscript will be updated with the new results.\n \n \nReviewer\u2019s comment: It is desirable to show the input landmarks together in Figure 4. It may be better to understand the mapping between the landmarks and output images (mandible shapes) by the g model.\n \nAnswer: This is a very good idea. Please check the updated Figure 4 with the landmarks included: https://bit.ly/2SaHjO6\nFollowing your suggestion, we also updated Figure 3 and included the landmarks. Please see the updated Figure 3 here: https://bit.ly/2GmEUst\nWe are open to adding more figures as supplementary material if suggested by the reviewer.\n \n \nReviewer\u2019s comment: Comparison with the segmentation methods is very confusing.\n \nAnswer: We will clarify why segmentation-based metrics were reported in the results and discussion sections. 
It will be clarified that these metrics are helpful with respect to two objectives:\n1- They provide a meaningful \u201cbaseline\u201d which the medical imaging community can easily understand, and, in turn, bring our promising results into a familiar context.\n2- The segmentation metrics between the original shape (on which the landmarks were selected) and the generated shape are indicators of the level of similarity of the two. For example, the CMD (contour mean distance) metric calculates the average distance between the surface of the original mesh (where landmarks were selected) and the generated mesh. Moreover, the SO3 (surface overlap) metric captures the degree of overlap between the two surfaces, which is another indication of the similarity of the two shapes.\n \n \nReviewer\u2019s comment: It is recommended to measure the surface distances between the generated model and the surface mesh from which the input landmarks are extracted, for the accuracy evaluation in the surface generation.\n \nAnswer: We did already include this type of metric in the manuscript (please see the Results section); however, as per the reviewer's comment, we have adjusted our approach and made our explanation more explicit to ensure our metrics are more replicable. Specifically, we will clarify that we have used two main metrics to demonstrate the performance. 
The following two metrics are already available in the manuscript:\n1- Fiducial-to-surface distance: Here, fiducials (landmarks) are selected on the original shape and their distance to the generated shape is calculated.\n2- Contour mean distance (CMD): This metric indicates the average distance between the surface of the original mesh (on which the landmarks were selected) and the surface of the generated mesh."}, {"title": "New results for the baseline average shape model and per landmark statistics; Landmarks added to figures; Clinical applications clarified", "comment": "All the comments and suggestions of the reviewer are applied and addressed. The manuscript is in far better shape and we would like to thank the reviewer for their insight. The main updates are:\n- Following your suggestion, an \"Average Mandible Shape\" is generated and all the metrics are computed and compared to our method (please see the data below),\n- Landmarks are included in Figures 2 and 3 for qualitative assessment (links to figures: https://bit.ly/2GmEUst https://bit.ly/2SaHjO6),\n- Statistics per landmark are included,\n- We further clarified that the main clinical application of this method is in \u201cmandibular shape reconstruction for surgical planning\u201d where the normal pre-morbid mandibular form is unknown.\n\nThe detailed responses and edits follow.\n \n\nReviewer\u2019s comment: My main criticism is the choice of landmarks as the latent representation. This leads to a one-to-many mapping to shapes\u2026 \n\nAnswer: We will clarify that the main clinical application of this method is in \u201cmandibular shape reconstruction for surgical planning\u201d where the normal pre-morbid mandibular form is unknown. 
This is along the same lines as \"image inpainting\" but in the 3D geometry domain.\nWe will also clarify that our use of landmarks as the latent representation was only an \"example\" of a partial (summarized) representation and reiterate our path towards other high-level and abstract latents.\nLastly, readers will be instructed on the fact that shape generation from incomplete observation is, intrinsically, \"an ill-posed problem\" and, theoretically, there cannot be a unique solution, which results in a one-to-many mapping. \n \n\nReviewer\u2019s comments: Since \u2026 obvious experiments or visualizations \u2026 are missing. I honestly expect its performance is pretty bad.\n\nAnswer: We generated the requested figure and it is available here: https://bit.ly/2GmEUst\nThe figure is a 600dpi image, so please feel free to zoom in and qualitatively evaluate the conformity of the shape surface to the given input landmarks. We hope that this new evidence improves the perception of the method's performance.\n\n \nReviewer\u2019s comments:\n1- As a simple baseline I would propose to add the thresholded average of the original voxel maps... This would indicate if and how much better the reconstruction is than just taking the average of the data.\n2- For the average surface distance 1.2 mm should also be set in the context of the average as prediction.\n\nAnswer: Upon your suggestion, we generated the average mandible shape and conducted the suggested experiments. Here are the results, which will be added to Table 1:\n\n-- Results of Comparing With Average Shape --\nCMD: Average Shape=2.6 mm vs. Ours=1.2 mm\nHD95: Average Shape=7.6 mm vs. Ours=3.6 mm\nDSC: Average Shape=0.53 vs. Ours=0.73\nSO3: Average Shape=0.76 vs. Ours=0.94\nAverage fiducial-to-surface distance: Average Shape=4.7 mm vs. Ours=1.1 mm\n--\nAs demonstrated above, in all the metrics, our method performed substantially better than the average shape. 
This information will be included in Table 1 upon your suggestion.\n \n\nReviewer\u2019s comments: The average fiducial-to-surface distance measured was 1.89 mm, this feels quite big.\n\nAnswer: We have adjusted the calculation of the fiducial-to-surface distance metric so that the results are replicable. Specifically, we have now calculated this metric directly from the generated occupancy maps as opposed to the converted surface meshes. The average distance of landmarks to surface is 1.1 +- 0.4 mm. This is comparable to the resolution of the occupancy maps, which is \"1 mm\" across all dimensions.\n(Note: We had previously converted the occupancy maps to meshes using Slicer, and subsequently calculated the distance from landmarks to the surface meshes. The conversion in Slicer added a level of noise as it tried to produce smoother surfaces.)\n\n\nReviewer's comment: Other publications working on mandible landmarks show the performance per landmark \n\nAnswer: We also calculated the per-landmark statistics, which will be included. The fiducial-to-surface distance of landmarks ranged from 0.18 mm to 3.36 mm (1.1 +- 0.4 mm).\n \n\nReviewer\u2019s comments: The landmark reconstruction performance of 3.84 is again hard to evaluate without comparison or context.\n\nAnswer: Reviewers pointed out that the manuscript is above the page limit and that including the landmark detection in this article is unnecessary. To address all comments, we were advised to remove the auxiliary network, f, from the manuscript. 
This will open up more room to further discuss and clarify other missing points."}, {"title": "Clarified clinical applications; Included landmarks in figures; Expanded literature and discussion while mitigating claims", "comment": "Summary: Following the reviewer's comments, we applied the following improvements to the manuscript: \n- further clarified our intention for shape generation/reconstruction as opposed to segmentation,\n- landmarks are included in Figures 2 and 3 (links to figures: https://bit.ly/2GmEUst https://bit.ly/2SaHjO6), \n- clarified clinical applications of the method and future goals in the introduction and discussion,\n- suggested articles are discussed in the introduction,\n- claims are toned down throughout the paper.\n\nAll of the reviewer's comments are addressed, the details of which are provided below.\n\n\nReviewer\u2019s comment: Authors could comment on how their model could be incorporated into (e.g. deep) segmentation approaches, because I do not see an immediate way to do that without requiring the (precise) image-based localization of mandible landmarks in a test volume.\n\nAnswer: We will further clarify that we do not intend to do segmentation and resolve any ambiguities for the readers. We would like to reiterate that\n- segmentation is not among the objectives of our method, and \n- the comparison with state-of-the-art mandible segmentations (Table 1) is provided as a relatable baseline for the medical imaging community and to put our shape generation performance into a comparable perspective.\nWe will also clarify that the main clinical application of this method is in \u201cmandibular shape reconstruction for surgical planning\u201d where the normal pre-morbid mandibular form is unknown. Clinicians tend to manually mirror the healthy half of the mandible to estimate the shape, which is inapplicable if the cancer crosses the midline. Our approach is a step towards filling the above-mentioned gap. 
\n \n\nReviewer\u2019s comment: I would recommend weakening or at least toning down certain \"marketing\" claims like \"3 times finer than the highest resolution ever investigated\u2026\u201d.\n\nAnswer: The claims are toned down as suggested by the reviewer.\n\n \nReviewers\u2019 comments: CNN-based shape modeling and latent space discovery was realized for heart ventricle shapes with an auto-encoder and integrated into Anatomically Constrained Neural Networks (ACNNs) (Oktay O et al.).\n\nAnswer: The suggested literature is now discussed in the introduction. While we are aware of the well-known TL-Network architecture and its applications to cardiac data, we would like to highlight the main differences between the ACNNs and the current work:\n1- In the ACNN, the trained encoder of the autoencoder is used as a regularizer in the segmentation and super-resolution tasks; whereas, here, the models independently learn a mapping between two representations of the shape.\n2- In both tasks discussed in the ACNN paper, the image data is available, which constrains the solution and, to some extent, makes it a well-posed problem. However, here, the shape is generated based on a partial observation of the geometry, which is an \u201cill-posed\u201d problem with no unique solution.\n\n \nReviewer\u2019s comment: How do the authors suggest applying their approach to anatomies where it is impossible to place a sufficiently large number of unique landmarks on the anatomy (e.g. smooth shapes, such as the left ventricle in ACNN)?\n\nAnswer: We will clarify our use of landmarks as an \"example\" of a partial (summarized) representation in the discussion and reiterate our path towards other high-level and abstract forms of shape representation. 
To be exact, so far, we have investigated shape generation from the following abstract representations:\n1- \"width, height, length\" of the shape (in mm), \n2- volume of the shape (in #voxels).\nWe are currently working on a variational version of this shape generator.\n"}, {"title": "changed review", "comment": "I changed my review from reject (tending to strong reject) to accept.\nThe authors did a lot of work to actually address my concerns. My major concerns were addressed and I think the manuscript should be in much better shape now. \nThe things that are still not intuitive to me are:\n- \"We further clarified that the main clinical application of this method is in \u201cmandibular shape reconstruction for surgical planning\u201d where the normal pre-morbid mandibular form is unknown.\" - Why do you have landmarks available for that task? From the task I would expect a full skull model estimating the mandible from the full skull.\n- \"Lastly, readers will be instructed on the fact that shape generation from incomplete observation is, intrinsically, \"an ill-posed problem\" and, theoretically, there cannot be a unique solution, which results in a one-to-many mapping.\" - I agree on that, but this paper does not model it as a one-to-many mapping (like other works do by modeling the posterior distribution).\n"}], "comment_replyto": ["Bkg8cN2i7V", "rkeCMAHDmN", "Skxp77YiXV", "HJxv-0EnNN"], "comment_url": ["https://openreview.net/forum?id=BkltUK71xV&noteId=SJlNOd43NN", "https://openreview.net/forum?id=BkltUK71xV&noteId=HJxv-0EnNN", "https://openreview.net/forum?id=BkltUK71xV&noteId=H1lNUtNnVE", "https://openreview.net/forum?id=BkltUK71xV&noteId=rygDyo-1B4"], "meta_review_cdate": 1551356596369, "meta_review_tcdate": 1551356596369, "meta_review_tmdate": 1551881982594, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "In this work a generative convolutional shape network is proposed that creates 3D voxel 
segmentations from only anatomical landmarks. The rating of this submission is not straightforward. On the one hand, all reviewers see some merit in the method and interest in the generated 3D shapes with relatively high fidelity. On the other hand, they all find numerous choices confusing and are not entirely convinced of a realistic application. In addition, there is valid criticism that no proper baseline was evaluated. \nI have to agree that the scope or impact of this method (which only makes sense when a shape has to be created based on landmarks alone) could be limited (because in general these landmarks will have to come from some segmentation). I would imagine that face animation, which models a 3D mesh with many more vertices than landmarks, would be the most closely related computer graphics area (cf. 3D Shape Regression for Real-time Facial Animation, Siggraph 2013). Furthermore, there have been papers on even more ill-posed problems, e.g. estimating 3D shape from 2D landmarks (see 3D Shape Estimation from 2D Landmarks: A Convex Relaxation Approach, CVPR 2015). So it is somewhat hard to believe that no better baseline than the mean shape was employed (after the rebuttal). What would the performance of a simple (regularised) warping algorithm based on known training segmentations (with their landmarks) be? \nThe very relevant work on convolutional autoencoders with latent spaces that do not coincide with landmarks (e.g. ACNNs) should be discussed in more detail. One could easily imagine an alternative strategy, where a CAE is trained for reconstruction with segmentations using a 64D latent space and a simple fully-connected MLP is used to map the 29x3D landmark positions into that space. Would this lead to superior results? Thus, a more thorough evaluation of hyper-parameter and architecture choices would also have been important, as mentioned in the reviews. 
\nNevertheless, despite these shortcomings I narrowly tend towards acceptance because, as argued in the final reviewer evaluations, the method is of interest and to some degree novel, and the results are visually convincing. Yet the paper is still in a somewhat preliminary stage.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=BkltUK71xV&noteId=rkBn3GUHLN"], "decision": "Accept"}