AMSR / conferences_raw / midl19 / MIDL.io_2019_Conference_rJen0zC1lE.json
{"forum": "rJen0zC1lE", "submission_url": "https://openreview.net/forum?id=rJen0zC1lE", "submission_content": {"title": "Image Synthesis with a Convolutional Capsule Generative Adversarial Network", "authors": ["Cher Bass", "Tianhong Dai", "Benjamin Billot", "Kai Arulkumaran", "Antonia Creswell", "Claudia Clopath", "Vincenzo De Paola", "Anil Anthony Bharath"], "authorids": ["c.bass14@imperial.ac.uk", "tianhong.dai15@imperial.ac.uk", "benjamin.billot.18@ucl.ac.uk", "kailash.arulkumaran13@imperial.ac.uk", "antonia.creswell11@imperial.ac.uk", "c.clopath@imperial.ac.uk", "vincenzo.depaola@csc.mrc.ac.uk", "a.bharath@imperial.ac.uk"], "keywords": ["Capsule Network", "Generative Adversarial Network", "Neurons", "Axons", "Synthetic Data", "Segmentation", "Image Synthesis", "Image-to-Image Translation"], "TL;DR": "Synthesising biomedical images using a convolutional capsule generative adversarial network.", "abstract": "Machine learning for biomedical imaging often suffers from a lack of labelled training data. One solution is to use generative models to synthesise more data. To this end, we introduce CapsPix2Pix, which combines convolutional capsules with the pix2pix framework, to synthesise images conditioned on class segmentation labels. We apply our approach to a new biomedical dataset of cortical axons imaged by two-photon microscopy, as a method of data augmentation for small datasets. We evaluate performance both qualitatively and quantitatively. Quantitative evaluation is performed by using image data generated by either CapsPix2Pix or pix2pix to train a U-net on a segmentation task, then testing on real microscopy data. Our method quantitatively performs as well as pix2pix, with an order of magnitude fewer parameters. Additionally, CapsPix2Pix is far more capable at synthesising images of different appearance, but the same underlying geometry. Finally, qualitative analysis of the features learned by CapsPix2Pix suggests that individual capsules capture diverse and often semantically meaningful groups of features, covering structures such as synapses, axons and noise.\n", "pdf": "/pdf/2afa8e6872e8e5effa0fe2c73977d1302653cf7a.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "bass|image_synthesis_with_a_convolutional_capsule_generative_adversarial_network", "_bibtex": "@inproceedings{bass:MIDLFull2019a,\ntitle={Image Synthesis with a Convolutional Capsule Generative Adversarial Network},\nauthor={Bass, Cher and Dai, Tianhong and Billot, Benjamin and Arulkumaran, Kai and Creswell, Antonia and Clopath, Claudia and Paola, Vincenzo De and Bharath, Anil Anthony},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=rJen0zC1lE},\nabstract={Machine learning for biomedical imaging often suffers from a lack of labelled training data. One solution is to use generative models to synthesise more data. To this end, we introduce CapsPix2Pix, which combines convolutional capsules with the pix2pix framework, to synthesise images conditioned on class segmentation labels. We apply our approach to a new biomedical dataset of cortical axons imaged by two-photon microscopy, as a method of data augmentation for small datasets. We evaluate performance both qualitatively and quantitatively. 
Quantitative evaluation is performed by using image data generated by either CapsPix2Pix or pix2pix to train a U-net on a segmentation task, then testing on real microscopy data. Our method quantitatively performs as well as pix2pix, with an order of magnitude fewer parameters. Additionally, CapsPix2Pix is far more capable at synthesising images of different appearance, but the same underlying geometry. Finally, qualitative analysis of the features learned by CapsPix2Pix suggests that individual capsules capture diverse and often semantically meaningful groups of features, covering structures such as synapses, axons and noise.\n},\n}"}, "submission_cdate": 1544704724304, "submission_tcdate": 1544704724304, "submission_tmdate": 1569282035701, "submission_ddate": null, "review_id": ["SyeHbrB1mE", "SJlhJBpdQN", "Hyeg9SjEmV"], "review_url": ["https://openreview.net/forum?id=rJen0zC1lE&noteId=SyeHbrB1mE", "https://openreview.net/forum?id=rJen0zC1lE&noteId=SJlhJBpdQN", "https://openreview.net/forum?id=rJen0zC1lE&noteId=Hyeg9SjEmV"], "review_cdate": [1547814141128, 1548436708504, 1548166535857], "review_tcdate": [1547814141128, 1548436708504, 1548166535857], "review_tmdate": [1550004095332, 1548856730572, 1548856716629], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper46/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper46/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper46/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["rJen0zC1lE", "rJen0zC1lE", "rJen0zC1lE"], "review_content": [{"pros": "Summary:\nThis paper presents a method for conditional image generation based on the Generative Adversarial Network (GAN) framework. In particular, the authors extend the Pix2Pix model by including convolutional capsule networks (CapsNets) in the generator network, and called it CapsPix2Pix. They used Pix2Pix, CapsPix2Pix and a physics-based model to generate images of two datasets conditioned on their segmentation labels that look indistinguishable from the real images. Subsequently, they trained a series of UNet segmentation networks on several real and generated images, and compared their performance on a common test set. \n\nClaims of the paper:\n1. It is possible to train a successful conditional GAN with a CapsNet-based generator. \n2. Pretraining the segmentation network (downstream task for evaluation) with generated images from CapsPix2Pix improves segmentation performance compared to no pretraining. \n3. Training the segmentation network from scratch with generated images from CapsPix2Pix improves segmentation performance compared to training with generated images from Pix2Pix. \n4. CapsPix2Pix generates a large variation of images compared to Pix2Pix.\n\nPros:\n* This is the first paper demonstrating that it is possible to perform conditional image generation using convolutional capsule networks in the generator of a GAN. They managed to generate 256x256 grayscale images.\n* The authors write an extensive description of hyper-parameter values and implementation details. Furthermore, they promise to publish their code and data very soon. 
Doing so will definitely help other researchers to adopt GANs and CapsNets in the future.\n* The paper is well written and easy to follow in general.\n* The authors include plenty of images that provide context and help the reader to understand the methods and results.\n", "cons": "Cons:\n* There is limited methodological novelty in this paper. The authors took an existing network architecture (SegCaps from LaLonde and Bagci 2018) and used it as a generator model in an existing conditional GAN framework (Pix2Pix from Isola et al. 2017). Notice that other authors have used CapsNets in the discriminator before, but not in the generator.\n * A significant part of the paper is devoted to explaining preexisting ideas such as GANs, CapsNet and dynamic routing.\n* There is limited validation regarding the application presented by the authors. In particular, I found that claims (2), (3) and (4) are not sufficiently supported by the evidence shown in the paper:\n * (2): when comparing pretraining UNet with CapsPix2Pix versus no pretraining it, they show a relative improvement of 0.76% only (0.6876 vs 0.6824 Dice), less than 1% difference. A test of statistical significance would be required to justify this claim (see next bullet point for more on statistical tests performed in this paper). At most, it could be said that both techniques achieve similar performance. Furthermore, figures A3 and A4 show indistinguishable performance at convergence.\n * (3): similarly as with (2), a small improvement between techniques is reported, inconclusive without a significance test.\n * (4): this claim is based on Figure A6 where only 1 example is provided. Since there is no page limit on the Appendix, more examples could be shown. Crucially, these examples should not be cherry-picked but selected at random (the authors do not mention how they chose the reported example).\n* Regarding statistical significance, the authors perform T-tests and provide p values. However, variation between performance metrics should not be measured across test samples (Table A1). Instead, the authors should repeat the training of UNet networks multiple times with different weight initialization, obtaining a series of performance measurements where the T-test is performed. For example, let\u2019s say the number of repetitions is 5, and we are interested in comparing PBAM-SSM with pix2pix-AR (first and second entries of Table 1). Then, 10 UNets should be trained, i.e. 5 networks for the first method and another 5 for the next, obtaining 2 series of 5 performance metrics (5 Dice scores per method, each one the average across test samples). Finally, significance would be studied by comparing these two populations with a T-test.\n* The qualitative results and analysis are difficult to follow given the variety of datasets, methods and metrics. 
A few changes could help the reader understand the paper faster:\n * Figure A1 could be part of the \u201cDataset\u201d section.\n * Include standard deviations in the table.\n * Be more explicit with Table 1 (use monospace font for better viewing):\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| Labels | Images | Pretrained | Dice | ROC | PR |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| SSM | PBAM | No | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| SSM | Pix2Pix | No | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| SSM | CapsPix2Pix | No | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| Real | Real | No | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| Real | Pix2Pix | No | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| Real | CapsPix2Pix | No | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| Real | Real | Real-Pix2Pix | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| Real | Real | SSM-Pix2Pix | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| Real | Real | Real-CapsPix2Pix | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| Real | Real | SSM-CapsPix2Pix | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| SSM(*) | Pix2Pix | No | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n\t\t| SSM(*) | CapsPix2Pix | No | \u2026 | \u2026 | \u2026 |\n\t\t+--------+-------------+------------------+------+-----+----+\n* According to Table 1, among the first 6 entries, it seems that training with real data always produces better performance than training with generated images. What are the consequences of such evidence? \n* The idea of pretraining the segmentation model with generated data appears without justification. Is there any hypothesis or intuition explaining why this could improve the performance? \n* The authors compare CapsPix2Pix and Pix2Pix in terms of the number of trainable parameters. However, CapsNets are historically slow and memory intensive. How do these two models compare in terms of GPU memory footprint (weights and activations) and training time (wall clock)?\n* How do you ensure that the generator does not generate images that look realistic to the discriminator but are not biologically plausible? What are the consequences of this potential behavior in the biomedical setting?\n* The caption of Figure 1 should say \u201cCapsPix2Pix generator architecture\u201d since the discriminator is not shown. 
\n* Is there any justification why the segmentation UNets are trained with 64x64 images whereas the generative models produce 256x256 images?\n\nMy acceptance rating is conditional to performing proper statistical analysis or a relaxation of the claims.\n\nEDIT-UPDATE: the authors have addressed all my concerns in their rebuttal; therefore, I confirm the \"accept\" rating.\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "This paper presents a convolutional capsule-based generative adversarial network, similar to pix2pix, that is applied to a simulated and a real microscopy dataset. Adding the synthetic examples generated by the model to train a segmentation network improves the performance of the segmentation model, with a performance improvement comparable to or better than that of the pix2pix network.\n\nI liked reading the paper. The various components are explained well and the approach is relatively easy to follow. The addition of capsules to pix2pix seems to be a novel approach. The experiments look fairly solid (although I am not sure of the number of repetitions, see below).", "cons": "Section 2 notes that LaLonde and Bagci (2018) restricted the dynamic routing to small spatial neighbourhoods, but that the proposed method uses full dynamic routing instead. Does this restrict the size of the images that can be processed by the network?\n\nTo some extent, the proposed synthesis method provides a fancy way to do data augmentation or to add regularisation to the model. It might have been interesting to include one or more of these simpler methods in the comparison in Table 1.\n\nThe Discussion is relatively brief. Although the authors look at the features extracted by the networks in Figure 2, there is not much more in the way of analysis or discussion of how the capsule networks are able to outperform the non-capsule baseline. Is it the fact that they are more efficient? (This isn't really measured in the paper, I think.) Is it because they learn fewer redundant features? There are some hints to the answers in the Abstract, but these things are harder to find in the paper itself.\n\nIt is not clear to me whether the results shown in, for example, Table 1, are based on multiple runs of the algorithms, or whether we are looking at the performance of a single run for each setting. (I obviously hope that we are looking at averages.)\n\nMinor point in Section 2.1: \"D is shown both real and synthetic label-latent pairs\". Shouldn't the discriminator also receive the real and synthetic images?\n", "rating": "4: strong accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "The authors describe their approach CapsPix2Pix for the synthesis of medical image data, which can be used as training data for machine learning. They reach state-of-the-art performance while reducing the number of network parameters by a factor of 7.\n\n- The paper is well written and gives a good overview of the issue.\n- They will release the synthesized dataset and their code to reproduce the results. #openscience\n- The authors do a good job explaining the background and related work. They provide a nice and clear overview of Capsule Networks.\n", "cons": "Abstract\n\nThe authors claim that \u201cThe field of biomedical imaging, among others, often suffers from a lack of labelled data.\u201d. This statement is not totally clear. 
In which aspects does the field suffer from a lack of labeled data? E.g. \u201cmachine learning for biomedical imaging suffers from a lack of labeled training data...)\u201d This is described well in the introduction, but could be made clearer in the abstract.\n\n\nThe authors compare features of pix2pix and their CapsPix2Pix approach in Fig. 2. It is not explained what the presented features are supposed to demonstrate. How are the presented features of pix2pix selected? Please explain this figure better.\n\n\nIntroduction\n\n\u201cA way to resolve this is\u2026\u201d -> What are other ways to resolve this / are there other approaches? E.g. how does synthesizing images compare to more traditional data augmentation as e.g. described by Ronneberger et al.?\n \nBackground\n\nThe authors use the value function V described by Isola et al. They chose the weighting parameter \u03bb=1 instead of \u03bb=0.1. This choice should be explained!\n\n\n\u201cIn initial experiments, we found that standard convolutional discriminators (Radford et al., 2015) performed as well as convolutional capsule discriminators, and so opted to use the former.\u201d -> The authors should explain how this was found. Please provide some information on the initial experiments and what makes you confident that standard convolutional discriminators are sufficient.\n\n\nMethods\n\nThe role of the latent vector is explained in section 3.2. Please add a reference to Fig. 1 here. (p.6)\n\n\nThe authors describe their Discriminator very briefly. As they point out the effect of capsules in the Generator throughout the paper, it would be really interesting to know why they chose DCGAN Discriminators.\n\n\nDatasets\n\nThe description of how the synthetic dataset is created is very sparse. The methods used are not explained or cited. It is not clear how the SSM or the PBAM works.\n\n\nExperiments and Results\n\nThe authors compare several training datasets for the U-Net in the Quantitative Analysis. Some information is missing here:\n\n\nWhat was the size of the training datasets used? (Same number of images in all training datasets?)\n\n\nWas some kind of cross-validation performed? The test set of 20 images is rather small. 
How did you make sure that the images represent the data distribution correctly?\n\n\nTable 1: Please provide the meaning of the abbreviations in the caption.\n\n\nFigure 3: Please provide more information on what the red arrows are supposed to show/highlight.\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "oral_presentation": ["Consider for oral presentation"]}], "comment_id": ["SygUG_ViVE", "Hkxs2OVoNV", "HJe5I_EsVN", "ryx7SPEo44", "Hke75lhgH4"], "comment_cdate": [1549645838385, 1549646003091, 1549645906090, 1549645627179, 1550004362731], "comment_tcdate": [1549645838385, 1549646003091, 1549645906090, 1549645627179, 1550004362731], "comment_tmdate": [1555946005635, 1555946005375, 1555946000117, 1555945999906, 1555945962969], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper46/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper46/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper46/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper46/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper46/AnonReviewer1", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to AnonReviewer1", "comment": "We thank the reviewer for their particularly detailed and helpful feedback, and will address specific sections below. All references are for our updated paper at https://github.com/CherBass/CapsPix2Pix/blob/master/CapsPix2Pix_paper.pdf .\n\nStatistical Analysis:\nThe reviewer correctly identified that the statistical analysis performed was incorrect. As requested, we have rerun the experiments (10x) with different initialisations in order to calculate the statistics correctly across populations. We now demonstrate that overall the best model pretrained on CapsPix2Pix data performs significantly better than training only on the original data, and better than a model pretrained on pix2pix data. For details we refer the reviewer to section 5.2. Quantitative Analysis, and to updated Table 1 (Segmentation results), Table A1 (Per experiment breakdown) and Figure A4 (U-net test curves). In addition, we added many randomly picked examples of synthetic images generated from the same label - see Figure A6. To conclude, quantitative results from CapsPix2Pix and pix2pix are comparable, but CapsPix2Pix performs better qualitatively.\n\nPretraining:\nFine-tuning models has a long history in machine learning (specifically transfer learning). For neural networks, features learned from training on a similar dataset can help generalisation by providing an initialisation that is less subject to the distribution shift from training to test images. Empirically, our results indicate that training from scratch on real images is better than training from scratch on purely synthetic data; however, for the end goal of reaching the best performance, we demonstrate a small but significant improvement when pretraining on synthetic images. \n\nClearly, the margin and usefulness of this approach to data augmentation (in this case by pretraining) is likely to be greater when i) there is sufficient appearance data to train a conditional CapsPix2Pix network, and ii) the quality of segmentation fails because of a lack of sufficient geometric variation in the training dataset. 
In such situations, being able to increase samples of geometric variation and to be able to supply ground truth through SSMs would make a more significant difference. Additionally, by controlling geometric ground truth, we can systematically validate the performance of the trained network - even without using the synthetic data for task training - beyond the availability of real data. This could be used, for example, to test hypotheses on failure modes of the network.\n\nIn addition, we refer the reviewer to a prior work also using GAN synthetic images to pretrain a network, to improve classification performance for medical images: Frid-Adar, M., Klang, E., Amitai, M., Goldberger, J. and Greenspan, H., 2018. Synthetic data augmentation using GAN for improved liver lesion classification.\n\nCompute:\nWe have now added an additional section 7.1. Computational comparison, which compares parameters, activations and run times in training and testing. The memory footprint for activations is comparable, but training is ~5x slower and inference is ~13x slower for CapsPix2Pix.\n\nPlausibility:\nFor the microscopy dataset, all cases that we observed appear to be biologically plausible. The wider philosophical point raised by the reviewer is perhaps more applicable to situations where anatomy is less varied than we observe for the axon case.\nThe possibility of creating biologically unrealistic data would indeed exist if we were operating at a macroscopic scale. But one can certainly use adversarial training to train a discriminator with coarsely labelled data (e.g. this is a head MRI scan - no segmentation labels) to monitor any generative model to ensure that its images appear to agree with the distribution of data for all known instances of that data type. The datasets used for such a process would not require segmentation labels.\n\nTraining Image Size:\nWe decided to train U-net on crops of real or synthetic data as it significantly reduces the training time. Also, since we generate images of size 256x256 and not 512x512, we would have had to crop the real data for a comparable test. We note that we trained U-net with the same parameters, and created crops in the same way from all datasets to make sure that the results are comparable.\n\nOther:\nWe thank the reviewer for the revised format for Table 1, and have performed the other suggested edits as well.\n\nFinally, we would like to emphasise that the reviewer\u2019s condition for acceptance has been fully addressed (as is detailed above in response to comments): \u201cMy acceptance rating is conditional to performing proper statistical analysis or a relaxation of the claims\u201d. \n"}, {"title": "Response to AnonReviewer3", "comment": "We thank the reviewer for their helpful feedback, and will address specific sections below. All references are for our updated paper at https://github.com/CherBass/CapsPix2Pix/blob/master/CapsPix2Pix_paper.pdf .\n\nAbstract:\nWe have updated the abstract as per your suggestion. The first sentence now reads \u201cMachine learning for biomedical imaging often suffers from a lack of labelled training data.\u201d\n\nFeatures:\nThe features/activations are visualisations of how the networks process inputs. An additional figure with all pix2pix features has now been added to the appendix - see Figure A7. We note that we attempted to select the most varied features for Figure 2.\n\nAugmentation:\nWe perform standard augmentation techniques on real data in order to train the U-nets - see Section 4. 
Datasets for details of augmentation. See Table 1 (real data-AR) for results. We find that training on augmented real images leads to the best performance (if trained from scratch), but can be improved by pre-training on a synthetic dataset (CapsPix2Pix or pix2pix).\n\nInitial Experiments:\nWe added an additional section in the appendix, Section 7.3. Additional training experiments for CapsPix2Pix. During these experiments, we found that synthetic images were reasonable using \u03bb=0.1, but not as good as when using \u03bb=1. We also show our results with capsule discriminators, which did not work as well and took longer to train. See Figure A2 for synthetic image examples.\n\nDatasets:\nWe have now included more detail on the SSM and PBAM - please see Section 4.\n\nExperiments:\nThe same number of images was used in all experiments - 26,400. We added this to section 4 and Table 1. We do an 80%, 20% (train, validation) random split in the training set for training U-net (not cross-validation). Since we used the data to train the generative models as well, we made a separate test set that has been kept aside to represent the data distribution by selecting, to the best of our ability, examples with a high/low number of axons, different contrasts, noisy images etc. This ensures that the generative models have been trained on separate images, and have never seen the images from the test set. We have now released the real dataset, with our training and test splits: https://zenodo.org/record/2559237#.XF2Mr1X7RhE .\n\nOther:\nWe have addressed the text edits suggested by the reviewer."}, {"title": "Response to AnonReviewer2", "comment": "We thank the reviewer for their helpful feedback, and will address specific sections below. All references are for our updated paper at https://github.com/CherBass/CapsPix2Pix/blob/master/CapsPix2Pix_paper.pdf .\n\nDynamic Routing:\nAfter a careful examination of the code from LaLonde and Bagci (2018), we believe that their local dynamic routing is the same as ours, and have updated the relevant sections in our paper accordingly. We would like to note that there is a specific ambiguity in the LaLonde paper, which we have e-mailed the authors about, and which has now been clarified. \n\nAugmentation:\nWe already perform standard data augmentation techniques on the real data in order to train U-net. See section 4. Datasets for details of augmentation.\n\nDiscussion:\nThere are several advantages to using the convolutional capsule GAN over a non-capsule GAN. We list these below, and refer the reviewer to the relevant sections of the (updated) paper:\n- Smaller number of trainable parameters (7x fewer) - see new Appendix section 7.1. Computational comparison.\n- Significantly increased performance over real data/pix2pix when pre-training on CapsPix2Pix data. See Table 1, and section 5.2. Quantitative Analysis.\n- CapsPix2Pix has a latent space, and can therefore produce interpolation figures and more variable examples from the same images. See Figure 5, Table 1, Figure A6, Figure A4, and Section 5.2. Quantitative Analysis on training on a reduced number of unique labels. 
\n- CapsPix2Pix has more variable and less redundant features than pix2pix - see Figure 2, Figure A6 and Figure A7.\n- CapsPix2Pix is able to group similar features in the same capsule - see Figure A5.\nWe have expanded the discussion to include more detail on these improvements.\n\nStatistics:\nWe previously displayed a single run for each setting, but have now updated the table to include averages and standard deviations from 10x runs per setting. We perform statistics to compare results across runs.\n\nDiscriminator:\nWe updated section 2.1 in methods to clarify the discriminator inputs. The discriminator receives real pairs (i.e. the image and the label), as well as fake (synthetic) pairs of images, during training. "}, {"title": "Summary of changes to paper", "comment": "Summary of changes to paper:\nPlease refer to the updated paper at https://github.com/CherBass/CapsPix2Pix/blob/master/CapsPix2Pix_paper.pdf \n\nMain text:\n-Convolutional Capsules & Dynamic Routing (3.1): After a careful examination of the code (https://github.com/lalonderodney/SegCaps) from LaLonde and Bagci (2018), we believe that their local dynamic routing is the same as ours, and have updated the relevant sections in our paper accordingly. We would like to note that there is a specific ambiguity in the LaLonde paper, which we have e-mailed the authors about, and which has now been clarified. \n-Dataset (Section 4.) - more details have been added on how the PBAM is related to the shape model, and on the role of the PBAM plus the SSM as a baseline generative model.\n-Quantitative Analysis (Section 5.2) - updated to include additional experiments performed (x10 experiments for each of the lines shown in Table 1, totalling 140 new U-Net training runs) and revised statistics. We have found that the standard deviations of performance of re-trained U-Nets are remarkably low, giving a high degree of confidence in the results.\n-Conclusion and Discussion (Section 6.) - more detailed discussion of results.\n\nAppendices:\n-New appendix section on Computational comparison (Section 7.1). This section compares CapsPix2Pix and pix2pix in terms of trainable parameters, weights, activations and run-time in training and testing.\n-New appendix section on Quantitative Metrics for Evaluating Generative Models (Section 7.2). Contains additional discussions of the suitability of quantitative metrics used to evaluate GANs, and log likelihood estimations using Parzen windows.\n-New appendix section on Additional Training Experiments For CapsPix2Pix (Section 7.3). This section contains further details on training experiments we tried, including L1 lambda = 0.1, different discriminator networks (traditional and convolutional capsules), and using dropout as noise.\n\nNew figures:\n-A1: Kernel density estimation plot\n-A2: Additional training experiments\n-A7: Example of all 64 features of the last layer in pix2pix.\n\nUpdated figures:\n-5: We have added information in the caption about the red arrows.\n-Figures A3 and A4 have been reversed in order.\n-A3: ROC curve - we now plot single ROCs for all 10x experiments for each dataset\n-A4: The figure now contains test Dice scores with epochs, with the means + standard deviations across 10x experiments.\n-A6: Many more examples of synthetic images from the same label have been added. 
We note that the additional images were selected randomly, and not cherry-picked.\n\nUpdated tables:\n-1: Results table now displays averages + standard deviations across 10 experiments for each dataset.\n-A1: Table now contains per experiment values for selected datasets (same as before).\n"}, {"title": "The authors have addressed all my concerns in their rebuttal, therefore, I confirm the \"accept\" rating.", "comment": "The authors have done a very good job with their rebuttal and they have addressed all the concerns I mentioned in my review. I believe the current manuscript to be of much higher quality now. "}], "comment_replyto": ["SyeHbrB1mE", "Hyeg9SjEmV", "SJlhJBpdQN", "rJen0zC1lE", "SygUG_ViVE"], "comment_url": ["https://openreview.net/forum?id=rJen0zC1lE&noteId=SygUG_ViVE", "https://openreview.net/forum?id=rJen0zC1lE&noteId=Hkxs2OVoNV", "https://openreview.net/forum?id=rJen0zC1lE&noteId=HJe5I_EsVN", "https://openreview.net/forum?id=rJen0zC1lE&noteId=ryx7SPEo44", "https://openreview.net/forum?id=rJen0zC1lE&noteId=Hke75lhgH4"], "meta_review_cdate": 1551356579451, "meta_review_tcdate": 1551356579451, "meta_review_tmdate": 1551881975206, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "There is a consensus among the reviewers that this is a good and useful work, and I concur with this. This work seems to be the first capsule-based conditional image generation -- it uses a capsule network in a GAN generator. The authors report a better performance than Pix2Pix, with the number of model parameters reduced significantly (by factor 7). This can be quite useful when the training data is limited, as is typical in medical image segmentation problems. The paper is well written, with a good overview of the subject. Furthermore, the authors promised to release the dataset/code for reproducibility. The only negative comment is, perhaps, the limited technical novelty. Essentially, the work is a combination of the SegCaps architecture from [LaLonde and Bagci 2018] and Pix2Pix from [Isola et al. 2017]. Having said that, the paper is a nice practical contribution to MIDL that will definitely make an excellent poster. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=rJen0zC1lE&noteId=r1eiofUSLN"], "decision": "Accept"}