review_id,review,rating,decision
midl19_1_1,""" This paper applies dropout to different UNet-based architectures during training to tackle the problem of missing modalities at inference. The presented method was validated on the public BRATS dataset for multimodal glioma segmentation. + The paper is well motivated and clearly written; + The method is validated on one publicly available dataset; + The idea of this paper is straightforward; + Studied the combination of dropout and three different network architectures. - Innovation of the paper is relatively limited. The utilization of the dropout technique and the three network architectures are not new in dealing with missing modalities and have already been used in studies published in MICCAI and TMI; - The paper lacks a discussion of existing works, as well as a comparison with them. There are indeed some very good works addressing the issue of missing modalities; - The experimental validation is not comprehensive enough. Only the scenario of one missing modality was considered. The authors did not report the performance when more modalities are missing. Also, as mentioned in the discussion section, the information from one MR modality may not be entirely removed in the late fusion network, which could affect the results. - Although dropout increases the network robustness to missing modalities, the network performance on the full dataset decreases. - Since the ensemble and late fusion networks are trained for each modality separately, do they cost four times more training time than the single UNet network? """,2,0
midl19_1_2,"""This paper deals with the issue of missing MRI modalities in multimodal glioma segmentation. This issue is more commonly addressed by replacing the imaging modality with e.g. the average of the remaining images in the dataset or by synthetically generating these modalities (e.g. Bowles et al., International Workshop on Simulation and Synthesis in Medical Imaging, 2016). The main concerns lie in the originality/novelty of this work and the evaluation. For more details see below: - The concept of dropout has long been used to improve generalisation of models and to tackle sparse inputs. - It is unclear how the label fusion model works exactly. The Figure 2 diagrams could be improved to reflect what the models are actually doing. - I understand that the authors want to avoid data imputation of any kind, but it would be useful to see a comparison of the two approaches. If data imputation performs significantly worse, then it is worth the extra effort of generating synthetic images or computing the missing modality from the remaining data. - In terms of evaluation, results seem a bit unstable across folds, so it would be useful to show results for all folds of the cross-validation. I understand the time constraints, but it does not help the reader much to only see results from 2 folds. - The boxplots in Figure 3 only favour the ensemble approach for the FLAIR images. The advantage of using ensemble or late fusion is not clear in this case.""",2,0
midl19_1_3,"""This paper deals with brain tumor segmentation on MR images with missing modalities. The paper presents a comparative study of different U-Net based architectures to deal with missing data, comprising a standard U-Net, a U-Net with dropout in the input layer, an ensemble approach and a late fusion approach. The paper is well written, easy to follow and validated on a well-known dataset (BRATS).
The authors tackle a challenging task that is still an open problem for the MIC community. The methodology is fairly simple and seems to have a significant impact on the results. - My main concern with this work is the lack of mention of and comparison with really similar approaches in the literature, like that of pseudo-url. This paper tackles exactly the same problem and proposes a fairly similar strategy, but is not even mentioned in the manuscript. The authors should at least discuss the differences with this work. - The authors only validated the proposed approach for a single missing modality and for a binary segmentation scenario. Given that BRATS provides 4 modalities and multi-label annotations, why not validate with more missing modalities and in the context of multi-label segmentation? This would make a more solid validation of the proposed architectures. Moreover, it would be interesting to see how the proposed methods perform in the absence of multiple modalities. """,3,0
midl19_2_1,"""The paper is relatively concise and clearly written. It describes an interesting methodology for handling image datasets of the same imaging modality and target, but with different annotations. The methodology is based on adversarial training of deep neural networks for segmentation. While the application domain (segmentation of different targets in retinal fundus images) is not terribly relevant, as many of these applications are well-addressed, the wider context of the manuscript (datasets with disparate annotations) is relevant. The evaluation was not done properly or in a very transparent manner. For the vessel segmentation task, the authors state that ""SE, SP and ACC are reported as the maximum value over thresholds"". This is poor practice and leads to overoptimistic estimates of the performance of the method. Performance measures such as specificity, sensitivity and accuracy are highly dependent on the selection of the operating point. The selection of the optimal operating point should be part of the model design and ideally selected based on a validation set. For the tasks related to the IDRiD challenge, the authors compare their performance to other methods submitted for the evaluation. However, the challenge seems to be closed for new submissions, and I assume that the performance for these tasks was evaluated by the authors themselves, and not by the challenge organisers. In my opinion, this disqualifies direct comparison with the official leaderboard of the challenge, unless the authors release the complete code base that reproduces the results described in the paper. Furthermore, the official leaderboard features results submitted by one of the authors of the paper. The results for the optic disc segmentation are similar to those reported in the paper; however, all other results are significantly worse. Was this the same method as described in this paper? I""",2,1
midl19_2_2,"""- The authors present a deep learning method for fundus image analysis based on a fully convolutional neural network architecture trained with an adversarial loss. - The method allows detecting a series of relevant anatomical/pathological structures in fundus pictures (such as the retinal vessels, the optic disc, hemorrhages, microaneurysms and soft/hard exudates). This is important when processing these images, where anatomical and pathological structures usually share similar visual properties and lead to false positive detections (e.g. red lesions and vessels, or bright lesions and the optic disc).
- The adversarial loss allows leveraging complementary data sets that do not have all the regions of interest segmented. Thus, it is not necessary to have all the classes annotated in all the images but to have the labels at least in some of them. - The contribution is original in the sense that complementing data sets is a really challenging task, difficult to address with currently available solutions. The strategy proposed to tackle this issue is not novel, as adversarial losses have been used before for image segmentation. However, it is the first time that it is applied for complementing data sets, and it has some interesting modifications that certainly ensure novelty in the proposal. - The paper is well written and organized, with minor details to address in this matter (see CONS). - The clear contribution of the article is, in my opinion, the ability to exploit complementary information from different data sets. Taking this into account, I would suggest that the authors incorporate at least one paragraph in Related works (Section 2) describing the existing approaches for doing that. - It is not clear from the explanation in Section 3.1 how the authors deal with the differences in resolution between the DRIVE and IDRID data sets. It would be interesting to know that aspect, as it is crucial to allow the network to learn to ""transfer"" its own ability for detecting a new region from one data set to another. - The segmentation architecture does not use batch normalization. Is there a reason for not using it? - The vessel segmentation performance is evaluated on the DRIVE data set. Despite the fact that this set has been the standard for evaluating blood vessel segmentation algorithms since 2004, the resolution of the images is extremely different from the current ones. There are other existing data sets such as HRF (pseudo-url), CHASEDB1 (pseudo-url) and DR HAGIS (pseudo-url) with higher resolution images that are more representative of current imaging devices. I would suggest incorporating results on at least one of these data sets to better understand the behavior of the algorithm on these images. - The area under the ROC curve is not a proper metric for evaluating a vessel segmentation algorithm due to the class imbalance between the TP and TN classes (the vessels vs. background ratio is around 12% in fundus pictures). I would suggest including the F1-score and the area under the Precision/Recall curve instead, which have already been used in other studies (see [1] and [2], for example, or Orlando et al. 2017 in the submitted draft). - The method in [2] should be included in the comparison of vessel segmentation algorithms. To the best of my knowledge, it has the highest performance on the DRIVE data set compared to several other techniques. It would also be interesting to analyze the differences in a qualitative way, as in Fig. 3 (b). The authors of [2] provided a website with all the results on the DRIVE database (pseudo-url), so their segmentations could be taken from there. - The results for vessel segmentation in IDRID images do not look as accurate as those in the DRIVE data set. However, since IDRID does not have vessel annotations, it is not possible to quantify the performance there. It would be interesting to simulate such an experiment by taking an additional data set with vessel annotations (e.g., some of those that I suggested before, HRF, CHASEDB1 or DR HAGIS) and evaluating the performance there, without using any of their images for training.
That would be equivalent to assuming that the new data set(s) do not contain the annotations, and would allow quantifying the performance there. Since the HRF data set contains images from normal, glaucomatous and diabetic retinopathy patients, I would suggest using that one. A similar experiment can be made using other data sets with red/bright lesions (e.g. e-ophtha, pseudo-url) or optic disc annotations (e.g. REFUGE database, pseudo-url). I think this is a key experiment, really necessary to validate if the method is performing well or not. I would certainly accept the paper if this experiment were included and the results were convincing. - It is not clear if the values for the existing methods in Table 2 correspond to the winning teams of the IDRID challenge. Please clarify that point in the text. - The abstract should be improved. The first 10 lines contain too much wording for a statement that should be much easier to explain. I would suggest reorganizing these first lines by following something like: (i) Despite the fact that there are several available data sets of fundus pictures, none of them contains labels for all the structures of interest for retinal image analysis, either anatomical or pathological. (ii) Learning to leverage the information of complementary data sets is a challenging task. (iii) Explanation of the method... [1] Zhao, Yitian, et al. ""Automated Vessel Segmentation Using Infinite Perimeter Active Contour Model with Hybrid Region Information with Application to Retinal Images."" IEEE Trans. Med. Imaging 34.9 (2015): 1797-1807. [2] Maninis, Kevis-Kokitsi, et al. ""Deep retinal image understanding."" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2016.""",3,1
midl19_2_3,"""The paper presents a deep learning based approach to mitigate the problem of weakly labelled data in fundus datasets. The authors combine labels from different datasets and perform segmentation, which is further discriminated as manually labeled vs. automatic segmentation. In addition, they propose to add another discriminator which provides a score for the presence of the different classes in the datasets. 1) The paper gives a strong motivation towards bridging the gap between the sparse availability of annotations for different class types and the requirement of full annotation of the different classes, or at least the classes that are inherently present in that image/dataset 2) The paper proposes a novel way of guaranteeing semantic segmentation and learning from multiple datasets 3) Interesting approach 4) Well-written paper with a few typos to fix 1) Reference to abstract: The authors write that no semantic segmentation is used. However, I can see that they have done semantic segmentation, and the discriminators are used only to identify the truthfulness of the available manual vs. segmented maps. Thus, the claim in the abstract that an adversarial learning approach is used over a semantic segmentation needs to be justified. Does it mean that you are using a generative adversarial network where your generator is a semantic segmentation module which is rectified via the discriminator's decision? If so, please state this to clarify. 2) Page 2, second paragraph, One of the major: segmentation of all parts in a fundus image may not be feasible, especially when proliferative exudates are present. So, please correct it to the different classes that are inherently available.
3) Please remove words that define functions, e.g., ChannelShuffler(), LeakyRelu(); check for all of those in the entire paper. 4) In Table 2, what might be the reason for hard exudates not performing well, especially when we look at the results for the other class types? Is it due to a lack of ground truth labels in your data? There are other available datasets for these; maybe including those can improve this result? """,4,1
midl19_3_1,"""The paper describes a very nice approach for dealing with rotation invariant texture/feature detection in convolutional neural networks. From a classical point of view, rotation invariant convolution layers can be obtained with filters that are isotropic. This is a rather limiting viewpoint and the current submission nicely extends the class of invariant CNN layers by defining a locally rotation invariant filter by a roto-translation lifting convolution (see work on group convolution networks) directly followed by a max-pooling over rotations. Such layers are based on non-isotropic filters and can be efficiently implemented by defining convolution kernels in polar coordinates, relying on spherical harmonics. The theory is well explained and contains interesting details that are also valuable from a practical point of view. Although not validated on very large datasets (and with minimal architecture design), the experiments are carefully set up and convincingly demonstrate the potential of the proposed way of dealing with rotationally invariant feature/texture descriptors. My recommendation is to accept the submission. Overall I really enjoyed reading the paper. In this section I provide some minor comments and suggestions. Small fixes: Typo on page 2: ""first convolution layer, what exploits"" -> ""first convolution layer, that/which exploits"". I found the introduction of 2.4 confusing in the transition from h_i being a 1D function (from R to R) to its voxelized version (from R^3 to R) with the mention of the isotropy constraint. I don't see this as a constraint, as by definition h_i(||x||) is isotropic. What I think is important here is that the radial profile extends all the way to the corners of the 3D kernel, and this then sets the number of trainable parameters in h_i (with unit voxel distance spacing). Perhaps this can be clarified? Figure 2 (which nicely illustrates the above) raises the following question: the last weights in the h_i vector only affect the corners of the 3D kernel and therefore some voxel features only appear at diagonal rotations; doesn't this affect the invariance you aim for? The shape of the kernel is not isotropic and therefore it is not truly rotation invariant. True rotation invariance could be achieved by limiting ||x|| <= (c-1)/2 (so the corners are always zero). I really appreciated the short comment on the angular Nyquist frequency; these things are good to realize when working with the actual code. Page 6 regarding padding, this is zero padding? Minor comments: [Just a remark for a possible interesting extension (intro of section 2)] Regarding the action of SO(3) on R^3 and S^2: for an example of efficient spherical harmonics implementations of SO(3) acting on axially symmetric functions on S^2 also see citation [2] below. After Eq.(1), you could possibly identify I*f(R) as a lifting group-convolution (see e.g. Cohen et al. or [5] below), followed by sub-group pooling (max over rotations). Page 5 after Eq. 6.
A nice property of expanding each filter in the same basis is that you can pre-filter the input with your set of basis functions separately and then combine the results with the corresponding coefficients to create all feature maps. The second to last paragraph of section 4 sounds contradictory and could be rewritten: ""the improvement of SH convolution is limited"" yet ""a significant increase in accuracy"". Finally, I would like to mention some very related work, both on steerable filters and on local rotation invariance, which could be addressed in the submission. For work on (optimal) 3D steerable filters in texture analysis: See e.g. [1] (and references therein) for a recent overview and toolkit for 3D steerable image filtering. See e.g. [2] for (optimal) steerable filter construction/fitting for axially symmetric texture detection in 3D medical image data. In [2] additional axial symmetry is exploited to further reduce the number of SH coefficients and it makes rotation over your redundant. This essentially boils down to relying on Fourier transforms/irreducible representations on the sphere S^2 (<> quotient group SO(3)/SO(2)), see e.g. the book by Chirikjian and Kyatkin [3] and [4]. Following up on your discussion paragraph on page 8 regarding LRI in relation to the work in the group-CNN context by Cohen et al.: In addition to the already cited work by Weiler et al. 2017, the works described in [5] and [6] are very related to the current proposal in two ways. (1) In these papers the construction of local invariance is also studied by following lifting convolutions (creating feature maps in a higher-dimensional position-rotation space) by max-pooling over rotations. In these works this has been done with additional group convolution layers in between the lifting and rotation-pooling layers (creating local rotation invariance over the net receptive field size). (2) In [5] and Weiler et al. 2017 the authors describe a similar behavior as observed in Table 1: a higher angular resolution improves performance. A main advantage of your method is the very high efficiency in dealing with trainable parameters; however, it should be noted that in neither [5] nor the method of Weiler et al. does the number of trainable parameters increase with angular resolution when only a lifting layer directly followed by a rotation pooling is considered. The work by Mallat et al. is very much concerned with both local and global rotation invariances in image data (see e.g. the Ph.D. thesis by L. Sifre [7]). [1] Skibbe, H., and Reisert, M. ""Spherical tensor algebra: a toolkit for 3D image processing."" Journal of Mathematical Imaging and Vision 58.3 (2017): 349-381. [2] Janssen, M., et al. ""Design and processing of invertible orientation scores of 3D images."" Journal of Mathematical Imaging and Vision 60.9 (2018): 1427-1458. [3] Kyatkin, A., and Chirikjian, G. Engineering applications of noncommutative harmonic analysis: with emphasis on rotation and motion groups. CRC Press, 2000. [4] Duits, R., et al. ""Fourier Transform on the Homogeneous Space of 3D Positions and Orientations for Exact Solutions to Linear Parabolic and (Hypo-) Elliptic PDEs."" arXiv preprint arXiv:1811.00363 (2018). [5] Bekkers and Lafarge et al. Roto-Translation Covariant Convolutional Networks for Medical Image Analysis. In: MICCAI 2018. [6] Zhou, Yanzhao, et al. ""Oriented response networks."" Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017. [7] Sifre, Laurent, and Stéphane Mallat.
Rigid-motion scattering for image classification. Ph.D. thesis, 2014. """,4,1
midl19_3_2,"""Summary: In this work, a CNN architecture that has both local and global rotation invariances is introduced. Furthering the recent advancements in group CNNs for rotational invariance, the proposed work uses steerable filters based on spherical harmonics to obtain efficient sampling of 3D rotations. The model is evaluated on synthetic data and on lung nodule detection tasks. The performance is shown to be superior with a substantial reduction in the number of parameters when compared to CNNs. Pros: - Use of steerable filters to avoid approximating filter rotations and to introduce local rotation invariance is a solid contribution - Experiments clearly show the importance of introducing local rotation invariance for both the synthetic data and the lung nodule detection task. The 3D CNN model outperforms in a couple of instances, but with almost two orders of magnitude more parameters. - The paper is very well written; the discussion section is very insightful. Figure 1 is a great visual abstract of the work. Minor comments: - There appears to be a substantial increase in accuracy with increasing M for the synthetic data. A similar trend is also observed for the lung nodule classification data reported in Table 2. However, M = 96 is not reported here. A comment on why this is the case would be useful. - The literature survey could include one more closely related G-CNN work that also uses max pooling over different rotations, in what the authors call the projection layer [1]. [1] Bekkers, Erik J., et al. ""Roto-translation covariant convolutional networks for medical image analysis."" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2018. pseudo-url""",4,1
midl19_3_3,"""The authors tried locally rotation-invariant feature extraction for 3D-texture classification. This locally rotation-invariant feature extraction has an essential role in the classification of soft and non-rigid medical volumetric data. In the proposed method, the authors designed locally rotation-invariant feature extraction by 3D steerable filter convolution and max pooling. This filter convolution is integrated into a convolutional neural network (CNN). Using pre-designed kernels, that is, steerable filters, the number of CNN parameters that must be learned in the training procedure can be dramatically reduced. So, this is computationally efficient even as a data-driven method. The proposed method sounds technically novel. In experiments, the authors evaluated their proposed method using phantom data and real clinical data, comparing it with a usual 3D CNN architecture. In the evaluation with phantom data, the authors clarified the relation among the number of filters, the number of directions and the classification results. In the evaluation with real clinical data, they demonstrated the superiority of the proposed method. The manuscript is well structured, and the experimental results are convincing. Just my comment: a figure showing which filters have a strong response to an example input 3D texture would be welcome for visual interpretation. """,4,1
midl19_4_1,"""The authors investigate a neural architecture search (NAS) framework for U-Net optimization in the setting of 3D medical image segmentation.
To be more specific, what is subject to the automatic tuning is the precise arrangement of operations within the network ""cells"" (the sequence of pooling / concat / conv / skip connections within a layer); the high-level design of the encoder-decoder path is fixed (in line with that of a standard U-Net). | Strengths: - I find this line of research interesting. It has also objectively received a lot of recent interest in the ML community, with the potential to design new architectures that are both simple and more expressive than what has been proposed so far. - The related work seems to be adequately cited in parts 1 and 2. - The application to 3D image segmentation presents serious challenges in itself, and it indeed seems to have good novelty, although I am not an expert. The method is sound; in particular, the use of the Gumbel-softmax trick is a good answer to the challenge of scaling to large architectures. | Weaknesses: - Key claims are made without basis. The interpretation of the experimental validation is distorted to suit the claims. - It is unclear how to interpret the (slightly) higher score of ""SCNAS (transfer)"" in Table 1; and whether it actually makes a case for SCNAS as stated in the paper. - There is a lot of repetition/verbosity in the first 6 pages. The proposed approach is reintroduced 3 times in similar terms. The respective focus of introduction vs. related work is unclear. The contributions of the paper are in turn less clear. Whether the contribution is specifically in the application to 3D medical image segmentation or also w.r.t. the methodology itself could be clearer. | Main comment: From the abstract to the experimental section, to the conclusion, the authors make a repeated claim w.r.t. performance that is contradicted by the experimental validation: - Abstract: ""On the 3D medical image segmentation tasks with a benchmark dataset, an automatically designed 3D U-Net by the proposed NAS framework outperforms the previous human-designed 3D U-Net as well as the randomly designed 3D U-Net"" - Introduction: ""Experimental results [...] show that in comparison to the previous human-designed 3D U-Net, the network obtained by the proposed scalable NAS leads to better performances"" - Experiments/Results: ""Table 1 shows that the SCNAS produced better architectures than the (human-designed) 3D U-ResNet as well as the randomly designed 3D U-Net in terms of the overall performances on all three tasks."" - Conclusion: ""Empirical evaluation demonstrates that the automatically optimized network via the proposed NAS outperforms the manually designed 3D U-Net."" However, in Table 1, all the scores are within (plus or minus) a fraction of a Dice point or a single Dice point (SCNAS transfer excluded, see comments below). The paper would be stronger (i) without the contradiction between claims and results; and (ii) if the emphasis were shifted away from the (lack of) experimental evidence for improved performance of the NAS, to a more thorough empirical analysis of the auto-ML mechanism, with an open discussion. | Miscellaneous: - ""It is noted that unlike these architecture hyperparameter optimizations, we use the complete NAS to obtain the entire topology of the network architecture in this work."" Is that the case? My understanding is that the high-level architecture (U-Net) is fixed. The distinction is not a minor one in terms of outcome. The proposed approach optimizes over cell architectures, where a cell is e.g. an encoding unit in the encoder path.
Cells of the same nature are further restricted to the same topology. Such optimization appears to yield rather intricate cell designs (cf. appendices) but does not significantly improve performance (Table 1). - It is unclear how to interpret the (slightly) higher score of ""SCNAS (transfer)"" in Table 1. Are the baselines trained on 20 (heart) / 32 (prostate) images, vs. ""SCNAS (transfer)"" being trained on 400+ images and fine-tuned on the relevant datasets? If so, what numbers are obtained for the baselines when (pre)training and fine-tuning in a similar fashion? Right now, as SCNAS performs similarly to the baselines, with only the transferred architecture earning a couple of DSC points, a natural interpretation is that the experiments make a (limited) case for pretraining on a bigger dataset (rather than for autoML). - The paper could elaborate some more on Algorithm 2 (the sampling of two operations for computational reasons) and what the concrete effects of this compromise are.""",2,0
midl19_4_2,"""The paper makes the case that existing NAS approaches, especially those that replace discrete variables with continuous approximations, are insufficient for very large architectures such as those used in 3D segmentation, and the authors select the widely used U-Net as their exemplary case. This makes a compelling case for the proposed method, which consists of drawing two entries from the probability vector of possible realizations (operations in this case) at a given iteration, clipping the remaining probabilities to zero and renormalizing, with the benefit of reducing the number of activations from N (8 in this work) to 2 for each operator that is part of the optimization process. While no quantitative analysis of computational savings is offered, it is clear that they are considerable. While the method certainly has merit, the experimental results do not do it justice. The segmentation performance obtained with SCNAS is below the state of the art, and even compared to their own baseline (3D U-ResNet) the improvement is not convincing. The former could in part be explained by the missing data augmentation (the authors address this), while the latter could be the simple fact that the baselines can already cover a large enough function space to find a good approximation of the target function (given the limited amount of training data), so that the architecture search has negligible influence. I list my concerns in more detail below. Major: 1. The baseline models chosen for this study do not represent the state of the art. The reader can only be convinced of the merit of this paper if it can outperform the state of the art, which in the case of the heart and prostate datasets used here is nnUNet, the winning contribution to the Medical Segmentation Decathlon (Isensee et al., 2018). While Isensee et al. used an ensemble of different models for their final submission, their paper also reports five-fold cross-validation results with their 3D UNet (without ensembling) on these datasets. The 3D U-Net results reported by Isensee et al. are better than any of the results reported in this manuscript. In addition, the 3D U-Net used by Isensee et al. is very basic with no major architectural variations. As such, it could have been an ideal baseline candidate to demonstrate if SCNAS can really advance the state of the art. It should further be pointed out that, in general, the 3D UNet used in Isensee et al.
is rather similar to the UNets used throughout this work, with the sole big difference being the number of pooling operations ([3,3,3] in the case of this manuscript vs [5,5,5] (heart, brain tumor) and [2,5,5] (prostate) in Isensee et al., see Table 1). The statement ""We conjecture that Isensee et al. (2018) might be benefit from complicated pre-/post-procedures and thus obtained slightly better performances than the SCNAS."" does not sufficiently explain the difference in performance. In fact, the preprocessing used by Isensee et al. for MRI images is basically identical to the procedure used in this paper. 2. In Equations 2 and 3, the authors state that the optimization of the edge weights is done on the validation set. It is unclear, however, what set this actually refers to. In the experiments, the authors report results of a five-fold cross-validation. In order to draw any kind of meaningful conclusion from these cross-validation results, it is important to clarify whether the validation sets of the splits were used for this optimization or whether the training set was again split into two sets. If the performance of the models is estimated via cross-validation then the validation split cannot be used for any kind of optimization! 3. ""SCNAS produces a more generalizable neural architecture for the similar tasks of 3D MRI image segmentation."" There is no evidence in the paper that would support this statement. The authors must also transfer their baseline models and see if the performance of the transferred baseline models is better or worse than that of SCNAS. 4. The authors state that SCNAS performs significantly better than the other approaches. Even if we ignore the previous concern, I would only buy that claim for the peripheral prostate zone. In all other categories the standard deviations (?, see also minor-8) are too large to support this statement without an actual test. Minor: The authors do not give sufficient details about the Random Search result. How was this network architecture obtained? How many different configurations were drawn randomly and how was the best model selected? Training large 3D segmentation networks is very computationally expensive. It is clear that the authors must have had access to a fairly large GPU cluster. It would be interesting to have more specific information about how many GPUs were used (in total and per model) and how long one of the models needed to train. How did the authors handle the different number of input channels when transferring their network from brain tumour (4 channels) to heart (1 channel) and prostate (2 channels)? The authors state that they compare SCNAS against a 3D U-ResNet and an attention U-Net, but no results are reported for the attention U-Net. ""the input images were first resized for all voxel spacings to be physically equal using the given meta-data."" It is unclear what spacing the data was resampled to. ""Note that unlike Isensee et al. (2018), any heuristic pre-/post-processing techniques including data augmentation, network-cascade, and prediction-ensemble were not adopted in this evaluation to solely examine the effects by the use of NAS in designing the network architecture."": While it makes sense to drop ensembling and cascaded architectures for this work, the argumentation that the lack of data augmentation better isolated the effect of NAS is lackluster. In fact, including data augmentation would likely have improved the results somewhat across the board and thus made the results more convincing.
Figure 2 a): in brain tumor segmentation it is more common to show contrast-enhanced T1 sequences alongside T2 or FLAIR to allow the reader to see all parts of the tumor properly. If only one of the sequences is shown, then this should probably be the contrast-enhanced T1, because enhancing tumor is not visible in the sequence presented here. What type of error is reported in Table 1? """,2,0
midl19_4_3,"""This paper presented a neural architecture search to optimize the structure of each layer of a 3D U-net. Besides the methodology, the authors also provided a stochastic sampling algorithm to find the optimal parameters. Through benchmarking, the proposed method showed superior results and a compact output model compared to other methods. The experimental results are not strong; the link in the paper leads to the competition website: pseudo-url. Most of the results posted there were better than the results in the paper. 1. Over-referencing. In my opinion, citing a paper once, where the paper is first mentioned, is enough. 2. I suggest adding more explanation of the data and a picture example in the experiment section. 3. I would like to see the run-time difference for each method as well. 4. It would be good to remind the reader of the evaluation metric again in Table 1 """,3,0
midl19_5_1,"""- This paper proposes the first end-to-end solution to the problem of semantic segmentation of teeth from intra-oral 3D scans. This is an interesting application of deep learning to 3D point clouds, as opposed to the more traditionally encountered image-based segmentation. - The paper is clearly written. In particular, I found the work to be strongly motivated in the introduction by a clear presentation of the specific properties and challenges of the addressed application. - The proposed method and evaluation seem sound. - The two contributions introduced in the paper are well validated, both against external baselines and individually through an ablation study. The evaluation confirms the positive impact of each contribution on the segmentation performance. - One of the two contributions, namely the addition of a discriminative network during training to structure the prediction, is insufficiently discussed in comparison to the previous works. The idea of using adversarial training which differentiates realistic from unrealistic label configurations was already introduced in several works, starting (I believe) with Luc et al. [a]. Multiple additional references for medical applications are for example available in the introduction and related work sections of the paper (Ghafoorian et al., 2018). However, although (Ghafoorian et al., 2018) is mentioned in the Methods section, none of these works are mentioned in the introduction or the related work, which I found to be misleading regarding the novelty of this contribution. - The adversarial network is based on simple features extracted from the predicted labels (mean and variance of 3D voxel positions for each class). While this is technically indeed a novelty, I find this aspect to go a bit against the end-to-end claim, since it amounts to handcrafting features in the label space before applying a multilayer perceptron. This contrasts for example with [a], where a network is trained end-to-end to learn what a realistic label prediction should look like, possibly discovering high-level criteria related for example to object shapes, etc.
Even if using these features might be perfectly sound for this application, I find that this is not motivated enough in comparison to the existing end-to-end approaches (which also goes back to the previous point), and rather in contradiction with the narrative of the paper. The limitations of handcrafted features are indeed regularly pointed out in Related work. - The lack of a universal coordinate system is mentioned as a challenge for this clinical application, for example in Discussion and conclusion. However, the features in the label space used to discriminate realistic from unrealistic labels include the means of 3D coordinates. Does this not possibly break the invariance with respect to the choice of coordinate system? I wonder whether using pairwise distances between classes, i.e. pairwise differences of means instead of the means directly, would be more suitable to guarantee an invariance to the choice of coordinates directly in the feature representation. [a] Luc et al., Semantic Segmentation using Adversarial Networks, NIPS Workshop 2016 Minor comments: - If the extracted statistical features are sufficient, it might be worth training a simpler and more interpretable adversarial classifier than a multilayer perceptron, maybe simply a logistic regression? This would give a better understanding of how the notion of a realistic segmentation output is encoded. - I wonder if the discriminator could not be used at prediction time as well, in addition to structuring the training. Could it for example provide a confidence measure on the output? - In general, I believe it is better to avoid using an arXiv reference when the work was published and peer-reviewed, as is for example the case with the PointCNN paper (Li et al., 2018).""",3,1
midl19_5_2,"""Summary This paper proposes a new method for segmenting point clouds of intra-oral scans (IOS). This method contains three components: 1. Applying convolutional neural networks (CNNs) to teeth semantic segmentation; 2. Proposing a non-uniform resampling strategy for better spatial learning; 3. Combining the training loss with an auxiliary adversarial loss. The proposed method achieves very good performance. 1. The idea of this paper is not novel; the segmentation network is based on PointCNN and the discriminator network is identical to part of PointNet. However, the authors utilize CNNs for teeth semantic segmentation, which is important. 2. The comparisons with different methods and settings are straightforward. 3. Applying the non-uniform resampling strategy will generate different segmentation masks according to the chosen fovea point. How the final complete segmentation result is obtained should be specified. Minor: ----------- 1. Figure 1 shows the model framework, but this figure is hard to follow; please explain it properly in the paper. 2. In equation (6), the authors apply an adaptive weight between the segmentation loss and the adversarial loss. However, the network is not shared between the segmentation and discriminative networks. The weight may not be necessary here. No ablation experiment is done to identify how that affects performance.""",3,1
midl19_5_3,"""The paper applies point set classification to intra-oral scans of teeth. The proposed method includes a non-uniform resampling mechanism, and also a point-wise classification loss with an adversarial loss. The paper is basically well written. The challenges are discussed, and the contributions are summarized. The whole paper is easy to follow. The proposed method outperformed three state-of-the-art algorithms.
Minor comments: the two losses are not shown very well in Figure 1""",3,1
midl19_6_1,"""This paper presents an interesting work on pathological image synthesis by developing adversarial learning models. The paper is well-written and organized for readers to follow. Experimental results show that the proposed model performs better than other baseline methods: conditional GAN and CycleGAN. While the authors show that the proposed method has better synthesized image quality than other baseline algorithms, the synthesized image quality is substantially worse than the original image (as shown in Figure 4). Many of the healthy parts of the image either contain artifacts or have over-smoothed structures. In particular, artifacts that look like a checkerboard pattern appear in the pathological areas. Based on the experimental results, I am wondering whether this problem could be solved just by segmentation itself, since reconstruction of the pathological areas is not considered. It is trivial to adjust the contrast of the segmented pathology to better match the intensity distribution of healthiness. In this case, the image information in the healthy parts would be perfectly preserved. """,3,1
midl19_6_2,"""A well-written paper proposing an adversarial network for pseudo-healthy image synthesis by explicitly separating the healthy and pathology domains. The method has been compared to two baseline methods (CycleGAN and conditional GAN) on two publicly available datasets, with superior results evaluated using their newly proposed metrics for 'healthiness' (measuring the size of the predicted pathology) and for 'identity' (measuring the similarity of non-pathological regions). This is a very interesting approach to an important problem, particularly given the lack of the desired image pairs (images with and without pathology of the same patient) and the potentially wide range of applications. - Implicitly, their approach does a detection and segmentation of pathology. Why not evaluate their method on how well this part has been done, and not only on healthiness and identity? - How were both cycles (Figure 3) trained? Does the following order matter? - It is unclear if the method was trained and evaluated using the images as 3D volumes or 2D slices. I suppose as 2D; otherwise the number of patients seems too limited - I suggest having an expert reader or radiologist evaluate how well the healthy synthesis has succeeded """,3,1
midl19_6_3,"""The paper proposes a deep learning framework for pseudo-healthy synthesis based on the factorization of pathological and anatomical information. The network is trained following two different settings, namely the paired and the unpaired. To enable quantitative performance evaluation, the healthiness and the identity metrics are proposed. The method has been validated on two different datasets and its performance has been compared to two baselines, the conditional GAN and the CycleGAN. This is an interesting work which fits well within the scope of the conference. The paper is well written and easy to follow. The contributions of the paper have been clearly defined. The presented work is of sufficient technical novelty and seems technically and theoretically sound. The references are adequate. The figures could be improved as explained below in detail. Suggestions for revision 1. In the 4th paragraph in Section 3.3, it is mentioned that a pathology mask for a real healthy image cannot be defined. It is not clear why a black mask cannot be used in this case. 2.
In the experimental results, why has the ISLES dataset been divided so unevenly into the training (22 volumes) and testing sets (6 volumes)? 3. Figures 2 and 4 are quite small, making it difficult to distinguish the differences between the subfigures. The size of these figures should be increased. """,4,1
midl19_7_1,"""1. The paper is well written, with a clear method description and experimental settings. 2. Well-organised comparison studies. 3. The proposed method is novel. 1. The major problem I found with the experiment is that the undersampling pattern is not very realistic. For example, the 2D Gaussian undersampling. 2. Some important and relevant studies are neglected and should be added to the references: Schlemper J. et al. (2018) Stochastic Deep Compressive Sensing for the Reconstruction of Diffusion Tensor Cardiac MRI. In: Medical Image Computing and Computer Assisted Intervention MICCAI 2018. (pp 295-303). Springer, Cham. Yu, Simiao, et al. ""Deep de-aliasing for fast compressive sensing MRI."" arXiv preprint arXiv:1705.07137 (2017). 3. The authors should stick to either 'undersampling rate' or 'sampling rate'; mixing them creates confusion. """,3,1
midl19_7_2,"""1. A new hybrid cascade model for deep-learning-based magnetic resonance (MR) imaging reconstruction techniques. 2. The architecture improvements were statistically significant (Wilcoxon signed-rank test, p < 0.05). 3. Visual assessment of the reconstructed images confirms that our method outputs images similar to the fully sampled reconstruction reference. There is no detail on how pSNR was evaluated. What is the PSNR of the reference image? It would be better to present magnified images in Figures 3 and 5. """,3,1
midl19_7_3,"""- Well written, well referenced, very clearly written - Comprehensive comparison between many state-of-the-art algorithms - The paper contains many fruitful insights: firstly, it demonstrates -- over three different architectures -- that the unrolled approach seems to outperform a single multi-scale architecture such as U-net. Secondly, the image domain reconstruction should be done before k-space reconstruction. Thirdly, the authors make the effort of understanding the unrolled architecture. - Lack of novelty: the only difference from KIKI-net is the fact that it is now trained end-to-end and the order is improved. However, this is interesting because in the KIKI-net paper, they showed that the proposed order is better than IKIK-net. This may be attributed to end-to-end training? Please add further details here. - The paper is missing details about the number of parameters of each network. Because of this, I cannot make a fair comparison between the methods. In particular, how many convolution layers are used in each subnet & how many cascades for (a) KIKI-net and (b) Deep Cascade? For example, Hybrid net has 5 conv. layers per subnet and 6 cascades. I wonder if the parameters are matched? Please report them and redo the experiment by matching them. - Please report the SSIM value, the number of parameters and the speed of each method.""",3,1
midl19_8_1,"""This study analyzes a novel network architecture for the segmentation of objects with a lower dimension than the input data. In medical images this corresponds to hyperplanes in 3D images or lines from 2D images. Analysis of the network was performed in two use cases: segmentation of retinal layers in OCT B-scans and segmentation of A-scans containing geographic atrophy from 2D B-scans. The quality, clarity, and originality of this work are good.
The paper is very well-written and very clear. In particular, Figure 1 is excellent in simultaneously describing three different network architectures in a concise manner; very well done! Figure 2 is also exceptionally done. The motivation for the work is clear and the results are described in two very relevant applications. I have not seen a similar network architecture and believe it to be unique. The network architecture is also very relevant to the task at hand with few arbitrary design decisions. It is unclear why the number of iterations was fixed rather than being optimized on the validation set, but I commend including all data necessary to replicate the study. Overall, the network architecture is novel and the experimental design is mostly well done. Nice paper. There are some claims made in the paper that are questionable. These claims do not affect the acceptance of the paper, but should be addressed prior to final publishing. ""Neither classification networks nor segmentation networks are suitable for these tasks [tasks being segmentation of 1D lines in 2D image]"". I understand the point that this narrative is trying to deliver, and believe that the narrative should be in there, but as written the text is untrue. In the narrow definition of classification and segmentation networks provided, this would be true. However, the definition provided misses a wide range of published networks that do not fit the criteria and are able to segment 1D lines from 2D data. In the paper, base model 1 is able to segment 1D lines from 2D images and is very similar to AlexNet. Examples of this have been published in the context of retinal layer segmentation as well: ""Shah et al. Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images"". In the Results section comparing algorithms for GA segmentation, there is a comparison of Dice scores across OCT volumes in the dataset. A single Dice score was calculated per OCT volume. The proposed model had a mean and std dev of 0.49 +/- 0.21 while base model 1 had a mean and std dev of 0.46 +/- 0.22. The sample size, in number of volumes, is 20. The next sentence indicates that the proposed model is significantly better with a p-value < 0.01. I do not see how this can be true. I am not a statistics expert, but a comparison of two algorithms on the same OCT volumes could use a paired Student's t-test. Since I do not have the paired data, an unpaired t-test gives a p-value of 0.66. It is possible that each B-scan, or even each A-scan, was used to calculate statistical significance, but that would not be the correct approach either, as the data within a single volume would be highly correlated. More information is required on how statistical significance was calculated.""",4,1
midl19_8_2,"""1. The method introduces the idea of funneling subnetworks, which is a novel way to deal with the dense segmentation problem 2. Experimental results are well validated 1. The description of the subnetworks could have been more elaborate, as I did not fully grasp their advantages and architectural nuances""",3,1
midl19_8_3,"""The paper proposes a novel CNN architecture for dense segmentation in reduced dimensions with applications to OCT images. The architecture contains a series of downsampling layers with residual connections in an encoder-decoder fashion, with a funneling subnetwork providing global and local context for dense segmentation.
The authors illustrate the use of the proposed architecture on segmentation of geographic atrophy and retinal layers. The results from the experiments indicate a significant improvement over the baseline methods. Pros: 1. Novel CNN architecture for boundary extraction in OCT images. 2. The method is evaluated for segmentation (GA) and regression (retinal layers) tasks. 3. The results from the experiment indicate a 3% and 21% (Dice) performance improvement over the two baseline approaches. The method also shows a similar superior performance for the other application (GA segmentation). 4. Figure 4 is helpful as it shows how the baseline methods fail to segment retinal layers around the drusen. Minor comments: 1. The dataset for the layer segmentation application contains 115 Normal and 269 AMD samples. The training set includes only 5 Normal samples vs. 159 AMD samples. What is the reason for choosing the training set with this class imbalance? 2. The model, despite being trained with very few Normal samples, has better performance (MSE) on Normal samples than on AMD samples. This performance difference could be highlighted in the discussion section. 3. It would be good if the authors could clarify the number of parameters for the proposed model and the two different baselines.""",3,1
midl19_9_1,"""1. The major contribution of this paper is that it evaluates a binary classification network for MS progression on a large clinical data set, which is quite useful for treatment planning. A minor contribution is that it incorporates uncertainty estimation for the classification result in order to remove highly uncertain results. 1. The methodological contribution is a bit limited. It uses a classical convolutional network for binary classification. 2. It seems that all the patients also have their EDSS scores. It would be interesting if some discussion/exploration could be added to show how regression compares to classification on this task, as that may give more information for the network to learn.""",2,1
midl19_9_2,""" - The paper discusses an issue of clinical importance and tackles a difficult problem in medical imaging - The dataset is large - The lesion masks are provided by consensus of two raters - Accuracy is reasonable given the difficulty of the task and the short time-frame - The paper ignores the extant literature specifically dedicated to MS clinical prognosis using machine learning on medical images. For example, [Wottschel2015] use radiomics-style lesion features from T2 and PD fed to an SVM to predict CIS->CDMS conversion at 1 and 3 years (N=74). [Yoo2016] uses a CNN on lesion masks to predict CDMS conversion at 2 years (N=140). These or other specifically relevant papers should be mentioned and, ideally, one existing method should be tested to provide a competitive baseline. - The evaluation setup on the dataset described in section 4.1 gives cause for concern due to two major potential confounders. - The first potential issue (as in all multi-site studies) is that the outcome of interest (progression) may be confounded by the site. Even with harmonized acquisition protocols and sequences (this should be mentioned!), signal differences can persist and easily be picked up, especially by high-capacity models such as the one proposed. The authors should provide a table or statistical test on the equivalence of progression across sites. - The second potential issue is that visits seem to be considered independent of patients (e.g. first study '624 inputs' but '312 patients').
How is this handled in the cross-validation setup? Can subjects cross folds? If they can, can the authors motivate why this is not an issue? - The authors selected only patients that completed the trial. This raises the risk that the method presented will suffer from attrition bias, in particular if subject drop-out is related to the disease process - maybe people that start from a higher EDSS and/or have more rapid progression drop out of the study more quickly? This should be acknowledged as a limitation since it will impair real-world applicability of the method. - Figure 1 does a reasonable job describing the architecture of the model, but a few important details are unclear, in particular - How many convolutional kernels are used in each Conv3D of each ""parallel pathway""? - Related, how are the different modalities handled, presumably the same number of convolutional kernels for each modality? - Related, in Table 2, does the 'lesion masks' version have the same number of parameters as the 'proposed 3D CNN'? Presumably not, but rounded off? This is confusing. - What is meant by 'an architecture analogous to VGG'? Presumably this means a series of conv-conv blocks with various channel depths, but it would be useful if the authors could briefly summarize the main differences with the original VGG19. TYPOS - 3.1: 'difference dimensions' -> 'different dimensions' - 4.2: 'receiver operation characteristic' -> 'receiver operating characteristic' REFERENCES Wottschel, V., Alexander, D., Kwok, P., et al.: Predicting outcome in clinically isolated syndrome using machine learning. NeuroImage: Clin. 7, 281-287 (2015) Yoo et al., Deep Learning of Brain Lesion Patterns for Predicting Future Disease Activity in Patients with Early Symptoms of Multiple Sclerosis, LABELS 2016/DLMIA 201""",3,1
midl19_9_3,"""The study proposes a deep learning framework for MS progression prediction based on baseline MR images. The study was based on a rigorous dataset and a stringent design. Rigorous cross-validation was conducted. By including the lesion maps, the prediction could be further improved. One of the pros is the study's novelty as the first prognosis study with a large sample of longitudinal MS data. It is not a simple application of DL algorithms; the DL framework they implemented was carefully designed (e.g., the usage of skip connections, and convolution operators of multiple depths are used to pass features from different levels with different spatial resolutions to the final output). In addition, at the end of the paper, the variability of the model was explicitly estimated and discussed. The paper is well written and well organized. The background and the technical development of the same research group were found to be sound. Perhaps due to the space limitation of the conference paper, some important details are not mentioned. For example, how information from multimodal imaging is integrated is not described. Also, which imaging modalities are more important for the prognosis task was not investigated or mentioned. Since the inter-rater reliability is not good for EDSS, did the same rater rate the patients at different time points? It is obvious that removing the most inconsistent samples could improve the result. The reviewer thinks that the model uncertainty estimation based on Monte Carlo simulation with dropout is reasonable, but the following comparisons (as shown in Fig. 3) do not mean model uncertainty.
Instead, to run a better statistical analysis, the model should be trained with labels scrambled for multiple times to create a ""null hypothesis"", then, comparisons of model convergence and model accuracy distribution under the null hypothesis to the real application result could be carried out to make statistical influence. Minor: How to draw the ROC curves was not mentioned, based on varying uncertainty thresholds? All four modalities are registered to each other? Did the authors rule out center effect (different scanners, imaging centers, clinical trials, etc.)? Did the authors consider missing data? How to deal with different follow-up schedules for the two data sets (1 year with 2 scans, 2 years with 3 scans)? Future suggestions: 1. Unified work for segmentation and prediction. As segmentation could be another important task that DL can accomplish, the segmentation network should be integrated and combined with the current prediction network in a multiple task strategy. In this sense, manually labeled images will not be needed anymore. And the segmentation could potentially help prediction. Other clinical factors could also affect progression prediction, which could include some non-MRI or even non-lesion-related factors. 2. Attention network for identification of the important imaging features. The contribution of lesion label and why it helps the prognosis are not discussed very well. Although it could be included in an independent study in the future, the most contributing brain regions should be identified and discussed (maybe using attention neural network?) to further help clinicians. 3. Include more scans from different follow-up times to consider the baseline differences among samples. Also please include the baseline clinical scores in the model. Although the most intuitive idea of using this data set is to use baseline MRI data to predict changes in clinical scores. However, it's more proper to use changes in MRI (i.e., 4D image appearance changing trajectories) to do more accurate and reasonable prediction. 4. In the future, the current model should be compared with a model predicting the normalized delta(clinical scores), i.e., changes/baseline_level, as the final output. Different subjects have different baselines (Table 1), if only using baseline MRI scans to build the relationship to recovery, the factor of baseline performance could be a non-negligible influencing factor""",4,1 midl19_10_1,"""The work proposes a deep normative modeling framework based on Neural Processes to model variation of neuroimaging measures across individuals, with the goal of deriving biomarkers of psychiatric disorders. The proposal is an alternative to the use of Gaussian processes, that are very computationally expensive and rely on parametric kernels. - The work seems very interesting and with potential for clinical data. - The network architecture is explained well and related to the mathematical modelling (section 2.3). - Different diseases are tested for the novelty detection. - Results seem to be much better when dealing with ADHD data, and they even localize the region responsible for it in Figure 4. It would be interesting to see the results of figure 4 with the sMT-GPTR method too. - The data and the codes are publicly available. - The paper could benefit from a clearer writing and lighter explanations. - Sections 2.1 and 2.2 are difficult to follow. - One of the advantages mentioned for the use of NPs over GPs is the computational tractability. 
How do they compare in terms of computational cost? How does this cost change when you change the M? - How the data analysis is done is not clear to me. The number of subjects used for the training is very low compared to the amount you have. Why not using more? This should be clarified in the paper. """,3,1 midl19_10_2,"""0) Summary The manuscript proposes to use neural processes for normative modeling of fMRI data in order to perform classification of healthy, schizophrenia, attention deficit hyperactivity disorder and bipolar disorder. 1) Quality The manuscript presents a conceptually simple idea as it proposes to replace a forward regression model by a different one. 2) Clarity The paper is mostly well written and the notation is clear. 3) Originality The use of neural processes for normative modeling of fMRI data seems to have not been explored before. 4) Significance The model could potentially be a valuable tool. 5) Reproducibility As the experimental evaluation is based on an public dataset available from OpenNEURO and there is some code available for the neural process, the results should be reproducible in principle. 1) Quality One strange thing is the numbers reported in Figure 3 as compared to Figure 1 in [1]. Why does the sMT-GPTR(n,m) class perform much better here? Values of AUC=0.8 on SCHZ, AUC=0.7 for ADHD and AUC=0.85 for sMT-GPTR(5,3) are way better than the values reported in the manuscript. Also sMT-GPTR(5,3) does better than sMT-GPTR(10,5) there, which is not the case in the manuscript. Please explain! The relative merit of using a neural process versus a Gaussian process (GP) remains only partly explored as the claim that GPs scale poorly is not substantiated by runtime experiments. In particular given prior work on scaling up GPs e.g. [2]. The manuscript does not discuss whether or not the proposed methodology reveals interesting/relevant spatial pattern. 2) Clarity There are a couple of typos. - Section 2.2: ""In our application, in order to"" - Section 2.2: ""parametrized on an encoder"" - Section 2.2: ""In fact, in this setting"" - Section 2.3, Normative modeling: ""let ...$ to represent"" - Figure 2, caption: ""3d-covolution"" -> twice - Section 5: ""dropout technique in order to"" What is an ""amortized variational inference regime"" as mentioned in Section 2.2? The description of the stochastic process formalism in Section 2.2 seems like an overkill here. In particular, the statement about the number of subjects N going to infinity needs to be translated into the setting of N=250 where the actual model operates. 3) Originality 4) Significance 5) Reproducibility The code for the comparison of Figure 3 i.e. columns 1+2 is not made public. [1] Kia et al., Scalable Multi-Task Gaussian Process Tensor Regression for Normative Modeling of Structured Variation in Neuroimaging Data, pseudo-url [2] Kia et al., Normative Modeling of Neuroimaging Data using Scalable Multi-Task Gaussian Processes, pseudo-url""",2,1 midl19_10_3,""" - The paper introduces some interesting ideas of combining deep models (often using in medical image analysis nowadays) with GPs in an application area that can use more focus. - The clinical applicability and joint modelling between different domains (even just tackling this area is a plus) is nice - The mathematical development, although very dense (see below), is mostly well written and well defined - I think that addressing the comments below (and perhaps others reviews'), the paper can be presented at a future conference. 
- the figures and architecture are fairly clear, and most of the prose text is well written. - The paper essentially builds on two frameworks - Normative models (the authors' previous work) and Neural Processes (2018). The authors do not really spend time giving an overview of these models, while neither of them are widely known that they should assume the reader is familiar with them. It makes for a very difficult read. I tried to learn more by looking at the previous papers for these two frameworks for purposes of this review, but these should be summarized in the current paper - The mathematical development is dense and perhaps unnecessarily generalized (e.g. 2.2 development) -- while generality is certainly nice, in terms of a fit with MIDL it feels like some more intuition could have been developed alongside the technical development, and a more clear focus on a finite neuroimaging dataset. - It seems like the authors build on NPs that seem to have overlap with latent variable models (VAEs, etc), but there are not described or cited. It seems like an entire field is omitted, at least from discussion. - Building on the previous models, there are several works on deep latent models with GP priors that seem relevant, eg. Tran ICLR 2016, Casale NeurIPS 2018 (earlier on arxiv), and several others. Some of these are recent, so they shouldn't preclude the authors presenting this work, but some citations and discussion should be included, especially because the technical contribution seems important to the authors (rather than extreme/novel results) - There are several concepts introduced without clear explanation. Novelty detection (in this setting), GEVD, etc are all introduced and important in the results but not really well described. - The results are unfortunately not sufficiently convincing, with comparable behaviour to the authors' previous work. This is okay if we gain some new insight through a new method, but due to the aspects mentioned above, this is hard to obtain in this particular paper. """,2,1 midl19_11_1,"""The paper is of high quality, clarity, and originality. I have not seen a combination of activation maps, with a statistical shape prior and unets for segmentation before and I think it is a smart idea. The experiments highly support this idea. I'm not familiar with the task at hand so I can not judge on this. For the medical vision community, it seems to be significant for me since it helps to deal with few data points for complex problems. I'm not sure if I got it right that M-2DUnet is basically the same network but trained with manual delineations instead of the weakly-supervised approach with activation maps. In case that is correct it is actually nice and impressive that the Wilcoxon test does not show a significant difference for that case and I think it is worth to stress that more - even in the abstract and the conclusion! The paper is slightly above the page limit, this is mainly due to figures and table and I think it is adequate. The paper does not state to release source code and the data is not publicly available. Therefore it might be hard to reproduce. One experiment that could be interesting would be to compare the approach to using the activation maps directly instead of the tumor location - that way we would get an insight if the shape prior actually helps. It has some minor typos and should be proofread or put through grammarly (e.g. methods has been, software( Varian, Figure7) The figures have artifacts from a spellchecker. 
""",4,1 midl19_11_2,"""The main contribution of the paper: The authors attempt to create automatic training label based on class activation map, which showed comparable results to manual-labeled training dataset. Specifically, the authors use CNN-based classification + Grad-CAM (along with ASM + Dense CRF) to automatically generate training data as the input of a 2D U-net. (Post-processing includes: ASM and Grad-CAM). Its an interesting attempt to alleviate the requirement manual labeling (although the result might only be dataset-specific). Plus, it shows a feasible application of combined network design in the field of of ophthalmic MR imaging. The article is clearly written and structured. The presented figures well reflect the proposed framework as well as demonstrated the results with selected representative samples. - The activation map showed in Figure 2s pipeline clearly demonstrated that the tumor should be the region that differentiates groups. However, in Figure 4, the entire sclera region is also activated, which is significantly different from Figure 2. It is unclear where such difference comes from, since that should not be different in the sclera region between the normal and diseased eye, and need to be explained more clearly. - Generally, CAM can only be used to identify the differential region locations roughly, rather than delineating the segmentation even accurately (i.e. unsupervised CNN-based segmentation). The representing figure shown in Figure 4/7 indicate that the accuracy results depends heavily on the tissue contrast. That might indicate that the performance of the proposed methods maybe specific to the recruited dataset (e.g. more cases like shown in Fig 7 right-most column). - In Figure 4 (c), its hard to see the improvement of applying ASM over to the dense CRF. It would be better to show a more representative figure or quantitative analysis the Dice when comparing them with the manual segmentation. - In Figure 6: The author compared different segmentation approaches, and essentially showing that 2D network is better than 3D network, and 3D-CNN is better than 3D-Unet. I agree with the author that this should mainly be due to the small training set. The data augmentation with elastic deformation will help to alleviate the problem, which is used in both the original 2D U-net paper (by Ronneberger et al. 2015) and the 3D U-net (iek et al. 2016. However, based on the method part, the authors doesnt seems to use this data augmentation method. - Figure 7 mid-column, the author showed cases that their method (Grad-CAM+ASM+denseCRF) can correct the manual segmentation. o I suppose they mean the automatic result is better than the manual segmentation, as theyre not training their method based on the manual segmentation. o This indicate there are errors in the manual segmentation. Then its questionable to use such manual label as ground truth. In that case, multiple manual segmentation with inter-rater variability analysis might be needed to construct and validate the ground truth. 
Some minor issues that need proofreading: - The figures seem to have been screen captured without cleaning up some software-based marks o Figures 2/3 have red dotted lines indicating Word correction marks o Figure 7, the selection dots representing the selection window should be removed - Page 5: in section Refinement: o the word ""and"" should be put before k(fi,fj) o paragraph after equation (3): shouldn't use j as the subscript for wj, as j has already been used in the equation to represent the second pixel. - Page 6: in section Unet: o effectually => effectively - Figure 6, an additional n is placed before Grad-CAM-2DUnet """,3,1 midl19_11_3,"""The paper proposes a method for eye tumor segmentation from MRI. The proposed approach leverages CNN-based architectures to create activation maps that are subsequently refined through the use of ASM and CRF in order to create training data that are subsequently used as input to a UNET architecture. The paper includes an extended state-of-the-art review. A weakly supervised approach is implemented. The dataset is limited. The authors should quantify the effect of the ASM and CRF steps on the final segmentation outcome. The false positive and true positive fractions should also be reported. Reporting the Hausdorff distance should also be considered. Limited discussion/conclusion section. The authors should extend the section to compare the proposed methodology with ones existing in the literature and further analyze the technical innovations that make this approach superior to already proposed ones. """,3,1 midl19_12_1,"""The authors present an unsupervised approach that combines variational auto-encoders (VAE), to get a representation of the input image, with a convolutional neural network discriminator that evaluates the obtained representations. Through this approach, the authors aim to address the following points: - Obtaining an accurate latent space of all the images (embeddings) that allows for detecting the mechanisms-of-action (MOA) of the chemical used to treat cells. - A model that provides an accurate reconstruction of each image. The main difference between the work of Larsen et al 2016 and this one is the definition of the loss functions: instead of integrating the loss function of a GAN into the VAE loss function, as done by Larsen et al, 2016, the loss of the VAE and the loss of the discriminator are combined in a way that they complement each other. With the proposed approach, the authors have obtained a good balance between both tasks: accurate detection of MOA (even if it does not outperform the results of Ando et al, 2017) and realistic reconstruction of images which are more accurate than the ones obtained when using GANs. Besides, the manuscript provides a precise and very up-to-date review of state-of-the-art methods, which are taken into account in the proposed methodology. Most of the text is written in a clear way: the problem to solve is well illustrated, each of the procedures followed in this work is either described in detail or cited properly, and the results are presented concisely. While both L_VAE and L_Di are defined, I miss a final complete expression in which it can be seen how both functions are combined. ""We conjecture that the reconstruction term in L_VAE should not be discarded and that the additional losses L_Di can be all used to compensate the limited reconstruction ability induced by L_VAE, as opposed to the formulation of Larsen et al."" The method of Larsen et al.
2016 was evaluated in a different field (reconstruction of human faces), so, to prove the statement made by the authors, in future work I would recommend that they compare both methods. Section 3.4: The number of convolutional layers used in each part of the network is not clear. Please specify it either in the text or in Figure 1 so the method can be reproduced. For instance, it is written ""All three CNNs have four convolution layers""; did you mean ""All the CNNs""? Or, on the contrary, are you referring to the encoder, the decoder and the discriminator? How many filters of size 5x5 do you have in each convolutional layer? Do you use zero padding? ""Images were randomly shuffled and presented to experts to assess whether each cell was real or synthetic."" Are the biologists told whether the cells are treated or not, and which treatment was used in each case? Would this affect their classification? References. Please review all the references and make sure that all of them are correctly written: - Claire McQuin, Allen Goodman, Vasiliy Chernyshev, Lee Kamentsky, Beth A Cimini, Kyle W Karhohs, Minh Doan, Liya Ding, Susanne M Rafelski, Derek Thirstrup, and Others. --> ... et al. instead of and Others, - arXiv and bioRxiv references: specify the version you are referring to and always indicate the name of the journal or site (in most cases it is not given) """,3,1 midl19_12_2,"""The paper is tackling an interesting problem and I also share the belief that imaging cell variations holds potential to learn representations which are predictive of function. The motivation of this work is to have a method which is able to visually represent cells with high fidelity while also having a latent representation which captures information regarding the impact of being treated with a compound. In section 4.3.2 the paper discusses studying the difference in reconstructions by the AE vs VAE. This is very interesting. These differences should be studied more, as they could provide insight into what is different about the models and what image features are captured given what the models are designed to capture. The method does not perform better given the literature on learning unsupervised representations that are predictive of the compounds used. The main baseline in this paper is ""the best reported GAN model"" and not the non-GAN methods by Singh and Ando which are SOTA for this task. This is potentially misleading. Given the motivation it is not clear why a new method VAE+ is proposed without sufficient evaluation of existing work. The most needed baseline is ALI (pseudo-url) which uses an adversarial loss to learn a latent space with a Gaussian prior. Also, InfoGAN (pseudo-url) is another baseline to try and report results on. Also, the results from the previous GAN method are not compared in Table 1. It is important to put these numbers side by side given the same evaluation. Also, the evaluation does not report variance. The evaluation should include a randomized train/valid/test split selection together with random model initializations. Given the current evaluation, there is no guarantee that the VAE+ model improvements are significantly better. There is no way to compute a p-value. If the VAE+ model offers a significant improvement, a better venue would be ICLR/NeurIPS/ICML with evaluations on multiple datasets to confirm that the method works.
""",2,1 midl19_12_3,"""1) proposed a VAE-based method to learn representations of cell images for cell profiling with adversarial similarity constraint and progressive training procedure, the proposed method explains more biological phenotype variations and achieved better performance compared to current methods based on generative models in the downstream task of classification; 2) modified the loss function of the original VAEGAN and applied adversarial loss at multiple layers of the discriminator for more realistic reconstruction results, the idea of progressive training is novel. 1) the authors compared the proposed method with AE and VAE, but VAEGAN, cited as Larsen et al. (2016) in the paper, is also a related method and should be compared with; 2) VAE was also evaluated in the work of cytoGAN, which achieved 49% NSC, VAE in this paper achieved 82.5% NSC, are network architectures, experimental settings etc different in this paper?""",3,1 midl19_13_1,"""The authors aimed to investigate the effect of incorporating attention modules to various CNN architectures for automatically grading knee radiographs based on the OA severity. Supervised training using clinically accepted KL-grade as a ground truth is combined with unsupervised attention module training proposed by Mader2018 to achieve the goal. Related work and attention module structure are explained in great detail; however, the experiments and results require better explanation and further refinement. pros: 1- Automatically grading the knee OA severity based on KL-grade will reduce the work load of radiologists and it could potentially enable automated OA progression measurements in clinics. 2- It was shown that attention modules can be inserted into various locations of the CNN architecture. 1- The study proposes to use attention modules to remove the need for localization of the knee joints prior to classification. It is mentioned that the need for knee joint localization step affects the quantification accuracy negatively and adds further complexity to the training process. Even though the attention modules remove the necessity for knee joint localization (proposed by the authors), it still adds further complexity to the modelling and training process compared to previous approaches (multi-loss, concatenation of features, where to locate attention modules and so on). Moreover, the results presented using attention modules are performing worse than the CNNs without attention modules ( ~ 6% lower accuracy in Table 2 vs Table 3, and as mentioned by the authors in the Conclusions.). This reduction in the accuracy needs to be properly investigated and the reasons should be identified in detail. These are the fundamental issues with this paper which needs to be addressed in detail. Some suggestions/required improvements: 1.a. We need to know if the attention module improves the accuracy or not. This could be done by comparing CNNs without attention modules (which accepts either the full knee image or localized knee joint images) and with attention modules in a systematic way. In the current manuscript, this improvement, if any, cannot be distinguished from other factors. 1.b. The authors should provide results for the Resnet-50 and VGG-16 architectures without attention modules. This information is missing in the current Tables 2 and 3. Because of this reason, the readers cannot be clear if the improvement over Antony et al.s models are due to attention modules or changes in the CNN architecture. 
2- It is not clear if the data used in calculating the Kappa from 150 subjects from OAI dataset is in the test set of the original split or not. If these images were used for training or validation, this raises a question on the validity of the results presented in Table 3. 3- Further details are required for Section 3.3. I expect that the size of the fully connected (FC) layer after channel-wise concatenation will have an effect on the training. The size of the FC was not defined and its effect on the accuracy was not experimented. In addition, it was mentioned that several multi-branch combinations are tested in multi-loss training without giving details. It is not clear if this was achieved by some sort of a grid search or using a few empirical combinations. Please add required details to improve our understanding of the effect of attention branch locations and their corresponding weights to the loss function. 4- Dataset generation needs corrections/explanations: 4.a. It is mentioned that the OAI dataset has 4,476 participants at the baseline, but it has 4,796 subjects. 4.b. Training/validation/testing set are generated from images. Are there any specific reason for the authors not to generate these sets based on subjects? Is it possible that data splitting from images could add a bias to the results presented? 5- The manuscript has several typos, please fix them. For example: page 1: bony --> bone, page 5: focuse --> focus/focused, page 9: Table ?? --> Table 3.""",3,1 midl19_13_2,"""The paper is technically sound and propose an interesting approach to fuse two otherwise separate steps --localization and classification/regression -- that are necessary of knee OA severity assessment. Although the paper is too long, and it could have been shortened in some parts and sections, the authors included a lot of material to better position their findings (appendix etc). The structure of the paper, and especially the state of the art analysis is very good. The approach has some novelty to it. It uses multiple losses that are combined with weights chosen manually according to the rationale that deeper layers learn faster and overfit more. Using attention is not per-se new and the authors position their paper nicely with respect to previous approaches, but attention modules in this context have the potential of simplifying training and inference as well as improve results. The results that are presented are well related with state of the art results and are convincing. Some comparisons with other approaches have in fact been shown and crucial aspects of the algorithms and method explored. The result section is well structured and I have liked that the authors showed the performances of att0 att1 and att2 separately to give an idea of the behavior of these predictions heads. The results seem interesting and I agree with the authors when they say that this research brings a valuable contribution to the community. The paper seems not to respect the conference format which dictates a maximum number of pages that is smaller than the number of pages of this submission. The method of Tiulpin et al 2018 which uses a siamese network could have been explained better, what does it mean that they use symmetry of x-ray knee images. There are repetitions and long sentences that take up a lot of space without conveying anything strictly useful. Both introduction and experimental sections can be shortened without changing much of the meaning. The network architecture is not very clear. 
I would like to see a schematic representation of the network which in this moment seems to have prediction branches due to the presence of attention mechanism at different points in the network. A schematic example of this would clarify much of what actually happens in this method. Figure 2 clarifies something but it would be nice to see where are the prediction layers placed together with respective losses. In other words, the authors need to structure their method section, prioritize things they want to explain, give a panoramic view on their approach and then zoom in to the details about how they define their losses etc. Figure 1 is confusing because the image gets rotated and N (number of channel) shifts place with another axis. Would be better to keep it consistent. The results are not state of the art, although the method is much more difficult to implement and train due to the presence of the early fusion or multi losses (that require manually picked weights). """,3,1 midl19_13_3,"""1. Work proposes using attention modules after various layers in a CNN to predict the severity of the knee osteoarthiritis (OA). 2. The paper's text is well-written. Although the 'attention module' by itself is not a novel contribution (convolutions followed by activation has existed in literature before [1]), the combination of attention blocks from multiple resolutions using a multi-loss paradigm is novel. 3. A test of the module's adaptability to various architectures is interesting [1] Oktay et al. 'Attention U-Net:Learning Where to Look for the Pancreas'. In: MIDL 2018 1. Conclusions look very heuristic and the authors do not try to explain them. Ex: In table2 (Early fusion), why does att2 not feature in ResNet and VGG, but features in Anthony et al.s' version? Based on the architecture details, this might have something to do with the resolutions attended by these layers. 2. Results and Table 2 are presented in an unclear fashion and a redesign is strongly suggested. Ex: in pg. 7, text below fig. 3: Text says ""Best performance achieved with attention branches att0 and att1..."". However, Multi-Loss section in table 2 lists the performance numbers separately. Again, text in pg. 8 states ""... the VGG-16 attention branch att0, achieved the best classification performance..."". Is there no optimal combination of attention branches in Multi-loss? 3. Captions of figures and tables provide little information. Similarly, the take away from the captions of loss curves and activation maps in the appendices is not obvious.""",2,1 midl19_14_1,"""* Bone lesion image generation using VAE based generator trained by adversarial learning with cycle consistency. * Promising results by the data augmentation using the proposed method in lesion classification * Reasonable approach using transfer learning for femur and tibia cases. * Expert assessment or any quantitative analysis is required to evaluate the visual quality of the synthetic images, generated by the proposed method. * Lack of description of the generator model structure. * Lack of comparison with recent approaches in medical image generation * It is not clear how to choose the parameters, n, i and j, for blending. * It is not clear that the generator can synthesize various patterns of bone lesion including structural changes. Residual connection between encoder and decoder may help to generate more photo-realistic images, but the lesion patterns can be less diverse. 
""",3,1 midl19_14_2,"""This is a well-written, clear paper that uses cycle-GANs for image augmentation of bone lesion pathology. The experiments are thorough and convincing that their proposed approach improves upon the baseline. In their approach, they train a patch-augmenting tool that adds bone lesions to normal patches. These patches are blended back into their image of origin using alpha-blending, to ensure that changes to gross image features such as contrast are not noticeable. Then, a fraction of these images (the ones that score as the most 'hard-positive') are selected to augment the training set. This can be done on a bone-wise basis (e.g. for the humerus, tibia) or learned from other bones in the case where it is impossible to train a bone-specific translation model (femur). The best scores were obtained using the humerus translation model for inference and pseudo-labelling with the humerus baseline model, which shows that their approach was amenable to transfer learning. This has the potential to dramatically increase the effective size of the training set and hints at ways to train on conditions that are rarely seen. The paper overall has a high level of experimental conduct and is persuasive. In particular, I found the 'hard-positive' mining surprisingly effective and it was well-demonstrated to help. While the paper is well-explained in general, the precise application of the tool is not really clarified in the writing. Is it for building a bone-lesion classifier to be used in a general setting? If it is, it's not clear how training with this augmentation would interact when testing on a dataset that included a range of other pathologies or whether it would still improve on the baseline. I would also like to see some discussion of the merits of the proposed method for other applications. VAEs are known to produce blurry images, which may (or may not) make them suitable for x-ray problems and not others. It would also have been nice to see slightly more comparison with other augmentation or curriculum learning methods: the baseline is fairly crude (balanced sampling) and it's not clear if any more standard image augmentation techniques were used in the baseline evaluation.""",4,1 midl19_14_3,"""The paper proposes to solve data-imbalance problem which is one of the most fundamental problems in medical image analysis. It demonstrates good results with seemingly easy-to-reproduce method. Patch-based synthesizing following with blending to make a whole-image is also a creative approach suitable for the problem and data described. Some key definitions are not very clear - What is ""translation""? It seems to be a key concept of the paper, though it relies on references in other field for the readers to fully understand. Table 1 shows datasets for ""classification"" and ""translation"" task, but it's not quite clear what each are, what are the findings from it. - It would be more beneficial to describe and study the details of data augmentation using synthetic images. -- What were the initial data distribution and how were the synthetic images created to make the dataset balanced? -- In what case does it work well and which case doesn't? - Transfer learning part is too short, not clear to understand how it's performed to help solve the problem.""",3,1 midl19_15_1,"""To investigate whether a conditional mapping can be learned by a generative adversarial network to map CTP inputs to generated MR DWI that more clearly delineates hyperintense regions due to ischemic stroke. 
To perform image-to-image translation from multi-modal CT perfusion maps to diffusion weighted MR outputs. To make use of generated MR data inputs to perform ischemic stroke lesion segmentation. There is no detail on qualitative visual comparison of the generated MR to the ground truth. The authors should compare segmentation results between CTP with original MRI and CTP with CGAN MRI. The gain from using CGAN MRI looks marginal, so an ablation study would be helpful. """,2,0 midl19_15_2,"""This work proposes an image-to-image translation approach for improving lesion segmentation, in the scenario when time and cost limitations allow for acquisition of only CT perfusion images. Overall, the article is well-motivated and clearly written. 1. This work applies a known technique (image-to-image translation using paired training data) to a new problem (CT perfusion to MRI for lesion segmentation), but the experiments demonstrate only marginal improvement, unfortunately. Thus, despite being a promising start, I believe that this work is not ready for publication at this stage. 2. As the mean values of all metrics show only marginal improvements and the qualitative results in Fig. 3 show some samples with much better results for the FCN-CGAN, it would seem that there are other cases for which segmentation with the FCN performs much better. Is this the case? If so, it would make sense to show some examples of this type and mention this as a limitation. Also, in this case, the sentence 'The results show that, in general, the FCN-CGAN model results in predictions that cover more of the ischemic core region...' might be misleading. 3. A benchmark experiment, where CT perfusion and real MR images are used for segmentation, should be added. Minor: 4. As the methods have been compared with several metrics, a discussion about how the different metrics compare with one another might be suitable. 5. The related work section requires some restructuring, in my opinion. Perhaps the authors could consider a higher level of abstraction such as 'image-to-image translation for downstream tasks', 'image-to-image translation for data augmentation', etc. Also, details about some of the mentioned works that are perhaps irrelevant for the proposed work (e.g. 'Heavier weighting of the L1 loss around the border...') could be skipped. 6. A suggestion to the authors: perhaps optimizing the MR image generation directly for good segmentation might lead to improved segmentation results? 7. Do the authors mean to number Sec 4.4 and Sec 4.5 as Sec 4.3.1 and Sec 4.3.2 respectively?""",2,0 midl19_15_3,"""The paper describes a method employing conditional generative adversarial networks to aid stroke lesion segmentation on CT perfusion images. The paper is well-written and clearly structured. The clinical application is well-motivated. The major problem is the presented results. With 94 pairs of data from 63 subjects, the statistical significance of the claimed improvement in Table 1 seems questionable. For instance, a difference of 0.06 in Hausdorff Distance, which is known for having high variance, is unlikely to be significant given a reported standard deviation of around 20. This continues with the other metrics, which suggests that the visually superior results in Figure 3 can be highly selective and therefore misleading. A few minor comments include: 1) fairly limited technical contribution to improve the results, e.g.
why 3D network was not adopted and tested while 2D formulation maybe efficient but loses 3D ""convolutional constraint""; 2) no effort has been made to network adaptation, e.g. hyper-parameters from other unrelated applications, to the application - which itself may not be a problem and may be problematic within cross-validation. However, given the presented results, this became relevant and needs a better experiment strategy for future work.""",2,0 midl19_16_1,"""The paper presents an approach to aid interpretation of pathology images coming from confocal microscopes (CM images). The clinical value of CM images has been highlighted in previous work, but although effective towards the goal of detecting the presence of cancer, these images are hard to interpret by humans. The authors propose to use a cycle-GAN to shift the distribution of CM images towards more standard H&E images which are easier to interpret. They present an architecture making use of two network, a de-noise/de-speckle network (trained independently on one of the two types of CM images used in this work) followed by a generative network (cycle gan). The general organization of the paper is sound This paper tackles a problem that is relevant to the whole medical community. It has the potential to improve pathology and cancer diagnosis by making it simpler and quicker The results of this work look visually convincing. Both the de-speckle network and the GAN appear to deliver very good results, at least at first glance. The quantitative results delivered by the de-speckling images, which seem to be computed using simulated realization of random speckle noise, look also convincing. I agree with the authors statement in the end of the paper where they say they could train both GAN and de-speckle network end to end. I think this joint training might result in even better outcomes. The study has potential and could have interesting applications in clinical settings. One issue, from a purely organizational standpoint, is the fact that information about previous work is either omitted or scattered around the text. I understand that the available space is limited and therefore it's difficult to bring in the paper all the information that would be necessary, but the introduction should be extended to include previous work both in terms of DL and medical research. This paper still represent a niche application of a more general DL technique that has been already used for a large number of similar applications. The contribution is therefore incremental, building on top of well-known techniques. After the publication at MICCAI 2019 of the work ""Distribution Matching Losses Can Hallucinate Features in Medical Image Translation"" and similar other works, it has started becoming apparent that the simple visual similarity between samples generated by a GAN and true samples from a specific distribution doesn't ensure that diagnostic value is kept. This doesn't mean that cycle-GAN type of techniques are not suited for medical imaging since they might wipe out their diagnostic value, but it means that every study around this topic needs to prove that the diagnostic value is indeed kept! Unfortunately the authors didn't report indications in this sense in their paper. The main contribution of the paper is scarcely justified by the statement ""...they confirmed that the images were similar to those in routine"". 
I feel it would have been extremely interesting to evaluate the performance of those same clinicians (and others) diagnosing cancer using both H&E stained images and CM images of the same patient (or patient distributions) vs a control group. A lengthy study, I agree, but a necessity in light of other recent works highlighting how dangerous it is to use GANs for this kind of task. The choice of de-speckle network architecture is not entirely sound, with the multiplicative residual connection near the outputs of the network and the median filtering operation. Is there some reference for multiplicative residual connections? How do we know that the network is learning 1/F (the inverse of the speckle noise)? Can we prove that at least visually? Is the math right? It is necessary to prove that the generated images retain their important diagnostic value. It is necessary to run a study to confirm this, in a similar way to how CM images were confirmed to have diagnostic value and could therefore be used instead of H&E stained images.""",3,1 midl19_16_2,"""The authors combine DL and computer vision methods to digitally stain confocal microscopy images to generate H&E-like images. The aim of this work is to provide an image that is familiar to pathologists such that it will remove the need for specific training in CM interpretation. Pros: 1- If this approach is accepted by the community, it could remove the need for additional training for the pathologists. This will potentially bring us closer to rapid evaluation of lesions during surgical operation using fast CM. 2- The two-step approach combining despeckling and generative networks is reasonable for the task. 3- Qualitative stained image results look promising Cons: 1- A median filter is used after the despeckling network; however, the added benefit of using the median filter in the despeckling process is not clear. The error measures presented in Table 1 need to help readers identify the benefit of the proposed neural network. The authors should validate their selection of a two-step approach (NN + filter) compared to an end-to-end FCN (with an additional loss like TV) for the despeckling network. 2- It is not clear why the histology images were used for denoising network training. Even though it is mentioned by the authors that these images resemble noisy RCM, this should be either referenced or shown. 3- Please provide evidence to support the positive effect of choosing an augmentation of size 512x512 after 50 epochs in Section 3.2. 4- The authors conclude that the despeckling NN is crucial to obtain realistic images; however, the results presented in Figures 8 and 9 do not provide enough information to support this conclusion. For example, it is not clear what the non-desirable artifacts are, where the eliminated nuclei are, and why the network has a harder time learning. The authors should provide support for these conclusions. For instance, Figure 9 needs to use the same images presented in Figure 8 to provide enough support for the need for the despeckling network. In addition, images representing eliminated nuclei using noisy RCM images should be presented with their counterparts using the despeckling network. 5- Obtaining quantitative comparison results for staining accuracy is not feasible due to the reasons clearly defined by the authors. It is necessary to provide more qualitative information regarding the staining results in addition to confirmation from two expert pathologists.
Please provide results of the inter-rater reliability of two pathologists using a point scale on the quality of image digital staining. 6- I suggest the authors to use train validation and test split or a cross-validation, since the results presented here are from a validation set without a test set. This could potentially add a bias to the results presented here. """,3,1 midl19_16_3,"""Authors proposed a novel method to combine the fluorescence and reflectance confocal microscopy images in order to generate H&E like images. As the more general cohort of (dermato)pathologists are trained to read such images, the presented method will surely help in faster adoption of the technique to the clinical practice. The results look very compelling and more realistic compared to the available methods in the literature. Authors took the results of the state of the method as the initial solution and improved over their results. This is a very good estimation and resulted in very good results. -Novelty in terms of technique is limited. Authors took the Cyclegan idea and applied it to their problem. -computational complexity and the processing time must be presented. The SOTA method is very simple and works in real-time. How about the presented method. -With the SOTA DHE method, the FCM and RCM images are linearly mapped to color images. On the other hand, authors do not have such control over the GAN network meaning that anything artificially introduced in the images may not be realized by the readers, potentially resulting in ""wrong diagnosis"". Authors should also comment on this issue and potentially mention it as a shortcoming of the methods (unless they can propose a method to regularize the results). -The statement on main is a little misleading. As the method takes the SOTA DHE images as input and makes their staining more like H&E slides, the presented method is a stain normalization method rather than a staining method. -Training partition is not clear. Did the authors slide wise or tile wise partitioned the data? -input-Output pairs of both denoising and staining networks should be clearly stated. For example in Figure 4, what is L_cyc, L_adv, L_id. All these should be clearly defined and explained. -In the abstract, authors state that Ex-vivo CM can be used to identify tumors with overall sensitivity if 96.6% and specificity of 89.2%. Authors should clearly state the disease that they give the statistics for (e.g. BCC) -Authors need to give a reference at the end of the first paragraph on page 2 after; ""... complex surgical operations in skin cancer"" -In section 2.3.2, the total number of images does not add up. Authors state that their dataset consists of 8789 images. In section 3.1 they state that training is conducted on 7031 images and testing is conducted on 1748 images. -How did the authors obtain the grayscale images? Did they use color deconvolution to decompose the images into Hematoxin and Eosin channels? Which deconvolution method did they use? if they simply used RGB2YUV conversion, then what are the other steps that they used to make grayscale histology images appear similar to RCM images. -How do the authors obtain noise images? What are the parameters of the noise model? In general, the paper lacks of implementation and experimental design details. Authors must include these details in the final version of the paper. 
""",3,1 midl19_17_1,"""The paper introduces a new approach for deep learning-based reconstruction of spatio-temporal MR image sequences from undersampled k-space data. The novelty of the approach lies in the explicit use of motion information (displacements on a voxel level) during the joint reconstruction of all images of the dynamic sequence. It is assumed that this motion information provides useful temporal information and better exploits the dynamics of the underlying physiological process to finally improve reconstruction results. From a methodological point of view, the paper introduces a new objective function for the sequence reconstruction task that does not only include a standard image reconstruction term but also explicit motion estimation and compensation components. In this work, the objective function is minimized by using deep learning. The solution consists of three parts, which are sequentially applied to the results of the preceding part: (1) Initial image reconstruction using a recurrent network, (2) motion estimation using a FlowNet variant, (3) motion compensation by using a residual net. (2) and (3) are fused into one network. In the evaluation, the new approach is extensively compared to state-of-the-art reconstruction approaches in a simulation study based on cardiac image data. The results demonstrate the methods superior properties in terms of image quality and temporal coherence. General opinion: In my mind the approach presented in this paper is novel and interesting. One might argue that the approach is (at least partially) a combination of different pre-existing methods/papers (U-Net, Data sharing layer, FlowNet, ). However, I think all choices are reasonable and according to the results of the extensive and convincing evaluation, this combination leads to excellent results. Further comments: - I think the discussion of the state-of-the-art approaches most relevant to this work should be extended. In my mind, especially Schlemper et al., 2017 (why wasnt their 2018 TMI paper used instead?) should be discussed in much greater detail as it is the key competitor in the evaluation. In this context, it remains also unclear why only Schlemper et al., 2017 was used in the evaluation as a representative of deep learning-based reconstruction methods (why not Qin et al., 2018 and Huang et al., 2018). This choice should be discussed in the paper. - In Fig. 3, the location of the axis captions seems to be odd. They should be placed directly adjacent to their axis to improve readability. - What is so special about frame #9 that all approaches struggle to reconstruct this image? I assume it is one of the two extrema (end-diastolic phase or end-systolic phase) of the cardiac cycle. Couldnt this problem for MODRN be alleviated by choosing z_1=end-diastolic phase and z_T= end-systolic phase or vice versa? - Are the differences between MODRN and all other approaches statistically significant? Please provide results of statistical tests, if possible. - The original FlowNet paper should be cited. 
Pros: - Novel method for learning-based reconstruction of spatio-temporal MR image sequences - Explicit inclusion of motion information in the reconstruction process by using a FlowNet-like motion estimation approach - Extensive evaluation with very good quantitative and qualitative results Cons: - Reasons for the selection of competing state-of-the-art approach in the evaluation unclear - Paper is sometimes hard to follow""",3,1 midl19_17_2,"""The paper introduces a novel MEMC (motion estimation & compensation) refinement block to improve deep learning based dynamic reconstruction. The paper is a good contribution to the recon field as it opens up a new avenue of research for better understanding motion for the reconstruction. The idea is simple yet the result seems quite impressive. However, many details & comprehensive analyses are missing for one to appreciate the contribution of the proposed components. Overall, I feel that there are too many remaining questions for the paper to be published in its current form. However, conditioned on the fact that the concerns below will be addressed, I believe that the benefits can outweigh, making it acceptable for this conference. Although the authors introduce an interesting idea, my main criticism is the lack of comprehensive details. Completing these details will greatly improve the quality of the paper. 1. Reference: The paper is well-referenced for MR reconstruction & optical flow, however, similar motion modelling has already been considered in video super-resolution. For example: a. Caballero, Jose, et al. ""Real-time video super-resolution with spatio-temporal networks and motion compensation."" IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017. b. Makansi, Osama, Eddy Ilg, and Thomas Brox. ""End-to-end learning of video super-resolution with motion compensation."" German Conference on Pattern Recognition. Springer, Cham, 2017. Indeed, while this paper is the first to apply MEMC framework for DL recon, it is not a new idea for general dynamic inverse problem settings. I would suggest the authors to acknowledge their work. 2. Network detail: while the overall architecture is well-described, there are many details that is lacking: for example, what are the detail & capacity of f_enc, f_dec and f_dec2? what are the resolution scale of these methods and how important is tuning these? How to balance beta and lambda in Eq. 6? Also, have you considered adding DC component at the end of motion compensation block? Wouldn't that further improve the result? 3. Training: Are the DRN and the MEMC components trained end-to-end? From what I gathered, these were trained separately. Is it possible to train both at the same time? 4. Choice of reference images for motion compensation: why z_1 and z_T and not the neighbouring frames? How sensitive is the network for selecting them? If we assumed that the CINE sequence is cyclic, wouldn't z_1 and z_T look similar? Isn't it better to consider, for example, end-systolic vs end-diastolic frames as a reference? 5. Experiment: (a) the manuscript gave me the impression that only the magnitude component is considered for the motion compensation part. For the experiments, were the complex images used for all components? (b) it seems that MEMC component applies in general. For example, it can be applied to DC-CNN, or DRN w/o recurrence too. I wonder what the performance would be for them. The key question is, how sensitive is MEMC component to the initial reconstruction? 
How much can the MEMC component compensate for / be affected by imperfect reconstruction? (c) it is more informative to see the motion profile of sample pixel(s), rather than the frame-wise RMSE for Fig 3. In this way, we can understand if certain methods under/overestimate the motion. (d) it concerns me that the method is only evaluated on three subjects. Can cross-validation be performed? 6. MEMC Analysis: It seems that the difference between the performance of U-FlowNet-A & B is very small. I wonder if this is statistically significant? If not, I suggest the authors remove this component, unless there is a good reason to keep it.""",3,1 midl19_17_3,"""The paper presents a novel MR image reconstruction method that can exploit temporal redundancy in an elegant way. The authors propose to extract optical flow between consecutive time frames to gather complementary information for reconstruction. The flow information is later used in compressed sensing to obtain the final result. The paper is overall well written and the approach is clearly motivated. The experimental results obtained on the clinical data demonstrate the benefits of temporal analysis. - Some literature on MR image reconstruction is missing: 1) Adversarial loss based DL-Recon approaches. Even if they are not used in benchmarking the proposed algorithm, it would be good to mention those in the introduction. 2) Similarly, dictionary learning & sparsity based techniques can be added as well. There were examples of such approaches to exploit spatio-temporal information in k-space data: ""Dictionary learning and time sparsity for dynamic MR data reconstruction"", TMI 2014. - A reference is required (page 2): ""Different from traditional motion estimation algorithms which may fail in low resolution and weak contrast"" - I am not sure that the model illustrated in Figure 2 actually optimises equation (2). In other words, the connection between equation (2) and equations (6-7) is not well defined. - MC(z_t, v_t) does actually depend on z_1 and z_T; this is not captured in the presented formulation. - Fig. 3, why does the DC-CNN model perform significantly worse in the ES phase? Given that the DRN-wo-GRU model has no recurrent unit, I would expect DC-CNN (3D kernels and DC components) to display similar performance. How many consecutive frames are used as input in the DC-CNN model? Minor comments -- Could you specify/display what the y-axis corresponds to in Figure 3 (RMSE)? ""The number of iterations is set to be N = 3"" -> the number of recurrent (or RNN) iterations is set to be N=3. Please include the reference for the structural similarity index measure (SSIM).""",3,1 midl19_18_1,"""They use deep learning on a new dataset and suggest a method to make use of unlabelled data. I do not think the contribution is strong enough for the paper to be accepted at the conference. They use an old method (classify the centre pixel of a patch in the image) with the idea that it will use less memory and expand the data available, since there will be as many samples as available voxels. However, this is unlikely to be the case. A more modern approach like a 3D U-net (Çiçek et al. 2016) should fit on a modern consumer-grade GPU like a 1080Ti. The ""extra"" data will be highly redundant since neighboring pixels are basically the same patch, and therefore a lot of the operations during training will be redundant. It would be better to do data augmentation (e.g. random shifts and rotations).
If they are concerned about sample imbalance, they could just weight the loss for each pixel according to the sample weight. They never describe their model properly. Is it exactly a LeNet network? What backend did they use to train it? Why didn't they use batch normalization? Supplementary material with more information would be needed. The ground truth confuses me. They compare with the BrainVISA model, but at the same time they use this model to extract candidate regions. If that is the case, they are rather using a sort of ensemble of models. It would be impossible for the deep learning model to do worse than BrainVISA since the pixels are first classified by the latter. I find problems in the statistics they use to claim an improvement in their model. They mention a p-value of 2.15e-26. With the numbers shown in Table 1, I would assume the only way to get such p-values is by having thousands of independent samples. Since their total labelled sample only consists of ~60 individuals, I do not understand those statistics. Are they considering each individual sulcus in each individual as a different sample? Are they doing multiple hypothesis correction? Since their model only considers positive or unknown labels (not the sulci identity), I think the correct approach would be to pool the results from all the voxels. If they want a standard deviation, they should have used cross-validation. Additionally, the English and presentation need to be polished. The text is at times informal and unclear. The black highlighting at times does not indicate relevant information. Figure 2 seems like it might be important, but it is not clearly stated why. There are titles that are abbreviations, and those abbreviations are never explained (e.g. ESI).""",1,0 midl19_18_2,"""A semi-supervised approach was proposed for training a convolutional neural network for automatic cortical sulci segmentation. The benefit of pretraining and regularization was shown. The training and evaluation approach is unclear. First, it is stated that a leave-one-subject-out cross-validation scheme is used. Then a propagation of the training ground-truth labels to 10 other brains via Voronoi diagram is mentioned for measuring error rates. The propagation method itself is insufficiently described. What errors are introduced by this propagation? And why is this propagation needed for evaluation instead of evaluating the performance on the manual labels from the test data? The method was only compared to the BrainVISA model, which is suboptimal, while stating in the conclusion ""shows the power of the CNNs compared to the methods developed so far"". The authors have previously proposed two other methods (Borne et al., 2018, Perrot et al., 2011), which performed better than BrainVISA. As the dataset was changed, performance cannot directly be compared with these previous methods. For the reader to know the benefit of the currently proposed method, the authors should include a comparison on the same dataset with the state-of-the-art method (the better one of Borne et al., 2018 and Perrot et al., 2011). All used parameters (e.g. number of neighbours, BrainVISA configuration) need to be included to allow reproduction of the method. The evaluated method names should be stated in the text of 4.1 and in the methods section to ensure readers can follow what is compared. Why are the p-values when comparing BrainVISA to CNN+pretrain+reg (3% difference) larger than or similar to those when comparing BrainVISA to CNN (1% difference)?
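For reference, the kind of comparison I would expect here is a paired test over the 62 test subjects; a minimal sketch, assuming per-subject error rates are available as arrays and scipy is installed (the numbers below are made up for illustration):
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical per-subject error rates for the 62 test subjects
err_brainvisa = rng.normal(0.20, 0.05, size=62)
err_cnn = err_brainvisa - 0.01 + rng.normal(0.0, 0.02, size=62)  # roughly a 1% mean improvement

t_stat, p_paired = stats.ttest_rel(err_brainvisa, err_cnn)
w_stat, p_wilcoxon = stats.wilcoxon(err_brainvisa, err_cnn)
print(p_paired, p_wilcoxon)
With this effect size and only 62 subjects, one would typically not obtain extremely small p-values, so it would help if the authors stated exactly what the samples entering their test are.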
Also the p-values are very small for a difference of 1% and 62 test subjects. """,2,0 midl19_18_3,"""(this was done as an emergency review, and won't be as detailed as it could be) The paper is about cortical sulci segmentation, performed with CNNs. The problem and the specifics of this task are well explained and motivated. The method is performed in several steps. First, a neural network is trained on annotated data (62 patients). Then, inference is run on 500 un-annotated patients. Those predictions are then used to train a new network, which is then fine-tuned on the original patients. Since BrainVISA is extensively used to either select the voxels to labels or to regularize the results, the process can not be called end-to-end. For performances reasons, segmentation is not performed on the whole 3D volume, but on a list of voxels (with their neighbor patches) selected from BrainVISA. This divides the number of voxels to classify by 1000. The authors then used a modified LeNet for 3D to classify each voxel. Despite the impressive results of deep learning models in computer vision, these techniques have difficulty achieving such high performance in medical imaging. This is a really bold statement to start a paper, one which is objectively wrong. This might indicate a lack of awareness of the state-of-the-art by the authors. If you refer only to this specific task, this should be updated to reflect that. I am concerned about the use of BrainVISA to select which voxels should be classified, as it introduces the bias of this imperfect tool into the training process. On top of that, even a trained network will need it as a pre-processing to perform inference, which is not ideal. I am not even convinced this is really needed, especially with such a lightweight network ; GPUs made great progresses in recent years in memory/parallel capabilities. Training time with only 62 patients is usually not really a concern, we are not dealing with the millions of images found in natural images datasets. I would like to see a baseline of an end-to-end trained 3D-CNN, and then compare your method to it. It is mentioned that at each epoch, only 100 points are randomly selected per subject for training. Why ? Why not use all the data available ? Is this some weird kind of data augmentation ? The cross-entropy is actually not a great loss function for unbalanced tasks, as least in his unweighted version. There is also some other works on specific losses for unbalanced tasks, such as: - Sudre, Carole H., et al. ""Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations."" Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer, Cham, 2017. 240-248. - Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. ""V-net: Fully convolutional neural networks for volumetric medical image segmentation."" 3D Vision (3DV), 2016 Fourth International Conference on. IEEE, 2016. V-Net could actually be a good baseline for this paper. The strategy used for the semi-supervision is usually referred as proposals. In this case, the proposals are refined using BrainVISA. Some related works that might be interesting to acknowledge and maybe compare to: - Rajchl, Martin, et al. ""Deepcut: Object segmentation from bounding box annotations using convolutional neural networks."" IEEE transactions on medical imaging 36.2 (2017): 674-683. - Papandreou, George, et al. 
""Weakly-and semi-supervised learning of a deep convolutional network for semantic image segmentation."" Proceedings of the IEEE international conference on computer vision. 2015.""",2,0 midl19_18_4,"""This paper proposes a method to use a large amount of unlabeled dataset for the cortical sulci recognition. 1. It is not so clear whether the authors use the same architecture for the first pre-training and for the second fine-tuning network. If not, it should be clarified. 2. In 2.2.3, the pre-training model is trained after only 15 epochs. Is it enough? Was this model already converged? Also, does points mean patches or voxels? Its not clear. 3. In 3.1., the authors mentioned that four additional sulci were used compared to the previous paper. In previous paper, 63 and 62 sulci were used for left and right, respectively. In this paper, 64 and 63 sulci were used for left and right, respectively. How does this become four additional sulci? 4. In 4.1, the p values are strange. It seems that the final model shows the best results, but why is the p-value higher than the ones for the other methods? 5. In figure 2, what does the blue bar mean? According to the caption, there should be only violet and pink bars. Minor comments: 1. Voxel resolution should be mm^3, not mm. 2. Please represent the measures Elocal and ESI with subscript such as E_{local} and E_{SI}. """,2,0 midl19_19_1,"""The paper investigates the importance of explicitly using taxonomical structures of labels for classification tasks in medical imaging within the application of chest-x-ray classification into a number of hierarchically related diagnostic categories. The key contribution is the formulation of hierarchical multi-label classification within a deep-learning framework, which is a novel and generally useful idea. They authors propose an intuitive and effective two-stage optimisation scheme which first encourages the model to capture label taxonomy and then maximise the end classification accuracy. They introduce a numerically stable implementation of cross entropy loss for unconditional class probabilities. Using a large labelled chest x-ray data based, they provide a first demonstration of hierarchical multi-label classification in medical imaging. The paper is well written and well motivated. Results show improvement in classification accuracy over relevant baselines. Despite the new loss functions that accounts for the label hierarchy, a single distributed model is used to predict all the classes. This means that the architecture does not respect the hierarchical structure in the data e.g. different features may be desirable for detecting Abnormality, and discriminating between Pleural fibrosis and Fluid in pleural space. The improvements shown in Table 2 are relatively small and no error bars are provided. It would be interesting to see the benefits of capturing label taxonomy when the size of the training data is smaller. Injecting such domain prior knowledge may improve the data efficiency. It would be also interesting to see how the model performs in the presence of incomplete labels i.e. each image is not necessarily labeled until it reaches one of the leaf nodes (e.g. Abnormality=> Pulmonary => Opacity, but Infiltration or Major atelectasis is not known) . It would be more informative to reorder to the disease indices in the breadth-first order. This would help to see how the level within the taxonomy affects the performance. 
I also wonder what the plot would look like after the first optimisation phase based on the HLCP loss. Overall, the contribution of the paper is solid in terms of technical novelty and problem formulation. However, the paper could use stronger experiments as suggested earlier to bolster its claims. """,4,1 midl19_19_2,"""1. to present a deep hierarchical multi-label classi cation (HMLC) approach for CXR CAD. 2. to model conditional probability directly and with unconditional probabilities is key in boosting performance. 3. formulate a numerically stable cross-entropy loss function for unconditional probabilities 4. evaluate our approach on detecting 14 abnormality labels from the PLCO dataset, which comprises 198; 000 manually annotated CXRs. We report a mean area under the curve (AUC) of 0:887, the highest yet reported for this dataset. 1. There is no cross-validation, external validation, with a confidence interval for evaluating significant better method. """,3,1 midl19_19_3,"""The paper is well written and describes an interesting and relatively novel approach to solving multi-class classification in a clinical domain where overlap between classes is frequently a possibility. The approach is clearly explained and the results presented are sufficient to give merit to the idea. The authors could spend a little more effort on explaining the intuition behind conditional versus unconditional labels and the advantages of each. Only a single (large) dataset is used, while there are many publicly available datasets that could be included for additional experiments. No public implementation of the method is provided, which would be a nice extra""",3,1 midl19_20_1,"""- The authors propose a novel method to jointly perform the segmentation of six brain tissue classes and white matter lesions employing the hetero-modal MRI volumes available in different datasets. The proposal allows to combine datasets composed only by labelled T1 scans (usually related with subjects without anatomic lesions) together with datasets composed by T1 and FLAIR acquisitions (related with subjects with brain injuries) in which only the lesions are labeled. - Due to the nature of the clinical datasets, this kind of approach must be welcome. The need is clear and is becoming a hot topic in this field. In fact, another similar approach for the same problem has been presented for MIDL2019 pseudo-url - As the authors remark, they elegantly cope with a real problem in which three branches of machine learning: Multi-Task Learning, Domain Adaptation and Weakly Supervised Learning meet. - The model is tested with T1 and FLAIR volumes but should already work with more modalities. - As I mentioned above, a closely related paper have been submitted for the MIDL. I truly believe that the authors should compare themselves against pseudo-url denoting advatages and disadvantages. This could be really helpful in order to help the chairs to make a decision. - The sections 2.4 and 2.5 are halfway between proper mathematical justification of the employed tools and the purpose of using them, which sometimes makes the text difficult to understand (even taking into account that the concepts are not the simplest). Due to the recommended conference page limit, simpler sentences along with the maths could help with this issue. - The work employs the statistical formulation for the loss definition, cite the beautiful Kendall & Gal and Bragman jobs but at the end, employs the mode of the distribution as predictor. 
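(For context, the task-uncertainty weighting referred to here, in the two-regression-task form of Kendall et al., reads roughly L(W, sigma_1, sigma_2) = L_1(W) / (2*sigma_1^2) + L_2(W) / (2*sigma_2^2) + log(sigma_1) + log(sigma_2), with the sigmas learned jointly with the network weights; this is only a reminder of the general form, not necessarily the exact loss used in the paper.)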
Could the authors go all the way and provide (in the near future) a whole probabilistic solution? Besides, in my personal opinion, the method would be better understood employing this kind of formulation. - The evaluation is OK, but it would be more complete with the addition of a comparison (where possible) with traditional approaches (SPM, FSL, etc.) and, especially, some measure of how the results are distributed (standard deviation, boxplot, etc.) - Could the authors comment on the possible effects of including more modalities? - The work is nice and could be talk material (and for sure will be part of Medical Image Analysis or a similar journal soon), but the authors have employed 11 pages. The text contains some unnecessary blank spaces and overdimensioned tables and figures which could be structured much more efficiently in order to save space. Summarizing an interesting work is always difficult, but it must be done in order to ease the work for the scientific community. For this reason, I cannot propose the paper for a talk.""",4,1 midl19_20_2,"""- The problem is highly significant - The paper is well written - Great contribution to the field of multi-task learning. Mathematically grounded and elegant - As recognized by the authors, the Dice metric is sensitive to the size of the structures evaluated. It was maybe not the most appropriate choice - The authors should consider evaluating the number of detected lesions (together with positive/negative predictive value). While this seems ""much easier"" than the full extent of lesions, this is already very useful information for clinical applications. - Why use a different class for the brain stem? In some pathologies physicians are looking for brain stem lesions. Can lesions be in two different ""tissue"" classes?""",4,1 midl19_20_3,"""The paper presents a model for the joint learning of two segmentation tasks (brain tissue and lesion) from different datasets. The model uses an average operation to deal with the different number of input modalities. An upper bound on the expected loss for segmenting tissue is derived, which allows transferring information across tasks (i.e., from lesion to tissue segmentation), during training. Experiments on three datasets show the proposed model to offer comparable performance for both tasks, compared to task-specific models. pros: - Original and principled approach to deal with tasks for which input modalities may differ. - The proposed approach is motivated by a sound mathematical framework. - Experimental evaluation on three separate datasets. cons: - Results are not so convincing. The multi-task network performs significantly worse than single-task models, for both tissue and lesion segmentation. Table 2 shows improvements; however, these are misleading since models were trained using different datasets (and MRBrains has only 7 training subjects). Given these results, it would be beneficial to clarify the benefits of the proposed model, compared to running single-task models separately. If the main advantage is runtime, then experimental results should be added to support this. Other comments: - While interesting, the derivation of the upper bound on R^t and its estimation is a bit long. In particular, going from Eq (5) to (7) is rather straightforward and may not deserve such length in the paper. I would have preferred this space used for a deeper experimental validation. - The average operation in the network allows dealing with a variable number of input modalities.
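To make the point concrete, the kind of averaging I understand here is something like the following sketch, assuming PyTorch; the encoder is a placeholder, not the authors' architecture:
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())  # placeholder per-modality encoder

def fuse(modalities):
    # modalities: list of available inputs, each of shape (batch, 1, D, H, W);
    # T1 alone, FLAIR alone, or both can be passed in
    feats = torch.stack([encoder(m) for m in modalities], dim=0)
    return feats.mean(dim=0)  # average over however many modalities are present

t1 = torch.randn(2, 1, 16, 16, 16)
flair = torch.randn(2, 1, 16, 16, 16)
fused_both = fuse([t1, flair])
fused_t1_only = fuse([t1])
Whether the encoders are shared or modality-specific changes how strongly the network is pushed towards a common representation, which is exactly what I wonder about next.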
However, it is unclear how this affects the information from different inputs. More specifically, I wonder if this forces the network to learn a ""common representation"" for T1 and FLAIR, which would make it less sensitive to when either one of these modalities is missing. How would the model perform if trained for a single task (lesion), with instances which can have missing modalities? Perhaps authors could comment on this in their paper. - Subsection ""Joint model versus fully-supervised model"" and Table 2 are hard to understand. It should be made clearer that the FS model is trained on MRBrainS18, whereas the proposed model is trained on WMH and Neuromorphics (ideally, this should be mentioned in the caption of Table 2). - p.9 : ""shwown"" --> ""shown""; Figure 3-a --> Figure 4-a ?""",3,1 midl19_21_1,"""- The authors present an end-to-end approach that allows to retrieve vessel trees straight from images, segmentations or skeletonizations, based on a patch-guided U-Net. - Different deep neural network architectures are explored on a series of artificial images generated with two alternative, novel hand-crafted approaches. The observations indicate that the best performing algorithm is the U-Net. - This final model is studied in the context of vasculature characterization in fundus images. This is an important step in several clinical studies that are focused on analyzing correlation between vascular characteristics and disease progression. - Two techniques for visualizing graphs are applied on the outputs of the networks. To the best of my knowledge, this is the first time that these algorithms are applied to visualize retinal vascular trees. - The paper is written in an excellent style. It is easy to follow, the explanations are simple and therefore straightforward, and the experimental setup is well designed. The reader can certainly follow each experiments step by step, and comprehensible understand the contribution of each components of the proposal. - Despite the fact that the deep learning contribution is not too significant, this method can certainly contribute to the field of ophthalmic image analysis, specially in clinical studies where the anatomical vessel properties are analyzed. - Authors refer to their approach as ""end-to-end image-to-tree"", but when evaluated on real images the results are not as good as when using segmentations or skeletonizations of the vessels as inputs. This is an important issue and I think that authors should take that result into consideration and modify the claims (and perhaps the title) accordingly. Provided that these modificiations are done, I believe that the article could be certainly accepted. Current performance of vessel segmentation algorithms is close to the one of human observers doing the task manually, so using a segmentation as input would not be really a problem. - It it not sufficiently emphasize that the methods for synthesizing vascular trees are novel and were not explored before. - The algorithm requires a starting point to extract the vascular graph. This position is by definition the central, top pixel in synthetic images. However, it is not clear which point is used when working on retinal images. This is also an important thing to consider. Using a single vessel from the optic disc is usually not enough, as some images might show more than one vessel spreading from this region. In [1-4], all the models solve the issue by taking root nodes in the intersection of the optic disc border and vessels. 
Did you follow a similar idea? - Is the model based on segmentations (not in skeletonizations) able to solve vessel crossings such as the one illustrated in Fig. 4 (c), bottom? Usually the skeletonization algorithms introduce a small piece of vessel there due to the overlap between vessels. If the proposed method is able to overcome that issue, then it might have really good implications in many applications, including blood flow simulation [1-4], where these ambiguities introduce false branching points that significantly affect the results. - Some other minor suggestions: --> It should be clarified in the introduction that Fraz et al. survey is focused only on retinal images and not in blood vessel segmentation in general. --> In lines 8 and 9 of the introduction, there is a repetition (""biomedical scans""). --> Although it is clear that the estimated vessel width is correlated with the manual annotations (Fig. 7 (b)), it would be interesting to complement those results with the R^2 value of a linear regression model and a Pearson correlation coefficient. References: [1] Liu, D., Wood, N. B., Xu, X. Y., Witt, N., Hughes, A. D., & Thom, S. A. (2009). Image-based blood flow simulation in the retinal circulation. In 4th European Conference of the International Federation for Medical and Biological Engineering (pp. 1963-1966). Springer, Berlin, Heidelberg. [2] Malek, J., Azar, A. T., Nasralli, B., Tekari, M., Kamoun, H., & Tourki, R. (2015). Computational analysis of blood flow in the retinal arteries and veins using fundus image. Computers & Mathematics with Applications, 69(2), 101-116. [3] Caliv, F., Leontidis, G., Chudzik, P., Hunter, A., Antiga, L., & Al-Diri, B. (2017). Hemodynamics in the retinal vasculature during the progression of diabetic retinopathy. Journal for modeling in Ophthalmology, 1(4), 6-15. [4] Orlando J.I., Barbosa Breda J., van Keer K., Blaschko M.B., Blanco P.J., Bulant C.A. (2018) Towards a Glaucoma Risk Index Based on Simulated Hemodynamics from Fundus Images. In: Frangi A., Schnabel J., Davatzikos C., Alberola-Lpez C., Fichtinger G. (eds) Medical Image Computing and Computer Assisted Intervention MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science, vol 11071. Springer, Cham""",2,0 midl19_21_2,"""- The authors compare RNN approaches with iterative CNN approaches, leading to the insight that for this task iterative application of a CNN performs much better than an RNN. - A synthetic vessel data set is generated to develop the method. This is a potentially useful contribution for development of such methods, but should also be put into context with similar existing works (e.g. pseudo-url). - I don't think the proposed method is an image-to-tree method. It actually performs an iterative segmentation of the voxels that make up the vessel centerlines in the image, i.e. deep learning-based region growing. The result is a segmentation mask which is in principle simlar to a thinned version of a segmentation of the retinal vessels. A similar result might be obtained by first obtaining a binary retinal vessel segmentation (for which many DL methods have been proposed, e.g. pseudo-url, pseudo-url) and then applying a conventional morphological thinning operation. The obtained segmentation does not contain information about the topology of the vessels (e.g. separation of veins and arteries, branching points, individual segments) that would facilitate more advanced tree analysis (e.g. pseudo-url). - I dont agree with the authors that this is an end-to-end method. 
The best performing method is found to be an approach in which a CNN iteratively provides a prediction of the most likely prediction to a tracker. This is not end-to-end, as the CNN is used many times to provide a prediction. - References to related work are missing. Vessel segmentation/tracking has a long history, see e.g. the review by Lesage et al. (pseudo-url), multi-orientation tracking by Friman et al. (pseudo-url), work by Bekkers et al. (pseudo-url). DL methods for vessel tracking include simultaneous orientation classification and radius prediction (Wolterink et al. pseudo-url) and LSTM-based methods (Poulin et al. pseudo-url). - The method is evaluated on the DRIVE data set, which is an old data set consisting of relatively small and old fundus images. It would be interesting to see how the method fares on larger images such as the High-Resolution Fundus (HRF) image data set or the REVIEW data set. In addition, as many vessels are visualized with 3D imaging it would be good to evaluate the method on a 3D data set, e.g. pseudo-url or pseudo-url). - It is unclear how the method deals with vessels running in parallel. Based on Fig. 4 and the description in the text, the method would be trained to jump from one vessel to the other. This would be highly undesirable when differentiating between e.g. arteries and veins. In fact, the example results in Fig. 10 show a lot of cyclic structures, which indicates that the tracker connects arteries and veins. - Additional constructive feedback: o Figs. 3 and 6 are a bit unconventional. It would have been nicer to show precision and recall in one plot (as you actually do in Fig. 3B) and use isolines to indicate Dice/F1 scores in those images. o The deep convolutional network (DCN) is not described anywhere. o There is no description or discussion of the results shown in Fig. 3, while these may actually be the main insight of the paper.""",1,0 midl19_21_3,"""Summary: In this work, a patch based approach to obtaining trees from image data is presented. Neural networks are used to train patch based predictors that predict nodes, which are then used to successively build trees of interest. Different neural networks are evaluated on synthetically generated data and U-net based patch predictor is finalised. This model is then evaluated on DRIVE data, comprising colour retinal images. Further, an updated U-net based regressor is used to predict vessel width. The preliminary evaluation presented is inconclusive as no relevant comparing methods are presented. Pros: - The primary motivation of the work is interesting: to go directly from images to trees instead of binary mask based segmentation - Use of neural networks to predict possible nodes in trees - The visualisation related work in Section 4 can be interesting. Perhaps it warrants a stand-alone short paper submission, as it does not blend fluently with the rest of the paper. - With the larger objective of going from image-to-tree, the presented evaluations appear incomplete. For instance, the evaluation metrics are computed on ""binary masks of tree generated"" (Sec 2.2). This would, in my opinion, contradict the primary objective of bypassing the binary segmentation step. While not straightforward, there are works on tree-space statistics that can be used to perform evaluations directly on trees (for example in [1]). This will considerably strengthen the work by aligning it with its primary objective. 
- In choosing the neural networks, it is mentioned that the sequence-less models work better than the sequence-based ones, without a discussion. One would hope that in a recurrent setting, there is more information for making improved node predictions. So, it is surprising. - This brings me to my next question: Instead of using sequence-less neural networks to predict individual nodes on small patches, why not train the networks like U-net to predict all possible nodes on the entire image? - Section 3.1 is ambiguous. It describes three levels of vascular tree construction in ""increasing order of difficulty"". Do the authors see each of these tasks as going from image-to-tree? Because the evaluations in Figure 6 seem to indicate this. Obtaining trees from retinal images is what is most interesting, and this seemed to be the motivation presented earlier. Given a segmentation map, obtaining a skeleton and then a tree from it is not as interesting. As a result, the comparisons presented in Figure 6 do not tell much. It is not surprising that the skeleton-to-tree is better, as the segmentation task is already solved. [1] pseudo-url""",2,0 midl19_22_1,"""The authors present a temporal convolutional neural network for the segmentation of surgical tasks and validate it against the JIGSAWS dataset. The problem they present is of relevance and of current active research. For instance, several challenges have been held in the past years to assess the quality of state of the art methods addressing the same problem. Being an active area of research, the field is quite rich in literature. For a conference paper, it is quite difficult to cover all the existing works. The authors have made a good effort to present a concise summary of relevant related work. The obtained results present an accuracy higher than other methods from the state of the art. - Clarity: ** The authors do not deliver very well the message. After reading the paper, I find it difficult to establish what exactly is their contribution. From my understanding, they have used previously proposed network architectures for the task they aim to solve being their main contribution to add skip connection to an encoder/decoder architecture previously proposed by Lea et al, 2016. The authors should try to make this quite clear in the paper. ** The authors claim that a second contribution is to add a parallel convolution layer (Fig 3) to the modified ED-TCN network (Figure 2). However, I have the impression that these two networks are nearly equivalent. The sole difference is an extra parallel convolution layer that starts at layer 2 and connects at the output of the encoder. ** Could you please explain why you chose the filter sizes as such? It seems that you have replicated the architecture proposed by Lea et al. While this is a valid choice, it is important to justify why the very same network architecture works well for your problem. ** In general, the methods should be better explained as to try to justify the different methodological choices (e.g. why you need to add skip layers, why the ED-TCN was not changed at all or if experiments proof it works well as such, how is the kinematic data combined with the video data frames?). This would make the paper more clear. ** The paper has numerous errors in the use of the English language. I recommend to have a careful review of it and/or have it proof read by a native speaker. Some common mistakes I have found: 1) Using ""an"" before a word starting with h. It should be ""a"". 
2) Wrong use of verb tenses. Many times the authors use the tense for the third singular person when the noun is plural or the opposite. Examples: ""which would allows"", ""Measurements from the dataset includes"", and ""but it also increase"" (second case). 3) Using the singular form of a noun when the plural should be used. Examples: ""surgical task"", ""autonomous vehicle"", ""field"", ""block"", ""layer"", among others. 4) There multiple cases where the wrong indefinite or define article is used or it is missing. 5) results are not as high -> Good or accurate should be preferred. ** The images from Figure 4 have been taken from the paper of Ahmidi et al, TBME 2017. Please give the right credits. - Quality of the evaluation: ** Given that the benchmark from Ahmidi et al uses the very same dataset, one would expect that the authors use the same setup and metrics used there. Is there any particular reason why the authors decided to exclude some of the metrics and experimental setup (leave-one-super-trial-out)? ** Table 3: The work from Lea et al 2016 (ED-TCN) has no reported accuracies. Is this an error? -Originality: As previously mentioned, it is difficult to establish the original contributions of this paper. Currently, I consider its contributions are mainly incremental as it reuses state of the art work. I encourage the authors to re-structure their paper so that one can easily assess their unique contributions. """,2,0 midl19_22_2,""" Summary: The authors present an approach for surgical activity segmentation using fully convolutional neural networks (FCNN), i.e. an hourglass architecture i) in its vanilla form, ii) with direct skip connections from the down-sampling to the up-sampling path, and iii) skip connections with incorporated convolution+pooling+normalization blocks. The paper is written clearly. Methods, materials and validation are of a sufficient quality. There are certain original aspects in this work (hourglass-networks with skip connections, once direct and once with additional convolution operations), but overall, the novelty is limited. The evaluation is performed on the publicly available JHU-ISI JIGSAWS dataset, for which competitive methods and results are available. Pros: - Good overview of related literature on action segmentation from kinematic data - Validation on JIGSAWS data is well comparable to other methods in literature. - Comparison of several hourglass architectures and kinematics representations in the experiments. Remaining questions / clarity: - While HMMs and RNN/LSTMs are designed to handle temporal sequences of varying length T, a FCNN architecture as proposed here requires a fixed-length input. Authors try different lengths in this work (10/20/.../50), but it is not clear 1) what the unit of this temporal window length is (10 seconds, or 10 samples of 76D kinematic vectors), 2) if 10 means ""10 samples"", at what framerate were kinematics recorded, and how many seconds of kinematic data are covered by 10/20/.../50 samples?, 3) whether the inference was performed in a sliding window fashion with striding, and what the striding factor was (dense sliding, or every n samples, or windows with 50% overlap)? - The comparison evaluation to other methods (Table 3) does not feature results for ED-TCN, but authors could include this with little effort, by removing the encoder-to-decoder skip connections from the ED-TCN-Link network and re-training. 
Cons: - The ED-TCN-Link network architecture is an hourglass network with skip connections from the down-sampling to the up-sampling layers. This idea is not novel though, and the resulting architecture is in principle identical to a 1D U-Net with summation instead of concatenation of feature maps in the up-sampling path [1]. Could the authors please discuss this similarity and explain whether and in which way their architecture is different from a 1D U-Net? - Three different kinematics representations were tested (All/Slave/PVG), as originally proposed by Lea et al. Results confirm the previous finding by Lea et al. that PVG performs better than the other two representations, but no further insight beyond this is won from this experiment. For example, in future work, it could be more interesting to investigate whether more efficient latent representations of ""All"" can be achieved. One interesting direction could be e.g. deep bayesian state space models [2]. - The ED-TCN-ConvLink architecture is similar to ED-TCN-Link, but with convolutional and pooling layers put into the forward links. In the experiments, this architecture almost consistently performs worse than ED-TCN-Link. I can imagine that this is due to the incorporation of pooling (downsampling), and I would recommend trying to leave them out and perform only convolution instead (the link needs to be summed into one higher layer in the up-sampling path though, right after the up-sampling layer, to match resolution). In U-Net and comparable architectures, horizontal links preserve spatial resolution and high-frequency features from the down-sampling path. Maybe the loss in accuracy is due to this loss in resolution. [1] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Miccai. 2015;23441. [2] Karl M., Soelch M., Bayer J., van der Smagt, P., Deep Variational Bayes Filters: Unsupervised Learning of State Space Models From Raw Data, ICLR, 2017, pseudo-url""",3,0 midl19_22_3,"""Summary: This paper discusses an approach to automatically recognise surgical gestures from temporal kinematics data. The authors propose to extend an existing method (Lea et al. 2016) with skip connections and test on a newly available dataset (JIGSAWS). - The authors use a new dataset to test a convolutional neural network approach for action recognition - This work extends a work by Lea et al. 2016 for action recognition by introducing skip connections - The presented results outperform previous work - my understanding of the JIGSAWS dataset is that it also comes with video data. Why hasn't this data been used as additional source of rich information. Only kinematics data has been used for 1D segmentation. - what's the difference between a U-Net (Ronneberger 2015) and the proposed Lea et al. 2016 approach with skip connections? Wouldn't a 1D U-Net be better suited for this job or do the same? - While the method is very straight forward and easy to understand, the paper is difficult to read mainly because of language and grammar shortcomings. - The authors describe the problem as a dictionary recognition problem in 1D. My feeling is that methods from the domain of natural language processing would be promising for the targeted problem (1D high dimensional features, dictionaries, grammars, etc.). Using a conventional convolutional segmentation approach method might not be ideal for this class of problems. - there is a lot of white space, especially around the figures that could have been used more efficiently. 
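To illustrate the 1D U-Net point raised above, the kind of baseline I have in mind takes only a few lines; a rough sketch assuming PyTorch, with arbitrary layer sizes and an additive (summation) skip connection rather than concatenation:
import torch
import torch.nn as nn

class TinyUNet1D(nn.Module):
    # rough 1D encoder-decoder with one additive skip connection; sizes are arbitrary
    def __init__(self, in_ch=76, n_classes=10):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(in_ch, 64, 9, padding=4), nn.ReLU())
        self.pool = nn.MaxPool1d(2)
        self.enc2 = nn.Sequential(nn.Conv1d(64, 64, 9, padding=4), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2)
        self.dec1 = nn.Sequential(nn.Conv1d(64, 64, 9, padding=4), nn.ReLU())
        self.out = nn.Conv1d(64, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                 # (batch, 64, T)
        e2 = self.enc2(self.pool(e1))     # (batch, 64, T/2)
        d1 = self.dec1(self.up(e2) + e1)  # additive skip connection
        return self.out(d1)               # framewise class scores

scores = TinyUNet1D()(torch.randn(4, 76, 128))  # 76-dim kinematics, 128 time steps
A comparison against such a plain 1D U-Net baseline would make the contribution of the proposed links much easier to judge.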
minors: - abstract: "" Automatic segmentation of continuous scene"" -> scenes - abstract: ""it is important that they understand the scene they are in, which imply the need to recognize action"". This sentence does not make any sense. understanding a scene does not imply understanding actions. Understanding actions usually requires understanding scenes... - abstract: ""specifically 1D Convolutional layer"" -> a layer? layers? - p1: ""it is crucial to be able to segment the scene in smaller segment"" ?? segments ? - at this point I gave up to suggest detailed language improvements. It feels like every sentence is grammatically wrong in the abstract and large parts of the remaining paper. - p3: 'an high level representation' -> 'a high level representation' - p4: ""ski connections"" -> ""skip connections"" - p7: ""This results could be because"" these results, this result...? - p7: ""which means their are not many"" -> their -> there - p.2: ""Unsupervised methods are what everyone is aiming for, however for now, the results are not as high as with supervised methods."" -- what does this sentence contribute to the paper? - p10: ""Hochschreiter S. and Schmidhuber S. ..."" reference format is inconsistent with other references. Overall, this paper describes a trivial extension of an existing approach. The paper seems to have been written in a rush and would need major revision, both regarding the presentation as well as methodologically. I would suggest to condense this paper and submit to ISBI as a 1-page abstract. """,1,0 midl19_23_1,"""- The authors transfer two recently published tools, MUNIT for data augmentation and Criss-Cross attention for semantic segmentation, from pure computer vision field to the medical imaging. Specifically they employ the tools to improve the pathological lung segmentation employing CXRs images. - The authors present an extensive evaluation employing public available datasets, in which the lung lesions are mild, and their own dataset with severe lesions. The obtained results for public available datasets are similar to the state-of-the-art approaches, and particularly better for highly damaged lungs. - Employment of generative models as MUNIT for data augmentation is quite new in the medical imaging field. - Both, Criss-cross attention and specially data augmentation with generative models could be easily extended and useful in similar segmentation problems. Particularly, if the authors make publicly available a clean and robust code after publication. - As I mentioned above, the employment of two novel techniques from the computer vision field must be welcome. However, the method has not been adapted for the presented problem. It seems like the authors have put two pieces of code together to subsequently employ their images. Computer Vision and medical images segmentation problems have a lot in common but as well several differences that should be reflected within the models. This issue could have been evaluated by publishing the code without the acceptance condition imposed by the authors. Criss-Cross Attention based Network for Lung Segmentation: - It is quite difficult to understand the section. The text and the figure 2 are insufficient (kind of disconnected even) in order to provide an adequate explanation for the work purpose, actually, it was a must for me to read the full original paper in order to understand the model. Besides, the figure is mostly the same that the one employed in the original work but is not explicitly cited. 
In this section, the weight decay and the batch size have changed with respect to the original work; why? Differences between images? Convergence issues? Etc. Data Augmentation via Abnormal Chest X-Ray Pairs Construction: - Again, the employment of MUNIT for data augmentation is a nice approach, but is there a way to guarantee a realistic deformation of all lungs? How much does it matter? Datasets: - Is there a single CXR per subject for all datasets? - The dataset descriptions are lacking: voxel size/resolution, etc. Quantitative Results: - To perform a fairer and more interesting comparison, it would be advisable to employ an adapted version of the U-Net with the criss-cross attention modules in addition to the classic model. - In general, there are conclusions, subject to discussion, mixed with the results, e.g.: ""This demonstrates that the proposed XLSor based on the criss-cross..., suggesting the effectiveness of our data augmentation technique for lung segmentation..."", etc. - The results for severe lesions are better for the proposed model. The overlap measures reach, on average, the results obtained for mild lesions but, surprisingly, the AVD is much smaller for the difficult cases; how do you justify this? I suppose that the lung shape is much more complex for your dataset, so similar results between mild and severe lesions would already be a big achievement, but such favorable differences are weird and point to an ad-hoc model. Please explain this. - The AVD has no units. """,3,1 midl19_23_2,"""The paper is very well written and uses extensive experiments to back up the claims made in the paper. The main contribution of this work is a deep learning multi-organ segmentation approach for segmenting abnormal chest x-ray images. The authors address the problem of multi-organ segmentation in the scenario where expert segmented datasets for abnormal cases are generally not available. The authors combine criss-cross attention networks (that provide computational speedup benefits compared with the standard attention methods) with a multimodal unsupervised image-to-image translation method to generate virtual abnormal C-xray datasets using the expert-segmented normal C-xray images. The authors provide a comparison of their method to U-Net. Ablation tests showing the benefits of the criss-cross attention and the data augmentation are also provided. While the results are very promising, the approach itself is somewhat incremental, making use of existing methods, with the exception of the application of image translation using MUNIT in a new way to generate abnormal images. Also, MUNIT is an approach to model multiple modes in the data arising from different classes. It's unclear why this is the most suitable approach for this work -- is this not a bit of an overkill? There aren't really that many stylistic variations when translating from normal to abnormal Chest X-rays. Perhaps including a very brief discussion on why such an approach was chosen would be helpful. """,3,1 midl19_23_3,"""This paper proposes a lung segmentation framework for chest X-rays, including a criss-cross attention based segmentation network and a radiorealistic chest X-ray image synthesis process for data augmentation. Experiments were performed on multiple datasets. The proposed method sounds reasonable, and the manuscript is easy to follow. 1) The main concern is that the experimental results do not seem strong enough.
For instance, the authors simply compared their method with U-Net, while there are many other deep learning methods for segmentation. 2) Besides, there is no comparison between the proposed method and the method that does not use the attention module. 3) In addition, as shown in Table 2, the U-Net_A4 achieves better results than U-Net_R and U-Net_R+A3, suggesting that using only the constructed images is better than using the real images. No explanation is given in the manuscript. """,3,1 midl19_24_1,"""Training data for medical imaging tasks is not easy to obtain. Using synthetic data is helpful, yet transferring networks trained on synthetic data to real world applications is challenging. This paper tried to solve this problem by Domain Randomization. The authors proposed two kinds of Domain Randomization: 1) varying the intensity transfer function and 2) adding collimation to the 2D projections. Both kinds of randomization are designed based on a physical model. The robustness is improved after using the proposed method. Major: There is only qualitative comparison. It would be more helpful if the paper contained another evaluation that is quantitative. Minor: It would be helpful if the authors could provide an evaluation for each of the two kinds of randomization. The paper only shows the improvement after applying both of them. """,3,1 midl19_24_2,"""This work addresses the problem of transfer learning and proposes a method for improving the robustness of deep neural networks when trained only on synthetic data. Traditional transfer learning (from synthetic to clinical data) is limited by the quality of the simulations. To tackle this problem, the authors propose to use Domain Randomization in the context of cardiac image registration (X-Ray, DRR, CT). The technique has already been successfully applied to autonomous driving and robotics. Here, the authors modify medically specific parameters, such as Hounsfield units, to realize Domain Randomization (and this should be mentioned in the introduction as a contribution rather than a mere application). Furthermore, it is great that the authors evaluate the method not only quantitatively on a synthetic dataset, but also qualitatively on a clinical dataset. I believe the work has great potential but needs some major revision. In particular, the description of the method and the evaluation/experiments have room for improvement. For example, it is not clear how the CNN is defined. How does the reward come into play here? Is the ground truth transformation simply used for the computation of the loss function or in another way? Furthermore, is the definition of experiments and the evaluation the most meaningful choice? In particular: - Definition of the CNN model. It is understandable that the network is not the central element of this paper. However, this is important for reproducibility and should be included. - In section 2.1 many arguments are repeated from the introduction and do not really contribute to the explanation of the method. - Why is the training data order of importance? You write ""If the same weights are used for initialization, but the network sees the training data in a different order, the optimization can take different steps and can end up in different local minima,"". Yes, the optimization is not deterministic and will take different steps.
But especially with a synthetic dataset a balanced distribution should be possible, and with random subsampling and stochastic optimization it should not be a major problem. Also, in Table 1, I don't see a major deviation. What mm accuracy is needed for the application? - A more interesting experiment would be how the range of parameters affects the transfer learning. Do you confuse the network at some point, if the examples become too unrealistic? The ranges for the HU etc. are not well explained/investigated. What is the impact of the variation parameters? - How would Domain Randomization compare to classical augmentation? E.g. creating the DRR and simply modifying the contrast? - The registration accuracy is measured by the points of a 3D landmark at the center of the LV model (in mm). If your rewards for action correspond to the transformation parameters, wouldn't it be more interesting to look at the error in terms of translation, rotation, etc.? Minor comments: - ""data shuffling (D1) and weight initialization (W1)"" would be helpful in the Table 1 description - what does ""synthetically generated data of 1711 CT volumes of 799 patients"" mean? Are the patients real and the CTs estimated from them? Or do you have virtual patients and for every patient you created at least 2 CT volumes? I highly encourage the authors to address the above points because I believe the work has great potential! """,2,1 midl19_24_3,"""A domain randomization method is proposed to improve the robustness and transfer of synthetic data to a target domain. The method is applied to 3D/2D cardiac model-to-X-ray registration. The results show that domain randomization resulted in more consistent transfer to the target domain. The method does not require any data from the target domain, which is interesting. The method could be applicable to other medical imaging applications where training data is not available. The paper is well written and fairly easy to follow. The major limitation is that no quantitative evaluation of the method was performed. The abstract states the model was trained fully on synthetic data from 1711 CT volumes. This statement is not clear: the 1711 CT volumes are not synthetic, but the X-ray images generated from the real CT volumes are synthetic, correct? What rewards is the agent learning for registration? Is it just translation in two directions? Rotation? This information should be included. The evaluation metrics reported in all tables/figures should be defined. What is the deviation e_f (mm) reported in figures 3, 4, and 5? What is being reported in Table 1; is it also the deviation e_f? How was distance measured for evaluation? Was just the center distance measured? The distance should be measured for all points on the surface to account for cases that have incorrect rotation. It would be interesting to evaluate the effects of each domain randomization on its own, i.e., just intensity mapping and just collimation. Acronyms should only be defined upon first use and then the acronym should be used throughout the remainder of the paper, e.g., first use: digitally reconstructed radiograph (DRR), all subsequent uses: DRR
The claim that the method is very robust to the object parameters and that ""the shapes of the simulated objects does not have a major impact on the final segmentation performance"" is not supported by sufficient evidence. The authors should include the statistics of the ground truth annotations and the predicted segmentations when assuming circles and when assuming ellipsoids. Only then readers might be convinced that these claims hold. Doubts stem from the observation that cycle-GAN will synthesize any differences in the distributions to please the discriminator. The F1 scores of the shown examples in Fig.4 should be stated, such that readers can judge if these are representative examples or not. Results from ME and MC for the same image and for different stains should be shown to be able to appreciate their performance differences. Contour overlays of the ground truth and predicted glomeruli segmentation on the image will save space and enable readers to better judge segmentation accuracy. The mean F1 scores for the supervised method should be included in the text. For claiming ""better performance"" a statistical significance test should be performed. Gadermayr 2017 was used as baseline supervised method. Gadermayr 2017) achieved very good results (F1 0.91 for CN2 method) when trained only on PAS stain on 18 WSIs. How was this method trained for the different stains for this dataset (3 stains, 6 images each) when using 1, 2, 4, 8 WSIs (Fig. 3)? It seems Gadermayr 2018a should be used as baseline supervised method, as it can cope with different stains (Dice 0.81-0.86)? The claim that the method can easily be adapted to other applications by changing the model should be tuned down. Most annotation problems would require quite complex size and shape models and might also need an appearance model if similarly sized and shaped objects are present. Minor: Please clarify what is changing in the repeated experiments, e.g. new simulated annotations or different random selection of the same dataset? ...where nuclei cannot be clearly detected (Fig. 4, third column)... Should this refer to second column? """,3,1 midl19_25_2,"""The submission proposes to combine a simple generative model of segmentation mask with GAN-based image to image translation in order to learn how to segment glomeruli in digital pathology slides. Using information about the distribution of shape, size and number of objects in a segmentation mask, the method can be trained without supervision to achieve performance comparable to a fully supervised method. Although prior knowledge is needed about the distribution of objects, it seems to be enough with rough estimates based on visual inspection. Overall, the presentation is clear and the experiments nicely illustrate the potential, as well as some limits, of the suggested approach. If the approach generalizes well, this could become a valuable tool in many applications due to the ease of generating synthetic segmentation masks. The method is validated on a relatively small dataset (9 images in total?). It is not clear from this scale that performance estimates are reliable. This should be considered in future work. I would also like to see if there is a difference between dyes. A problem for GANs is differences in label distribution. If the generated segmentation masks contains significantly less/more glomeruli than the images, it is likely that performance will degrade. The experiment with circle/elipse shape indicates that capturing the exact shape is not very important. 
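For concreteness, the kind of label simulation being discussed can be written in a few lines; a sketch with made-up parameter ranges, not the authors' exact annotation model:
import numpy as np

def random_ellipse_mask(img_size=256, n_objects=(1, 4), radius=(15, 40), rng=None):
    # binary mask with a random number of randomly placed, rotated ellipses
    rng = rng or np.random.default_rng()
    mask = np.zeros((img_size, img_size), dtype=np.uint8)
    yy, xx = np.mgrid[0:img_size, 0:img_size]
    for _ in range(rng.integers(n_objects[0], n_objects[1] + 1)):
        cy, cx = rng.uniform(0, img_size, size=2)
        a, b = rng.uniform(radius[0], radius[1], size=2)   # semi-axes
        alpha = rng.uniform(0, np.pi)                      # rotation; [0, pi) suffices by symmetry
        u = (xx - cx) * np.cos(alpha) + (yy - cy) * np.sin(alpha)
        v = -(xx - cx) * np.sin(alpha) + (yy - cy) * np.cos(alpha)
        mask[(u / a) ** 2 + (v / b) ** 2 <= 1.0] = 1
    return mask

fake_annotation = random_ellipse_mask()
Because generating such masks is so cheap, a sensitivity study over these parameters would not require much effort.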
I would like to have seen an experiment investigating the importance of estimating the other parameters (shape, number). It is not clear if the visual assessment of parameters is only based on training images or if test images are also included. I am missing some information about the workload of the visual assessment. F.ex is it much less than providing rough segmentations by clicking the center of glomeruli? The plots on the left in Figure 2 are difficult to view. I suggest you try a different color, linestyle, thickness, ... """,4,1 midl19_25_3,"""In this paper, the authors use the well-known cycle GAN model to segment WSI (whole slide images) in an unsupervised fashion. They design several annotation models, that mimic the real label images. The framework is evaluated on WSI from renal pathology and against a fully supervised scenario. This paper shows a valuable application of cycle GAN to histological image segmentation. The authors have published a paper at MICCAI'18 on the same subject, what is the added value in this submission? To better assess the contribution, it would be also interesting to specify if the cycle GAN has been used in a segmentation setting in other medical imaging cases. Some questions: - is there any preprocessing on the image? - ""For each stain individually..."": it is not clear what the authors meant in this sentence. - rotation parameter alpha is drawn in [0, 2pi]. Given the symmetry of the elliptic shape, an interval of [0, pi] should be sufficient, or am I missing something? - does the color difference between first row of images a,b and c,d account for anything? typos: with the a, where used Figure 4: caption ""ettings"" and also 2nd row of images: twice (c) The annotation models contain many parameters, which are tuned ""visually"". Influence of these parameters could be assessed. How interesting would it be to implement an iterative process that would alternate between segmentation and parameter update? I could not find the exact number of patches on which the framework is assessed. In order to fairly compare the proposed approach to fully supervised FCN, is the test set the same in both cases?""",3,1 midl19_26_1,"""Summary: This paper presents a method for conditional image generation based on the Generative Adversarial Network (GAN) framework. In particular, the authors extend the Pix2Pix model by including convolutional capsule networks (CapsNets) in the generator network, and called it CapsPix2Pix. They used Pix2Pix, CapsPix2Pix and a physics-based model to generate images of two datasets conditioned on their segmentation labels that look indistinguishable from the real images. Subsequently, they trained a series of UNet segmentation networks on several real and generated images, and compared their performance on a common test set. Claims of the paper: 1. It is possible to train a successful conditional GAN with a CapsNet-based generator. 2. Pretraining the segmentation network (downstream task for evaluation) with generated images from CapsPix2Pix improves segmentation performance compared to no pretraining. 3. Training the segmentation network from scratch with generated images from CapsPix2Pix improves segmentation performance compared to training with generated images from Pix2Pix. 4. CapsPix2Pix generates a large variation of images compared to Pix2Pix. Pros: * This is the first paper demonstrating that it is possible to perform conditional image generation using convolutional capsule networks in the generator of a GAN. 
They managed to generate 256x256 grayscale images. * The authors give an extensive description of hyper-parameter values and implementation details. Furthermore, they promise to publish their code and data very soon. Doing so will definitely help other researchers to adopt GANs and CapsNets in the future. * The paper is well written and easy to follow in general. * The authors include plenty of images that provide context and help the reader to understand the methods and results. Cons: * There is limited methodological novelty in this paper. The authors took an existing network architecture (SegCaps from LaLonde and Bagci 2018) and used it as a generator model in an existing conditional GAN framework (Pix2Pix from Isola et al. 2017). Notice that other authors have used CapsNets in the discriminator before, but not in the generator. * A significant part of the paper is devoted to explaining preexisting ideas such as GANs, CapsNets and dynamic routing. * There is limited validation regarding the application presented by the authors. In particular, I found that claims (2), (3) and (4) are not sufficiently supported by the evidence shown in the paper: * (2): when comparing pretraining the UNet with CapsPix2Pix versus not pretraining it, they show a relative improvement of only 0.76% (0.6876 vs 0.6824 Dice), less than a 1% difference. A test of statistical significance would be required to justify this claim (see the next bullet point for more on the statistical tests performed in this paper). At most, it could be said that both techniques achieve similar performance. Furthermore, figures A3 and A4 show indistinguishable performance at convergence. * (3): similarly to (2), only a small improvement between techniques is reported, which is inconclusive without a significance test. * (4): this claim is based on Figure A6, where only 1 example is provided. Since there is no page limit on the Appendix, more examples could be shown. Crucially, these examples should not be cherry-picked but selected at random (the authors do not mention how they chose the reported example). * Regarding statistical significance, the authors perform T-tests and provide p-values. However, variation between performance metrics should not be measured across test samples (Table A1). Instead, the authors should repeat the training of the UNet networks multiple times with different weight initializations, obtaining a series of performance measurements on which the T-test is performed (see the code sketch below). For example, let's say the number of repetitions is 5, and we are interested in comparing PBAM-SSM with pix2pix-AR (first and second entries of Table 1). Then, 10 UNets should be trained, i.e. 5 networks for the first method and another 5 for the second, obtaining 2 series of 5 performance metrics (5 Dice scores per method, each one the average across test samples). Finally, significance would be assessed by comparing these two populations with a T-test. * The qualitative results and analysis are difficult to follow given the variety of datasets, methods and metrics. A few changes could help the reader understand the paper faster: * Figure A1 could be part of the Dataset section. * Include standard deviations in the table.
* Be more explicit with Table 1 (use monospace font for better viewing):

| Labels | Images      | Pretrained       | Dice | ROC | PR |
|--------|-------------|------------------|------|-----|----|
| SSM    | PBAM        | No               |      |     |    |
| SSM    | Pix2Pix     | No               |      |     |    |
| SSM    | CapsPix2Pix | No               |      |     |    |
| Real   | Real        | No               |      |     |    |
| Real   | Pix2Pix     | No               |      |     |    |
| Real   | CapsPix2Pix | No               |      |     |    |
| Real   | Real        | Real-Pix2Pix     |      |     |    |
| Real   | Real        | SSM-Pix2Pix      |      |     |    |
| Real   | Real        | Real-CapsPix2Pix |      |     |    |
| Real   | Real        | SSM-CapsPix2Pix  |      |     |    |
| SSM(*) | Pix2Pix     | No               |      |     |    |
| SSM(*) | CapsPix2Pix | No               |      |     |    |

* According to Table 1, among the first 6 entries, it seems that training with real data always produces better performance than training with generated images. What are the consequences of this evidence? * The idea of pretraining the segmentation model with generated data appears without justification. Is there any hypothesis or intuition explaining why this could improve the performance? * The authors compare CapsPix2Pix and Pix2Pix in terms of the number of trainable parameters. However, CapsNets are historically slow and memory intensive. How do these two models compare in terms of GPU memory footprint (weights and activations) and training time (wall clock)? * How do you ensure that the generator does not generate images that look realistic to the discriminator but are not biologically plausible? What are the consequences of this potential behavior in the biomedical setting? * The caption of Figure 1 should say CapsPix2Pix generator architecture, since the discriminator is not shown. * Is there any justification for why the segmentation UNets are trained with 64x64 images whereas the generative models produce 256x256 images? My acceptance rating is conditional on proper statistical analysis being performed or a relaxation of the claims. EDIT-UPDATE: the authors have addressed all my concerns in their rebuttal; therefore, I confirm the ""accept"" rating. """,3,1 midl19_26_2,"""This paper presents a convolutional capsule-based generative adversarial network, similar to pix2pix, that is applied to a simulated and a real microscopy dataset. Adding the synthetic examples generated by the model to train a segmentation network improves the performance of the segmentation model, with a performance improvement comparable to or better than that of the pix2pix network. I liked reading the paper. The various components are explained well and the approach is relatively easy to follow. The addition of capsules to pix2pix seems to be a novel approach. The experiments look fairly solid (although I am not sure of the number of repetitions, see below).
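A minimal sketch of the repeated-runs significance test suggested in review midl19_26_1 above (added for illustration; it assumes numpy and scipy are available, and the Dice values below are placeholders, not results from the paper):

```python
# Repeated-runs comparison: retrain each method several times with different
# random initialisations, average the per-image test Dice for every run, and
# compare the two sets of run-level means with an unpaired two-sample t-test.
import numpy as np
from scipy.stats import ttest_ind

dice_runs_a = np.array([0.681, 0.685, 0.679, 0.688, 0.683])  # e.g. 5 UNets trained on method A data
dice_runs_b = np.array([0.690, 0.694, 0.687, 0.692, 0.695])  # e.g. 5 UNets trained on method B data

t_stat, p_value = ttest_ind(dice_runs_a, dice_runs_b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```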
Section 2 notes that LaLonde and Bagci (2018) restricted the dynamic routing to small spatial neighbourhoods, but that the proposed method uses full dynamic routing instead. Does this restrict the size of the images that can be processed by the network? To some extent, the proposed synthesis method provides a fancy way to do data augmentation or to add regularisation to the model. It might have been interesting to include one or more of these simpler methods in the comparison in Table 1. The Discussion is relatively brief. Although the authors look at the features extracted by the networks in Figure 2, there is not much more in the way of analysis or discussion of how the capsule networks are able to outperform the non-capsule baseline. Is it the fact that they are more efficient? (This isn't really measured in the paper, I think.) Is it because they learn fewer redundant features? There are some hints at the answers in the Abstract, but these things are harder to find in the paper itself. It is not clear to me whether the results shown in, for example, Table 1 are based on multiple runs of the algorithms, or whether we are looking at the performance of a single run for each setting. (I obviously hope that we are looking at averages.) Minor point in Section 2.1: ""D is shown both real and synthetic label-latent pairs"". Shouldn't the discriminator also receive the real and synthetic images? """,4,1 midl19_26_3,"""The authors describe their approach CapsPix2Pix for the synthesis of medical image data that can be used as training data for machine learning. They reach state-of-the-art performance while reducing the number of network parameters by a factor of 7. - The paper is well written and gives a good overview of the issue. - They will release the synthesized dataset and their code to reproduce the results. #openscience - The authors do a good job explaining the background and related work. They provide a nice and clear overview of Capsule Networks. Abstract: The authors claim that ""The field of biomedical imaging, among others, often suffers from a lack of labelled data."" This statement is not totally clear. In which respects does the field suffer from a lack of labelled data? (E.g. machine learning for biomedical imaging suffers from a lack of labelled training data...) This is described well in the introduction, but could be made clearer in the abstract. The authors compare features of pix2pix and their CapsPix2Pix approach in Fig. 2. It is not explained what the presented features are supposed to demonstrate. How are the presented features of pix2pix selected? Please explain this figure better. Introduction: ""A way to resolve this is..."" -> What are other ways to resolve this / are there other approaches? E.g. how does synthesizing images compare to more traditional data augmentation as described, for example, by Ronneberger et al.? Background: The authors use the value function V described by Isola et al. They chose a weighting parameter value of 1 instead of 0.1. This choice should be explained! ""In initial experiments, we found that standard convolutional discriminators (Radford et al., 2015) performed as well as convolutional capsule discriminators, and so opted to use the former."" -> The authors should explain how this was found. Please provide some information on the initial experiments and on what makes you confident that standard convolutional discriminators are sufficient. Methods: The role of the latent vector is explained in Section 3.2. Please add a reference to Fig. 1 here.
(p. 6) The authors describe their discriminator very briefly. As they point out the effect of capsules in the generator throughout the paper, it would be really interesting to know why they chose DCGAN discriminators. Datasets: The description of how the synthetic dataset is created is very sparse. The methods used are not explained or cited. It is not clear how the SSM or the PBAM works. Experiments and Results: The authors compare several training datasets for the U-Net in the quantitative analysis. Some information is missing here: What was the size of the training datasets used? (The same number of images in all training datasets?) Was some kind of cross-validation performed? The test set of 20 images is rather small. How did you make sure that the images represent the data distribution correctly? Table 1: Please provide the meaning of the abbreviations in the caption. Figure 3: Please provide more information on what the red arrows are supposed to show/highlight. """,3,1 midl19_27_1,""" - This paper presents a clustering method using a deep autoencoder for aortic valve shape clustering. It is the first work to identify aortic valve prosthesis types using a general representation learning technique. - This work has remarkable clinical value. Clustering of aortic valve prosthesis shapes can contribute substantially to personalized medicine. - The entire workflow is quite clear and complete. - The introduction is a little misleading to me. The authors emphasize that the objective is to cluster the geometric shape of the leaflets, and that it is hard to represent the shapes in a high-dimensional space (last paragraph of the introduction). I'm concerned that this could make readers misunderstand the data as shape models (a point-cloud dataset) before the description of the dataset in Sec. 2. - One major concern is whether the results are reliable: 1. The experiments shown in Table 1 compare several different network settings. This kind of vertical comparison is insufficient to support the claims made in the study. Please compare to other representation learning methods such as sparse coding (e.g. spherical K-means, dictionary learning) and dimension reduction (e.g. PCA, t-SNE). 2. This study did not provide a gold standard for shape clustering (though this could be difficult). The experiments measure the reconstruction accuracy. However, reconstruction accuracy depends highly on the decoder network. It is not convincing to claim that the clustering is correct, since even noise can be decoded into a normal image. - In the last paragraph of the introduction, the authors say 'it is hard to define a feasible metric describing the similarity of the valve shape in general.' However, the authors use the Jaccard coefficient and the Hausdorff distance to measure the reconstruction accuracy between the original and reconstructed images. This is a self-contradictory statement. Other comments: - The authors use 2D images to represent leaflet shapes; I'm concerned whether a 2D photograph is precise enough. Could a 3D scan, such as CT, MRI or an optical scanner, be more suitable for this work? Though this is not an issue to be addressed in this work. - The paper is not well organized. Details of training should be written more clearly. The hyper-parameters of the autoencoder and the reconstruction decoder should be stated more clearly for reproducibility. All architectures listed in Table 1 should be stated clearly in the experiments section, not only in the methods section. """,3,1 midl19_27_2,"""- The authors propose a clustering analysis for approximating the shapes of aortic valve prostheses.
- Authors proposed a representation learning method in latent space based on autoencoder for shape clustering of aortic valves, instead of pixel-based training. - It seems that the authors proposed a somewhat novel idea for a pragmatic application. - In Results and Discussion sections, the authors provide sufficient validation and discussions via observing the performance change according to the number of clusters or the structure of learning frameworks. - There is an insignificant difference between the performances of comparative methods, which is probably due to the small dataset. In spite of difficulties of acquiring the additional data, it can be argued that the minimum amount of data required for training the suggested learning model should be larger than the current dataset. - In experiments, the comparison was conducted mainly on similar deep learning models. Considering the small dataset mentioned above, it could be enough to construct the conventional feature-based clustering model via extracting the classic shape features, e.g. curvature and convexity, etc. Additional comparison with this conventional feature-based model would be better. - It would be also better to provide the further visual analysis of whether the shapes of same cluster data are actually similar and the shapes of different cluster data are actually different. """,3,1 midl19_27_3,"""- A clustering method based on features from a convolutional autoencoder is proposed to define clusters of similar aortic valve prosthesis shapes. - Interesting application and well described methodology. - It took me a while to realise what kind of imaging was used. This could be more clear from the abstract and/or title. - It would be good to have a more visual representation of the quality of the results, now only figures of the Jaccard coefficient and Hausdorff distance are shown. - Listing the resolution in mm/pixel instead of pixel/mm would be more intuitive.""",3,1 midl19_28_1,"""The submission addresses very relevant problems in CNN based medical image analysis. Computation time and memory constraint are important issues. The proposed framework deals with computation time and memory constraints via sparse image analysis and it furthermore enables to incorporate more context (another important issue in image analysis) in the network via graph CNNs. The results are impressive with quite a big improvement over U-nets, while the proposed architecture has much less parameters. I do have some concerns however regarding the experiment, which may bias the experiment in favor of the proposed approach (see comment below). Nevertheless, I very much appreciate the creativity of this paper and the ambition to solve the above mentioned challenges in medical image analysis. I recommend accept conditional to minor changes/clarifications. I put all my minor and major comments and suggestions in this section. The paper puts a lot of emphasis on context aggregation, however, to me it is not completely clear how this is achieved. The authors mention that they rely on graph CNNs on which they perform pooling (via graph diffusions), but there is no explanation of how this contributes to a higher contextual understanding of the data. I would appreciate it if this were somewhere in the document explained (preferably in the introduction). A related issue is the following. In the conclusion the following is mentioned: we showed that GCNNs can successfully mimic UNet-like encoder-decoder architectures of pooling global context information. 
I think that this sentence in a way downplays your own work. I don't think it was the goal to mimic UNets, but I also don't directly see how it mimics the context aggregation features of UNet-type architectures, other than that there is some global information analysis going on. UNets are hierarchical in nature (in this sense they are not unique; there are many other multi-scale analysis approaches in MedIA), and I don't think this hierarchical nature is apparent in this paper. As far as I can tell, the method works on only two levels (though quite effectively): the pixel level (structure head) and the global level (graph-based semantic head). Finally, regarding context aggregation: could you explain in what way context is exploited? I have the feeling that the semantic head has the function of telling the structure head ""hey, I am pretty confident that you can correctly predict this class in this region, but I am not going to let you contribute to the segmentation in this region"", but I'm not fully sure that this is the idea behind this split. In a naive way, you could also just let the semantic head output confidence scores of 1 for each location and each class (it is then standard Hough voting). It would be nice if the motivation for the design were better explained in the paper. In the related work section of the introduction, the transition to work on graphs comes a bit out of the blue. Up to that point there is no mention that the proposed work relies on graph CNNs. Perhaps the introduction could be improved by mentioning the general idea of the paper up front? Due to some missing details in the paper on graph CNNs, I went to study the references of this paper and found that the framing of some of these references in the introduction is not fully correct. It is suggested that the works of Henaff et al. [H] and Kipf and Welling [KW] aim for local support of spectral filters in response to the work by Bruna et al. [B], which supposedly does not have this property. In [B] the filters are in fact localized (localization is obtained through smoothness in the spectral domain), and the main contribution of [H] is not to enable local support (they do rely on results of [B]) but rather to describe theory for constructing graphs when they are not yet defined a priori; they additionally nicely present/summarize the framework for graph CNNs. A large part of [KW] is indeed concerned with locality of the spectral filters. Instead of relying on splines (as is done in [B] and [H]), [KW] rely on truncated Chebyshev polynomials based on the work of Hammond et al. (2011), and describe clear properties of this approach regarding the support size of the graph filters. There is a serious typo in the first equation. In the definition of the adjacency matrix, the division by 2 sigma^2 should be inside the exponential. If it is outside, as it is now, the sigma does not have any effect on the graph other than scaling all weights (this scaling is, for example, undone in the Laplace operator D^{-1}A and is also undone by simply scaling the graph convolution kernels). Small suggestion: in the second equation, one of the two weighting parameters can be omitted (a single parameter is sufficient to balance the two terms). Start of Section 2.2: could you explain why you designed the network in this way (parallel structure and semantic heads)? In principle you don't need the semantic head for Hough voting, but it does seem to improve the results. Some intuition would be appreciated. On page 5 you describe the pooling of features on a graph.
At first it was not apparent to me how this is done, but I believe the approach is a sort of equivalent of average pooling in classical CNNs (except for the down-sampling part, which is not done in this work). Perhaps this link, or some intuition, could be provided in the paper. Regarding the diffusion process, I would personally find an intuitive explanation more important than trying to describe the mathematics of it, especially because I have the feeling that the provided matrix L is incorrect (see also your own work, Hansen et al. 2018, for definitions of L). The part about the diffusion matrix on page 5 is unclear: you provide a matrix which I believe is usually referred to as the transition matrix in Brownian motion processes. This matrix could be used to define a Laplacian operator L = I - D^{-1}A, which in turn can be used to describe a diffusion process (p -> p + L.p). I have several problems with this paragraph: 1. there is probably a typo (a missing identity matrix), 2. the section does not describe how this matrix is used, and 3. nor does it provide intuition (e.g. you pool features by means of graph smoothing, similar to average pooling in standard CNNs). I had to dive into spectral theory on graphs to understand what you meant in this section. In the experimental setup (on page 6) you describe that class weighting was not applied. I think this choice could have a quite severe effect on your experiments. In neither of the networks in this paper do you deal with the imbalance of the labels. The fact that the U-net underperforms could be due to the lack of balancing of the data/losses. See also figure 3, where the small bladder is completely ignored by the U-net. Of course, it could also be that your method is more robust against this imbalance (possibly due to the sampling strategy). This would indeed be a good thing, but it is not addressed in the paper. For me this leaves the impression that I cannot really tell whether your method is intrinsically better, or whether it is just less sensitive to unbalanced data. The sigma parameter is set very small (0.1). This would mean that the Gaussians decay within a pixel's distance. Is there any connectivity left then? Or do you use normalized coordinates? Small typo in the results section: ""yields a higher score as all UNet"", as -> than. Finally, some additional questions: Is the computation time indeed reduced compared to, e.g., the U-Nets? What kind of graph CNN is used? (You mention some variations in the introduction but not which one you actually use; I suppose the same convolution type as the one in Kipf and Welling.) """,3,1 midl19_28_2,"""* This paper is clearly written with regard to purpose, methodology, and results. * This paper presents a novel method that uses a CNN for extracting sampling locations and a patch-based network with a GCNN semantic head and a CNN structure head (i.e., the proposed sparse structured prediction net) to solve the challenging problem of edge detection of multiple organs in medical images. * In the experiments, the authors showed that the proposed method, which utilizes the proposed network with the sampling points, outperformed conventional FCN (U-net)-based approaches. * In terms of the parameters for network training in the validation experiments, there are some parameters for which the reasons behind the chosen values are not explained sufficiently. Comments: - In terms of the losses for network training, the authors set the control parameters of the BCE and Dice losses to 0.001 and 1, respectively.
Please describe why these values were set to decrease the effect of BCE on the loss computation. Also, the authors should describe why class weighting was not applied to the class-specific loss. - In Figs. 3 and 4, the qualitative results and the quantitative results should be split into a separate figure and table, respectively. Also, I suggest adding a legend of the anatomical structures to the figures instead of describing them in the figure titles.""",3,1 midl19_28_3,"""The article proposes a novel structured prediction method for semantic edge detection in CT and X-ray images. A fully convolutional network extracts sparse sample locations from the original input images. A sparse prediction network takes patches at these sampling locations as input. This SSPNet contains a CNN path to produce edge features and a GCNN path to weight these patches. Hough voting is used to accumulate the predictions and obtain a dense semantic edge map. The authors evaluate the method on two datasets against standard baselines. The results from the experiments indicate a significant improvement over the baseline methods. Pros: 1. A novel deep learning method for pixel-level prediction by processing image data on sparse and irregular rather than dense grids. 2. Using few sparse samples for structure prediction reduces the memory and time limitations of edge detection tasks in medical images. 3. The experimental results evaluate the effectiveness of the work on two datasets. 4. The proposed model has 2.5 times fewer learnable parameters than the baseline (UNet-L), yet performs 1 and 1.6 percent better on the two datasets. 5. Figures 1 and 2 help to understand the article better. Minor comments: 1. How does the fully convolutional CNN influence the prediction of SSPNet? Can similar performance be achieved with fewer samples? Or can another sample selection mechanism make any difference? 2. Would increasing the number of training samples or doing data augmentation help the baseline methods (UNet)? """,4,1 midl19_29_1,"""The paper introduces an autoencoder-like network architecture to be used for atlas building/application purposes in an LDDMM setting. Essentially, a deep architecture is defined that (1) allows estimating an unbiased atlas/template of an image population during network training in an unsupervised fashion, and (2), when trained, can be used to estimate mappings between formerly unseen images and the atlas. To do so, an approximation of the conventional LDDMM atlas building objective is proposed, which is solved by the network/training process presented in the paper. The architecture itself consists of an encoder and a decoder part. The encoder maps an input image to a low-dimensional latent space, while the decoder maps a point in the latent space to a deformed version of the atlas that is most similar to the input image. It is important to note that the decoder is composed of three different components (1. a latent-space-to-momentum-field mapper, 2. an EPDiff solver, and 3. an atlas image warper) and only the first component is actually learned. Overall, the paper addresses two important problems (diffeomorphic image registration and atlas building) that have not gained much attention from the deep learning community so far. I therefore agree with the statement made in the paper that it is the first to introduce a deep learning-based atlas building method using LDDMM (DL-supported LDDMM registration itself was also used by Yang et al./Quicksilver).
The approach presented is quite interesting and well within the scope of MIDL, as it allows the integration of components of the well-known LDDMM framework (i.e. EPDiff integration) directly into network architectures to facilitate, for example, deep learning-based computational anatomy methods where the use/estimation of diffeomorphic mappings is crucial. Furthermore, I also like the fact that the authors actively support the idea of open and reproducible research by making their source code available on GitHub (I took a look, but did not review the code), including a PyTorch module for EPDiff. The evaluation presented can be characterized as somewhat preliminary, with only limited experiments (only conventional vs. new atlas building methods for affine and non-linear atlas building are compared) and no real quantitative results. However, the main problem I see with this paper is related to its clarity about a key part of the method presented. As I see it, the key part of the paper in terms of novelty is Sec. 2.2, which describes the new atlas building method and how it is solved by using diffeomorphic autoencoders. The introduction of Eq. 8 is easy to follow and the basic description of the autoencoder approach (bottom part of p. 4) is also intelligible. However, at least to me it is somewhat unclear how \overline{I} (the atlas image) is actually computed during the training process. Is it directly learned on a voxel/pixel basis, generated by applying the estimated transformations to the input images, or ...? Maybe I am missing something, but this detail is crucial and needs to be added to the paper or clarified if present. I also recommend describing the network architecture in more detail in Sec. 2.2, as it is hard to understand which parts of the decoder are actually learned and which are static without referring to Sec. 3. This could, for example, be done by moving parts of Sec. 3.2/3.3 to this section and by improving Fig. 1. To sum up, I like the paper and I think it should be presented at MIDL, but the part describing the training-based atlas building should be revised prior to publication. Pros: - LDDMM-based deep learning approach for image registration - Atlas building problem solved by training an autoencoder network - PyTorch code publicly available Cons: - Description of the novel parts of the method (partially) unsatisfactory - Preliminary evaluation""",2,0 midl19_29_2,"""This paper presents a deep learning-based approach to unbiased atlas construction based on LDDMM, which directly learns a diffeomorphic atlas deformation predictor from a set of images (alongside the deformable template) instead of regressing pre-computed momenta fields as in Yang et al. (2017). For this, the authors present a deep learning approach whose training replicates the unbiased atlas construction approach of Joshi et al. (2004). The maths are sound and the article is written well. The provided open-source implementations are a good reference for other researchers. My main criticism regards the motivation of the approach. Despite the expectation set by the title and abstract, the integration of atlas building with deep learning does not actually produce a machine learning model for atlas formation. On the face of it, the authors utilise deep learning methodology to minimize an objective function for atlas creation during the training procedure. The learned model cannot be applied to create an (unbiased!) atlas from new images, though this is suggested by at least the title of this paper.
What is learned, however, is a predictor of the initial momenta that maps an image to the deformable template derived during training, but the authors do not motivate such use and do not evaluate the performance of this template registration for images not used during training. With focus on the latter, the learned model should be directly compared to Quicksilver from Yang et al. (2017). In contrast to Yang et al. (2017), the proposed method does not require momenta that have been pre-computed by another algorithm (e.g., conventional LDDMM). In the discussion of closely related work, the authors do not discuss the method of Yang et al. (2017), but only in the conclusion draw a direct comparison to that method. Instead, the authors could motivate their approach through computational anatomy, where after training their method provides a deformable template that is representable for a given population, and a model that can predict the momenta of the diffeomorphisms that deform this template to new study images. In fact, the authors point this out in the conclusion: Integration of deep learning into the atlas creation methodology promises to enable creative new approaches to statistical shape analysis in neuroimaging and other fields. I would recommend reformulating abstract and introduction to motivate their approach in this context right from the beginning. Besides this critique, this paper is in my opinion of interest to the conference attendees and may spark some useful discussions. Following are minor remarks. The authors write in the abstract that the encoder network maps an image to a transformation and the decoder interpolates a deformable template. I disagree with these statements. Both encoder and decoder together map an image to a transformation, not only the encoder. The authors themselves write later in the method description notice that a diffeomorphic autoencoder amounts to a regular image encoder along with a decoder that maps from the latent space to a momentum vector field that is integrated via EPDiff to produce a diffeomorphism. This contradicts the statements made in the abstract. Why have the authors only used 25 out of 990 available brain images for evaluation?""",3,0 midl19_29_3,"""The authors present a method for constructing LDDMM atlases by combining convolutional neural networks and LDDMM using the momentum parametrisation. Whilst previous works have shown how to combine LDDMM registration and deep neural networks (Yang et al 2017, ""Quicksilver""), to my knowledge this is the first paper to demonstrate using this method to construct an atlas. The solution presented is elegant and has nice theoretical guarantees on the smoothness of the registration as a result of using the LDDMM framework. This is therefore a very important contribution to the literature, and may be of great practical importance. Overall I think the ideas in this paper give it the potential to be amongst the most interesting at the conference. In general the paper is very well written and clear. It has well-chosen comparisons with alternative methods that demonstrate convincingly that the proposed method achieves results that are as least as good as the conventional LDDMM method (with the important caveat highlighted below), whilst requiring just a single forward pass of the model at test time. 
I am also very pleased that the authors have made their code publicly available, and that they have separated this into a re-usable library and code to reproduce the specific experiments detailed in the paper. Overall I feel this is a novel and important methodological contribution but I have some serious concerns that would need to be addressed before the paper can be accepted. I think it is plausible that this could be achieved in the brief rebuttal period, and if so I would be happy to recommend this paper for acceptance. I have two very serious concerns about this paper and a number of other suggestions for improvement. Major Concern 1: Lack of Results Demonstrating Generalisation to Novel Images ---------------------------------------------------------------------------------------------------------------- In the discussion of the dataset, no mention is made of a held-out test set used to evaluate the model. The only numerical results in the paper are in Figure 2, which relate to the value of the objective function *during training*. I am therefore lead to conclude that the images presented in Figures 3 and 4 may well also come from the set of 25 images used for training, as there is no indication otherwise (there is a possibility that this is a misunderstanding). It is unreasonable to evaluate the performance of the model on the dataset that it was trained on, as the neural network may have overfit to the very small number of cases in the training set and the momentum fields it produces for novel may not be meaningful. The entire point of the diffeomorphic autoencoder model is that it can rapidly register novel images to the learnt atlas. Therefore it is of utmost importance that the paper include numerical results demonstrating the performance of the model on unseen images to show that the model has not overfit, and that all figures showing registrations are clearly from images that were not used to train the model so that they reflect the expected quality of the registrations on novel images. I believe that the paper cannot be published without this, however it also seems likely that the authors should be able to rectify this issue quite quickly, especially as there is plenty of unseen data available in the OASIS dataset. For this reason I am advising ""reject"" at this point in time, however I hope that I will be able to change this in the future if satisfactory changes are made. Major Concern 2: Insufficient Discussion Of Relationship to Previous Work and Omitted Citations ------------------------------------------------------------------------------------------------------------------------------------- The second serious concern I have is that whilst I believe there is considerable novelty in the proposed method the authors need to take far greater care to highlight their contributions relative to existing work, including some that is not cited in the submitted manuscript. Firstly, the authors should discuss the relationship between their paper and Yang et al. 2017, (""Quicksilver..."") in greater depth. There are important similarities between that paper and the submitted manuscript that are not discussed. Yang et al also use a neural network to predict the momentum parametrisation of a geodesic shooting LDDMM method directly from the input image, but this is not sufficiently acknowledged in the literature review, where this is presented as being entirely novel. From my understanding the key differences between Yang et al. 2017 are that a) Yang et al. 
rely on precomputed momentum estimates to supervise training of their CNN model but in the submitted manuscript they actually train their model by backpropagating through the differentiable EPDiff method and directly optimise the registration loss function b) The submitted manuscript also learns an atlas along with the momentum encoder whereas Yang et al. are only concerned with registering a single moving image to a single, known target image. These are both very important contributions but I would very much like to see the differences more clearly delineated in the manuscript. Furthermore, and more importantly, the authors do not discuss the relationship to the following paper: Unsupervised Learning for Fast Probabilistic Diffeomorphic Registration Adrian V. Dalca, Guha Balakrishnan, John Guttag, and Mert R. Sabuncu MICCAI 2018, pages 729-738 pseudo-url or pseudo-url This work is very close to the current paper in that both use neural networks to speed up diffeomorphic registration, though Dalca et al. do not include atlas learning as part of their framework. Furthermore there appear to be differences in the way that the two papers parametrise the diffeomorphic deformation and implement the solution of the resulting differential equation but unfortunately my knowledge of the underlying mathematics here is not sufficient to comment on this with authority in limited time. What is clear however is that the current paper needs to discuss its relationship to Dalca et al. in some technical detail. Assorted Minor Comments and Suggestions: -------------------------------------------------------------- Now on to some other suggestions that would improve the paper but should not prevent acceptance. To start with, a very easy fix - the Figure references have somehow become mixed up. The text frequently refers to Figure 3.2 and 3.3 but there are no such figures! In my opinion there is room for improvement in the schematic diagram in Figure 1. It would be clearer if the encoder network were explicitly drawn and the process by which the atlas is involved in the learning process made clearer. It is important that this figure is clear to give readers the best chance of understanding the novel and complex method quickly. Another concern is that the authors used only a very very small subset of the available OASIS dataset -- 25 out of nearly 2000 volumes -- to construct their atlas, but no justification for this is offered. It seems like their proposed method should be trivially scalable to any number of volumes, so why not use them? It occurs to me that this may be because the authors wanted a fair comparison with the standard LDDMM technique, and it would take a long time to use the standard LDDMM method on nearly 2000 images. This would be a good justification but the authors should state it explicitly. The authors claim that they ""did not observe need for [...] heuristics such as batch normalization"" (section 3.2, first paragraph). However using batch normalisation (or occasionally other sorts of normalisation) has become a de facto standard in neural networks because it is consistently observed to speed up training considerably and also improve generalisation performance. I would suggest the authors try using batch norm in future experiments, unless they are using very small batch sizes. 
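A minimal sketch of the kind of change suggested above (added for illustration, not code from the paper; it assumes PyTorch, and all layer sizes are arbitrary placeholders):

```python
# A 3D convolution block with batch normalisation inserted between the
# convolution and the non-linearity, as is common practice in CNNs.
import torch
import torch.nn as nn

conv_block = nn.Sequential(
    nn.Conv3d(in_channels=16, out_channels=32, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm3d(32),   # normalises activations per channel over the mini-batch
    nn.ReLU(inplace=True),
)

x = torch.randn(2, 16, 24, 24, 24)   # (batch, channels, D, H, W) dummy input
print(conv_block(x).shape)           # torch.Size([2, 32, 24, 24, 24])
```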
The authors claim that one potential reason the proposed neural network method may reach a lower value of the loss function on the training data than the standard LDDMM method is that mini-batch updates can be used in the optimisation process (section 3.2, second paragraph). It is difficult to assess this, however, as the authors do not state what mini-batch size they used in their experiments. The primary justification for using the Diffeomorphic Autoencoder to predict the momentum initialisation for EPDiff is that it should be considerably faster than having to perform an iterative optimisation method to register a new image to the atlas. This is alluded to in section 2.2, but there are no results that actually investigate this. The paper would be far more compelling if results were given for how much faster a novel image can be registered to the atlas using the diffeomorphic auto-encoder versus using the standard method. To be clear: I expect that their method is much faster; it has just not been demonstrated. Without this, the authors have not really demonstrated that their method has any advantage over the standard method. In the explanation of the momentum encoder model, it is quite unclear how the 64D latent code is transformed into the momentum vector field. The paper simply states that the latent code is ""followed by the output vector field of the network of size 3 x 83 x 118 x 110"". More detail is needed here. Are there some upsampling or transposed convolutional layers here, or just a fully connected layer followed by a reshaping? I briefly looked at the source code but was not immediately able to figure it out. The reader should be able to understand this without looking at the source code. This leads me to a comment on the encoder model. It seems to me that the purpose of including the bottleneck (the 64D layer) in the model is dubious. On the one hand, it allows you to do some nice interpolations in the latent space as shown in Figure 4, but is this really practically very useful? Maybe it is if you are looking to do certain types of shape modelling, but this is only briefly alluded to in the conclusion and not in much detail. It also provides a degree of regularisation, but it's not clear that this is necessary. On the other hand, it will likely reduce the ability of the model to match fine details of the two images. Why not instead use a U-Net-like model (or any image-to-image model without a bottleneck) to output the momentum vector field directly? This would enable the network to consider both global and local information in the input image when creating the momentum image, and would therefore probably be better able to match fine details and give a better registration. Dalca et al. (see above) use this approach. The paper would benefit from some justification of the authors' choice here, and the authors may like to consider this carefully in future work. """,2,0 midl19_30_1,"""The paper presents a semi-automated data generation pipeline and a deep learning (DL) framework to segment potentially cancerous areas in prostate biopsies. Six different models were trained using needle biopsies and generated prostatectomy data. The proposed framework has been validated on biopsy data. This is an interesting work which fits well within the scope of the conference. The paper presents a validation of a DL framework which was originally introduced in [Pinchaud and Hedlund, 2018]. The figures are clear, and the references seem adequate.
The paper is well structured, but I found it difficult to follow, as many technical details regarding the data generation are missing, as explained below. Also, the presented framework has not been compared to any baseline method. Suggestions for revision: 1. In Section 4.1, it is not clear how WOB areas are detected on H&E images. Is this done with manual segmentation? Details about the density estimation filter should be given. Why is it required to distribute the local information evenly within a local neighborhood of the image when applying the density filter? The authors should explain in detail how the heatmaps are generated. 2. In the ""Using consecutive slices"" section, which method was used to register the consecutive H&E stainings to the original H&E images? How does the performance of this registration affect the accuracy of the generated ground-truth data? 3. In Section 4.2, the number of pathologists who annotated the biopsy data should be specified rather than describing them as ""several"". How were the ground truths of these pathologists combined? What is the level of expertise of the pathologists who annotated the data and of those used for the comparison in Section 5.4? 4. The performance evaluation study would have been more robust if it had included a comparison to other segmentation approaches. """,3,1 midl19_30_2,"""- A relevant topic: I think the idea of using WOB as a biomarker towards automation is interesting and worth exploring. - The paper is generally well written and easy to read and understand. The authors have also provided more details in the appendix, which helps cover parts missing from the body of the main paper. I have some suggestions to improve the quality of the paper, listed below: - Some details seem to be missing (sorry if they are there and I overlooked them): for example, what is the final number of samples used for training? I can see the final number of 295 images, but I cannot find detailed information about the data division.
An evaluation on 63 biopsies was conducted and demonstrated the effectiveness of the generated data for training a DL model. - The paper is well written and easy to understand. - The topic is relevant and interesting. - The proposed method was thoroughly evaluated on clinical biopsy data. - The work utilizes data annotated using different strategies, acquired from different scanners, and with different characteristics (e.g. different Gleason scores). I would suggest the authors provide a table to clearly summarize all the information about the datasets. - What are the Gleason scores (GS) of the training and testing data? What would the performance be for data with different GS? I would suggest the authors also provide an analysis of data with different GS individually. - Despite an interesting topic and practical solution, the technical novelty of this work is somewhat limited. """,3,1 midl19_31_1,"""1. The paper attempts to address the classification problem in skin lesions and pneumonia in chest x-rays with a focus on 'paying attention' to the ROI. It claims that focusing on the ROI of the minority class improves performance in cases of high data imbalance. 2. The idea of forcing the Grad-CAM output to be in line with the bounding boxes is interesting. So is the idea of the 'inner' and 'outer' losses. 3. The variety of experiments performed is extensive, and the improvement in results makes a favourable case for the proposed method. 1. The authors' claim of improved performance on imbalanced data when attending to ROIs is backed solely by empirical evidence. At the outset, the improved performance can be attributed to higher loss values for the minority class, induced by the additional supervision in the form of bounding boxes (since L_a = 0 for the majority class, which doesn't have bounding boxes, eq. 1?). A more rigorous backing in this regard would be of interest. Otherwise, the novelty of the work is limited. 2. Attention is fully supervised in this case, and hence it should be made explicit that the term 'attention' here is not equivalent to its traditional counterparts in the literature [1] [2]. 3. The text seems to underestimate the effort of requiring additional annotations, even for the minority class. A dataset with 1 million examples of which 10000 are minority examples is still an imbalanced dataset. Also, it would be interesting to see whether the ratio of this imbalance is crucial. 4. Minor: L_g in the text above eq. 1 is undefined. A few errors in the text. [1] Oktay et al. 'Attention U-Net: Learning Where to Look for the Pancreas'. In: MIDL 2018. [2] Jetley et al. 'Learn to Pay Attention'. In: ICLR 2018""",2,1 midl19_31_2,"""A well-written paper with a clear motivation, interesting evaluations and a good amount of detail. The core idea of the paper is to optimize saliency maps during training in order to guide classification networks to attend to the expected image regions. According to the narrative of the paper, this aims at improving learning from few examples per class (although this should not be limited to class imbalance scenarios). Explicitly optimizing to attend to salient regions appears to be the main novelty of the paper. The proposed approach is independent of the choice of (deep) classification architecture, which of course is a nice property to have. The authors present a compelling analysis of how inter- and intra-rater variations, as simulated by bounding box tightness variations, affect the approach. The approach essentially solves the problem of having too few examples per class by using denser labels.
Here the training of classification models (which rely on image labels) is improved by incorporating bounding boxes. This limits the method's applicability to datasets with bounding-box labels, a domain in which object detectors are known to perform very well. One advantage of the method here, though, is that it allows this requirement to be relaxed, in the sense that it can still be trained on classification alone for images/classes which are not labelled with bounding boxes. (As a meta comment: similar performance gains can be observed in recent object detectors, which are typically trained using bounding boxes but can be improved by training on even denser labels, i.e. pixel-wise segmentation maps.) There are a few technical details that remain unmotivated or unevaluated: Why is ImageNet pretraining required here? Why was the CARE loss only used for fine-tuning and not during training from the start? What data augmentation (method 'DA') was employed? The appropriateness of the chosen metrics, recall and mean class accuracy, is not discussed either. The impact of the learned attention/localization on the classification performance was evaluated. It would, however, also be interesting to evaluate the attention/localization itself in terms of appropriate metrics such as average precision, instead of only showing a few test-set examples in Fig. 2. There exists a body of literature that employs saliency techniques (in part also building on Grad-CAM) in order to perform localization in the image space. Although these works appear not to explicitly optimize the obtained saliency maps, they could be discussed in the related work section. """,3,1 midl19_31_3,"""- The authors show a new method to deal with class imbalance by adding a new loss that forces the network activation into a previously labelled ROI. - They make clever use of a visualization technique (Grad-CAM) within the learning process. - They present a pretty good validation of their technique and a comparison with other approaches to dealing with class imbalance. - It is a well-written and well-presented paper. - I am not sure the technique should be called attention, since it is fully supervised. - The authors claim that their selection of bbox or alpha parameters does not change the final result. This is clearly not the case for the recall on the pneumonia dataset. I think this is likely because, in the skin cancer dataset, their CARE method does not bring a huge improvement with respect to the other augmentation methods (Table 1), since most of the images are centred around the area of interest anyway (as shown in Fig. 2). However, in the pneumonia dataset, selecting relevant ROIs makes a lot of difference because the lesions are multiple and spread around the image. Therefore they will be more affected by the ROI selected or the weight of the attention loss.""",3,1 midl19_32_1,"""The paper is well and clearly written. It is somewhat original, since I have not yet seen networks reconstructing voxel-based shapes from landmarks and vice versa at that resolution. The resolution is impressive. The abstract and introduction are well written and well motivated. The paper is slightly above the page limit, but I think that is adequate. The paper is reproducible, especially since both code and data are or will be made available. Edit, February 11th: I changed my review from reject (tending to strong reject) to accept. The authors did a lot of work to actually address my concerns.
My major concerns were addressed and I think the manuscript should be in much better shape now. The things that are still not intuitive to me are: - ""We further clarified that the main clinical application of this method is in mandibular shape reconstruction for surgical planning where the normal pre-morbid mandibular form is unknown."" - Why do you have landmarks available for that task? For this task I would expect a full skull model estimating the mandible from the full skull. - ""Lastly, readers will be instructed on the fact that shape generation from incomplete observation is, intrinsically, ""an ill-posed problem"" and, theoretically, there cannot be a unique solution, which results in a one-to-many mapping."" - I agree with that, but this paper does not model it as a one-to-many mapping (like other works do by modeling the posterior distribution). Coming from the shape modeling community, I find that the paper makes some odd design choices, and especially the validation of the approach should be improved. My main criticism is the choice of landmarks as the latent representation. This leads to a one-to-many mapping to shapes. The shape modeling community tends to model a posterior distribution in such a case. Usually the aim is to learn this latent representation, and I only see drawbacks in this explicit choice. Compared to other approaches, the task is fully supervised, and therefore the spatial resolution is less impressive than for an unsupervised method that learns the latent representation. The statement that this resolution has not been reached before should at least be put in the context of ""Octree Generating Networks: Efficient Convolutional Architectures for High-Resolution 3D Outputs"" (ICCV 2017), which presents a convolutional decoder reaching a resolution of 512^3. The task of reconstructing a shape from landmarks is well studied, e.g. by the modeling of posterior distributions. The weakest part of the paper is the experiments. It is hard to estimate the performance of the approach based on those experiments, since the experiments and visualizations that seem obvious to me (see below) are missing. I honestly expect its performance to be pretty bad. I would suggest further work on the paper, especially improving the experiments. Here are some detailed comments and suggestions: Introduction: - The stated limitations of classical SSMs are not fair: not all models are limited to variation by principal modes - a lot of models allow some additional deformations that are regularized not by the statistics of the training data (e.g. Gaussian processes). - The statement that the mandible is one of the most complicated and variable anatomies of the human body is weak - it is not obvious to me why that is the case. I would also not agree that cars, chairs and tables are well-formed shapes. The challenges are different; chairs or teeth, for example, have the challenge of adding or removing legs/roots. - The choice of the network architecture looks arbitrary. Choices in the methods part are not motivated, and it basically only contains the architecture and the loss functions. The sentence ""we experimented with many deep neural architectures, one of which is depicted in Figure 1"" is perhaps honest, but it supports a trial-and-error approach rather than a deeper idea behind the architecture. - Section 3.3 is named Experiments but contains a description of the chosen latent space. Since the learning is fully supervised, I would expect that to be part of the methods section.
The actual experiments performed are described in the results section. - The results section contains two different tasks, landmark estimation and reconstruction. I think additional structure with titles would improve readability. - Table 1 shows the reconstruction performance given landmarks. The presented values appear extremely high to me. Instead of comparing it to a proper baseline, it is compared to the task of segmentation, which does not make sense in my eyes. As a simple baseline I would propose to add the thresholded average of the original voxel maps or a reconstruction based on the average landmarks. This would indicate if and how much better the reconstruction is than just taking the average of the data. - The average fiducial-to-surface distance measured was 1.89 mm - this again feels quite big. Other publications working on mandible landmarks show the performance per landmark - this could perhaps be added to Figure 2 (since some landmarks are not well defined, like the one on the tip of the front teeth, which is not available in the full dataset). - For the average surface distance, the 1.2 mm should also be set in the context of using the average as prediction. - The shape modelling community came up with some measurements of modeling quality (generalization, specificity, compactness). A full loop would therefore be interesting: New unseen shape -> estimate landmarks -> reconstruct shape. - Figure 3 is missing the landmarks - without the given landmarks those reconstructions don't help in estimating the quality of the reconstruction. - The landmark reconstruction performance of 3.84 again is hard to evaluate without comparison or context. Since no values to compare are given I had to search for one - so I don't know if the comparison is fair, but ""Deep Geodesic Learning for Segmentation and Anatomical Landmarking"" (TMI 2018) estimated landmarks from images with a segmentation step beforehand - they reach ~ 1mm. Adding a figure to allow a qualitative estimation of the quality could help here. Again I would propose to add a landmark-wise number for this to Figure 2.""",3,1 midl19_32_2,"""* Deep model to generate high-resolution (140^3) mandible images from the set of surface landmarks (29 landmarks) * It is not clear why the f(V) network (auxiliary network) is required in this image generation task. For example, because the input Z is the coordinates of the surface landmarks, the f and g models can be formed as a cycle model for cycle consistency. * More training samples are desirable. Currently the number of training samples is just 87. * It is desirable to show the input landmarks together in Figure 4. This may make it easier to understand the mapping between the landmarks and output images (mandible shapes) learned by the g model. * Comparison with the segmentation methods is very confusing. It is recommended to measure the surface distances between the generated model and the surface mesh from which the input landmarks are extracted, for the accuracy evaluation of the surface generation. """,2,1 midl19_32_3,""" Summary: Authors present AnatomyGen, a CNN-based approach for mapping from low-dimensional anatomical landmark coordinates to a dense voxel representation and back, via separately trained decoder and encoder networks. The decoder network is made possible by a newly proposed architecture that is based on inception-like transpose convolutional blocks. The paper is written clearly. Methods, materials and validation are of a sufficient quality.
There are certain original aspects in this work (latent en-/decoding, inception-based decoder network, latent space interpolation, generalization to previously unseen shapes etc.), but the work may not be as original as the authors suggest, since they may not be aware of a very similar work (see Cons), where some of the discussed concepts have already been proposed and explored. - Authors explicitly state that the work is not intended for segmentation, but many previous shape modeling works (including SSMs) were used as regularization in segmentation. Authors could comment on how their model could be incorporated into (e.g. deep) segmentation approaches, because I do not see an immediate way to do that without requiring the (precise) image-based localization of mandible landmarks in a test volume. - I would recommend weakening or at least toning down certain ""marketing"" claims like ""3 times finer than the highest resolution ever investigated in the domain of voxel-based shape generation"", or ""the finest resolution ever achieved among voxel-based models in computer graphics"". First, it is not fully clear where this number 3 comes from, and second, the quality of the work speaks for itself. Further, there is always the chance that authors are not aware of every piece of related literature (in all of computer graphics), as might be the case here. - Authors claim to introduce many concepts for the first time, such as the ""first demonstration that a deep generative architecture can generate high fidelity complex human anatomies in a [...] voxel space [from low-dimensional latents]"". However, I am aware of at least one work where such concepts have been proposed and explored already. CNN-based shape modeling and latent space discovery was realized for heart ventricle shapes with an auto-encoder, and integrated into Anatomically Constrained Neural Networks (ACNNs) [1]. Their voxel resolution is only slightly smaller than in this work (120x120x40), with a similar latent dimensionality (64D, here: 3*29=87). Smooth shape interpolation by traversal of the latent space was also demonstrated, and some of their latents also corresponded to reasonable variations in anatomical shape, without being ""restricted"" to statistical modes of variation as discussed here. - Compared to the proposed work, where latents represent clinically relevant mandible landmarks, an auto-encoder approach as in ACNN is more general: relevant landmarks as in the mandible cannot be identified for arbitrary anatomies, and a separate training of encoder and decoder as proposed here crucially depends on a semantically meaningful latent space with a supervised mapping to the dense representation (e.g. hand-labeled landmarks vs. voxel labelmaps). In contrast, ACNN auto-encoders train their encoder and decoder in conjunction. How do the authors suggest applying their approach to anatomies where it is impossible (in terms of feasibility and manual effort) to place a sufficiently large number of unique landmarks on the anatomy (e.g. smooth shapes, such as the left ventricle in ACNN)? - Authors suggest that their solution ""is not constrained by statistical modes of variation"", as e.g. by PCA-based SSM methods. While I agree that the linear latent space assumption of PCA is too simplistic and the global effect of PCA latents on the whole shape is often undesirable, the ordering of latents according to ""percent of variance explained"" is actually desirable in terms of interpretability.
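To make this concrete, here is a minimal sketch of the kind of PCA-based SSM baseline I have in mind, where the latents are ordered by explained variance (purely illustrative: the data array below is a random placeholder, not the authors' mandible data):

```python
# Minimal PCA-based statistical shape model (SSM) sketch, for illustration only.
# `shapes` stands for an (n_subjects, n_points * 3) array of corresponding,
# rigidly aligned surface points; here it is just random placeholder data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
shapes = rng.normal(size=(87, 300))      # placeholder for aligned training shapes

ssm = PCA(n_components=0.95)             # keep modes explaining 95% of the variance
ssm.fit(shapes)
print(ssm.explained_variance_ratio_)     # latents ordered by "% variance explained"

# Encode/decode a shape through the linear latent space:
b = ssm.transform(shapes[:1])            # shape coefficients (the latents)
reconstruction = ssm.inverse_transform(b)
```

Such a baseline is restricted to linear modes, which is exactly the limitation the authors argue against, but the explicit variance ordering is what makes its latents easy to interpret.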
[1] Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, et al. Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation. IEEE Trans Med Imaging. 2018;37(2):384-95. """,3,1 midl19_33_1,"""The submission suggests an approach for better utilization of the large amount of unlabeled data when applying deep learning methods to digital pathology slides. The suggested approach has two steps: (1) train a CNN to segment glomeruli using bounding box segmentations as labels; (2) predict segmentations on separate data and use these as labels for training another CNN to segment glomeruli. The problem is highly relevant and I like the idea of using the distribution of glomeruli shape and size as criteria for filtering segmentation suggestions in Stage 2. I have several issues with the submission. Overall, I find the presentation confusing and unclear and it is possible that most of the issues arise from the lack of clarity. The first paragraph of section 2.1 is a good example of what I find confusing and unclear. What characteristics are you exploiting? What is your approach very similar to? What are BBs? (Keeping track of custom abbreviations is tricky. I had to go back in the text, because I forgot what it referred to.) Another example is Figure 5, where missing captions for the subplots make it impossible to understand without reading the text simultaneously. Additionally, all four plots seem to have their own scaling and extent of the y-axis, making comparisons tricky. I also find it problematic that 4 out of 9 references are to your own work. Are you the only ones working on segmentation in digital pathology? The main result is comparing a CNN trained on 8 images with bounding boxes (IS1) to a pair of CNNs trained on the same 8 images with bounding boxes + 9 images without bounding boxes (IS2). As I understand it, Figure 5 (a) shows F-score for different combinations of number of images and number of annotations per image using images from IS1. The F-score is then calculated on five test images (IS3) with the best results being almost 0.90. You then use this result to select the number of images and number of annotations and retrain the model, but this time you get less than 0.80 in F-score. Where does this difference come from? More importantly, if my understanding is correct, you have used performance on the test set (IS3) to select a model and then you report performance of this model on IS3. This is methodologically wrong, likely overestimates performance and invalidates your comparison. If we ignore the above problems for a second, we are left with the conclusion that you get almost 0.90 F-score in Stage 0 where you train on IS1, and slightly lower F-score when you train on IS1 + IS2. So it seems best to just train on the bounding boxes. I think you might have used contour segmentations in Stage 0 (otherwise your conclusion does not make sense), but it is not clear to me that this is the case. It is not clear what the contributions are in this submission. In the abstract you promise ""a method for optimizing the overall trade-off between (low) annotation effort and (high) segmentation accuracy"". I do not believe you deliver on this promise. You do not present a method that optimizes this trade-off. You conclude that combining bounding box segmentations with unlabeled data works (almost) as well as using contour segmentations.
This just shows that we can reduce annotation effort ""for free"", but does not provide a method for optimizing the trade-off. I also find it unclear exactly how the proposed method is different from the referenced related work. In the introduction of the methods section you state that you adapt the method from Gadermayr et al. (2019), and promise details in section 3. I see no mention of this in section 3 and it is not clear to me what you have done. I have a similar problem in section 2.1 where you adapt the method in Khoreva et al. (2017), but you do not clearly state what is adapted. It seems to me that the actual main contribution is the constraints applied in stage 2 (Cues 2), size and shape of glomeruli, yet these are not clearly described. Finally, I do not agree with your statement in the conclusion that you ""work with noisy easy-to-collect labels"". As I understand it, you derive bounding boxes from contour segmentations by fitting the smallest rectangle that contains all of the contour segmentation. This implies that you have weak labels without noise and perfect accuracy and precision (assuming your ground truth segmentations are 100%, which they most certainly are not). What you lack is detail. I suggest you investigate how important the quality of bounding box segmentations is, by either using segmentations from multiple annotators or by adding random shifts and scaling.""",1,1 midl19_33_2,"""This paper addresses the problem of histological image segmentation. As annotations in histological images are costly to obtain, the authors consider weakly supervised as well as unsupervised learning. Their approach is based on Khoreva's approach that leverages bounding boxes instead of precise pixelwise segmentations to feed a segmentation CNN. Their proposal is a cascade of two segmentation models that make use of 'cues', which are statistical rules applied on the area of the segmentation results. The first model is trained with BBs and iteratively improved thanks to the cues. The second model is trained with the results of the first model and iteratively improved thanks to some other cues. Experiments include accuracy results depending on the number of iterations (for both stages), and comparison to a fully supervised network. The paper is well written. It presents an interesting contribution to the weakly supervised segmentation of histological images. - Image patch size is set to 492. How was this value set? - Are the images processed in the RGB space? - Results are given on a patch basis; would it be possible to give some accuracy image-wise or glomeruli-wise? Minor comments: - please specify what training set is used to train the fully supervised CNN, IS1 and IS2? - captions of Fig 5 could be improved (e.g. (a) stage 0, (b) stage 1, (c) and (d) stage 2). The general caption could also better reflect the figure content. Although not a big deal, this might help when skimming through the paper. - in Fig 4, one can see the number of FP decreasing, however one cannot distinguish the evolution of green/blue areas (too small). - typo in conclusion: automation (...) demandS The 'cues' are specific to the application at stake and rely on some hard-constrained statistics, which are assessed from the data. They seem to be too constraining especially in Stage 2, as acknowledged by the authors. """,3,1 midl19_33_3,"""This paper proposes an iterative two-stage approach based on weakly supervised and unsupervised training stages for histological image segmentation.
The paper is clear, well organized, well written and easy to follow. The contributions are clearly stated and a thorough literature review is presented that allows a good insight into the stated contributions. Clever design choices are made, such as cues 1 and 2 in stages 1 and 2, respectively, that allow improved, promising performance. Evaluation on the test images (IS3 - 5 WSIs) is missing and should be included to give more insight into the proposed approach. Section 4, Stage 2: Why is the sixth iteration of Stage 1 used? Please comment on this selection. And why is the 10th iteration not used/reported, where the STD is comparatively low? Figure 5: A detailed caption is missing, making it difficult to follow the four plots. Explain what each plot shows. For each plot, use the same range for the y-axis, especially for (b), (c), (d), probably from 0.2 to 1.0. Page 9, line 1: The scores of Stage 2 (Fig. 5(b)) --> The scores of Stage 1 (Fig. 5(b)) """,3,1 midl19_34_1,"""1. Finding appropriate mixing ratios at the layer scale for multi-task model merging in an adaptation stage is a novel approach 2. The model was tested on an appropriate dataset and shows an improvement over previous methods 3. The paper is well-written and clear 1. It would have been valuable to see the distribution of alphas (mixing ratios) that were learned in the adaptation stage for the experiments 2. How does fine tuning affect the optimal mixing ratios? Is alpha still close to the optimum after fine tuning? One imagines that iterating between the adaptation and fine-tuning stages until alpha convergence could give a superior result 3. In the case of larger data sets (100%), the model shows only a marginal improvement over existing methods. There are no error bounds on the accuracies so it is difficult to judge whether this is a statistically significant difference Other issues: - Table captions should appear above table - Page 3, last line: Q is not defined - Introduction: cause -> because """,3,1 midl19_34_2,"""The main idea of this paper is to utilize the learned model from previous T tasks to help the T+1 task. In this paper, the authors used the brain segmentation tasks to leverage the brain tumor detection task. The topic is useful for the community and the idea is interesting. The brain segmentation tasks are used as T tasks to help the brain tumor detection as the T+1 task. The adaptively weighted strategy was proposed and evaluated against the equally weighted strategy. The network is split into task-shared layers and task-specific layers. The multi-stage (adaptation and fine tuning) method is much easier to train compared with single-stage training. Only three tasks (basically 2) are employed in this study. Therefore, the multi-task learning idea could outperform the T-IMM. For example, if we have M output channels for segmentation and N for tumor segmentation, we can define a single U-Net with M+N output channels to learn from both tasks. Unfortunately, such a strategy was not evaluated. The method used the Fisher information to initialize the parameters for the T+1 task using Eq. 2. To make it work, all T tasks and the T+1 task should be similar to each other or have similar underlying true distributions. However, the T+1 task in the paper is brain tumor detection while the T tasks are brain segmentation. That might not lead to a good initialization of the T+1 task from the T tasks using Eq. 2. The actual implementation of Eq. 3 is not clear in the paper. For example, how is A optimized in a deep network? Is that end-to-end training?
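For what it is worth, here is a minimal PyTorch-style sketch of how I would expect such per-layer mixing ratios to be optimized end-to-end (this is my own assumption about what Eq. 3 might look like in code, not the authors' released implementation; shapes and names are illustrative):

```python
# Hedged sketch: a conv layer whose weights are a learned convex combination of
# frozen, pretrained task-specific weights theta_1..theta_T (my reading of Eq. 3).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedConv(nn.Module):
    def __init__(self, pretrained_weights):
        super().__init__()
        # (T, out_ch, in_ch, k, k); kept fixed during the adaptation stage
        self.register_buffer("thetas", torch.stack(pretrained_weights))
        # one mixing logit per pretrained task; this is what would be optimized
        self.alpha = nn.Parameter(torch.zeros(len(pretrained_weights)))

    def forward(self, x):
        mix = F.softmax(self.alpha, dim=0)                      # convex mixing ratios
        weight = (mix.view(-1, 1, 1, 1, 1) * self.thetas).sum(dim=0)
        return F.conv2d(x, weight, padding=1)

# Example: merge two pretrained 3x3 conv layers; only the alphas get gradients.
layer = MixedConv([torch.randn(16, 8, 3, 3), torch.randn(16, 8, 3, 3)])
out = layer(torch.randn(1, 8, 32, 32))                          # -> (1, 16, 32, 32)
```

In such a formulation only the alphas (one small vector per layer) would receive gradients during the adaptation stage, which makes the optimization cheap and end-to-end; whether this matches the paper should be clarified by the authors.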
In Figure 2, the batch-norm and instance-norm are defined as task-specific S while the common layers P are convolutional layers. However, batch-norm layers could also be common while convolutional layers could also be task-specific. The training sizes for Tasks 1 and 2 are relatively small for a good initialization of a segmentation network. Meanwhile, they are much smaller than Brats. Therefore, when using 100% data, the improvements are not quite large. 4% and 8% of Brats data are used in the evaluation to show the advantages of the proposed method. However, the Dice is relatively low (even if better). So, a more meaningful and persuasive result could be, e.g., if we use 50% of Brats, T-IMM hits 0.81 and is much better than the traditional way. The overall performance of the proposed method has not been shown to be superior compared with the state-of-the-art performances on Brats. Also, state-of-the-art benchmarks are not evaluated. The training and validation details are not well described. For example, the network structure, epoch selection, hyper-parameter selection, etc. Without such information, it would be difficult for other researchers to use the proposed method. """,3,1 midl19_34_3,"""- Transfer learning and dealing with small datasets is an important area of research - The paper proposes a novel method, enabling pretraining on several different tasks instead of only one dataset (e.g. ImageNet) as is done most of the time - Results show a clear performance increase on small datasets - Proper experiment setup and validation - Clearly written and comprehensible - Code is openly available - Little comparison to other state-of-the-art methods for transfer learning. Only compared to IMM, which is very similar to the proposed T-IMM. Comparison to (unsupervised) domain adaptation methods would also have been interesting (e.g. gradient reversal (Ganin et al. 2014, Kamnitsas et al. 2016)). - Method only evaluated on one dataset (BRATS). Often new methods are manually ""overfitted"" to one dataset. When used on another dataset they do not show gains anymore. The medical decathlon (pseudo-url) would have provided easy access to more datasets and tasks. Minor: - Testing for statistical significance is only shown in the appendix. It shows that for ""100%"" T-IMM actually is not significantly better than most of the other initialization strategies. This should also be shown in table 2. The way table 2 is presented at the moment, it seems like T-IMM is better than all methods also for ""100%"". But the higher performance is not significant. - How is training till ""convergence"" (section 4.3) defined? - Not 100% clear if the IMM method used in the experiments is the method described in section 3.2 (alpha=1/T)? - in section 5: ""Table 2 shows, that both IMM and T-IMM..."". I guess this should actually be table 4. - Figure 1 could have been a bit more clear """,3,1 midl19_35_1,"""This paper presents a stain-transforming cycle-consistent GAN for improving image recognition in histopathology. There are several contributions from this paper: 1. Presenting the GAN method for stain transformation, which is straightforward for GAN methods. 2. Introducing a modified overlapping strategy to remove tiling artifacts occurring in patch-based sliding window approaches (see the sketch below). 3. Cross-center tissue segmentation has been validated using stain transformation. Overall, this paper is well written and experimental results are extensively validated.
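On contribution 2: this presumably amounts to some variant of overlapping sliding-window inference with averaging of the overlapping predictions. A generic sketch of that standard scheme is given here for readers; it is not necessarily the authors' exact strategy, and the patch/stride values are placeholders:

```python
# Generic overlapping sliding-window inference with averaging of overlaps,
# the usual remedy for tiling artifacts; not necessarily the paper's exact method.
# `model` is assumed to map an (h, w, C) patch to a same-sized output patch.
import numpy as np

def sliding_window_predict(image, model, patch=512, stride=256):
    H, W, C = image.shape
    out = np.zeros_like(image, dtype=np.float32)
    weight = np.zeros((H, W, 1), dtype=np.float32)
    for y in range(0, max(H - patch, 0) + 1, stride):
        for x in range(0, max(W - patch, 0) + 1, stride):
            tile = image[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] += model(tile)
            weight[y:y + patch, x:x + patch] += 1.0
    # Border strips beyond the last full window are ignored here for brevity.
    return out / np.maximum(weight, 1e-8)
```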
The evaluation metrics for stain transformation, including SSIM and the Wasserstein distance, were not explained in detail. Please cite related references and provide detailed explanations. In Table 1, the depth number affected the performance significantly; is this also the case for the cycleGAN baseline? The results show that with stain transformation, the augmentation did not improve the segmentation performance. This is quite interesting. From my perspective, augmentation and stain transformation are kind of complementary. Please show more cross-validation or cross-center results to draw the conclusion. """,3,1 midl19_35_2,"""Although the technical novelty of the presented work is not high (using a CycleGAN to ""style transfer"" digital histopathology slide images), this paper is an excellent example of using an established method in an applicable manner for a medical application. They have done it while designing, running and presenting a very well-thought-out set of experiments and evaluation metrics. I just enjoyed reading the paper! The presented results show that the proposed CycleGAN achieves better performance than the common solutions, while boosting the performance of the segmentation network. 1. It's a known fact that in the medical field the lack of (training) data is a major limitation to evaluate new methods. However, in this work, the small number of centers (only two) and the fact that the method was only trained to transform 1=>2 and not vice versa is a big drawback. I urge the authors to collect more data, from more centers, while using cross-validation to evaluate their method to the fullest. 2. The authors did not compare their results to the other CycleGAN methods they cited (Gadermayr et al., 2018; Shaban et al., 2018).""",3,1 midl19_35_3,"""The paper presents an interesting idea of using cycle-consistent GANs for stain transfer in histopathology. There are several minor contributions presented by the authors to histopathology image analysis (mostly applications): + applying cycleGAN for stain transfer in histopathology for a segmentation task; + sliding through the whole image to reduce tiling artifacts (which are apparent when performing this task in a tile-by-tile approach); + a limited but promising cross-center evaluation. The paper is also clearly written; the structure and the content are easy to follow. -- it is not clear why the Wasserstein distance is chosen as a quality assessment to measure differences between image histograms. Elaborate on this, or provide a reference for this choice. Why not any other distance between histograms? -- it is also not clear why the full results of the segmentation network are not provided; only the average Dice overlap between all classes is provided. -- the presented validation is probably sufficient for a conference paper, but it would be very interesting to see a comparison to the papers cited by the authors for stain transfer - (Shaban et al., 2018; Rivenson et al., 2018). -- the authors did only one-way cross-validation (AMC to RUMC transformation) due to limited training segmentations, so the conclusions should be somewhat scaled back to the claims that are sufficiently supported in the paper. -- no quantitative results comparing the tile-by-tile approach and the proposed sliding-through-whole-image approach. minor: - provide full form of WSI in the text (not only in abstract) - section 3.1: a gap between ""twenty four"" is missing - SSIM is not a metric in the mathematical sense, please clarify.
- SSIM also has two parameters; provide them. - the authors use CycleGAN, cycleGAN - make it consistent throughout the paper """,3,1 midl19_36_1,"""This paper proposes a deep-learning-based approach for dynamic pacemaker artifact removal. In the context of deep learning, the proposed method is somewhat novel. From the point of view of medical imaging, the paper is very interesting and valuable. --The paper is not easy to follow. --The method is incremental and results are very limited. -- There are several hyper-parameters in your method, such as the patch size, etc. Discussion and more analysis of these parameters are needed.""",4,1 midl19_36_2,"""The paper proposed a deep-learning-based pacemaker artifact removal method. Different from standard MAR procedures, the authors segmented pacemaker leads in the projection domain, to address the perturbation of the metal shadow caused by cardiac motion. The method showed good performance on dynamic pacemaker removal with solid validation. - The design of dataset-splitting 6:4:4 in supervised learning (sec 3.2) is reasonable, keeping datasets independent. - It is also thoughtful to consider the foreground-background class imbalance with patch sampling. - The implementation of a single decoder structure with skip connection of only the center slice information reduces parameters. The paper targets an interesting and useful research problem. It is of interest to see more details and discussions of this metal artifact removal pipeline. The paper is well written, except that some parts require reading back and forth. For example, an ensemble of 5 CNNs is detailed in Sec 4, which is helpful to understand the pipeline (Sec 3.3 (a)); Test data with pacemakers (Sec 2) can be confused with testing datasets with synthetic metal leads (might specify real test data, to distinguish from synthetic test data). For evaluation on real data, a comparison with a standard MAR approach is lacking (Fig 6). To evaluate newly introduced artifacts from false positives (Sec 4.1 end), real data without pacemakers or other metals would be a good scenario to test the model's reliability and to quantify false positives. It is not clear if the projection data of size 128*672 are resampled to 20*20 as the input of the U-Net for prediction. How does the resampling affect the accuracy? """,4,1 midl19_36_3,""" Summary: A method for metal artifact removal in CT is proposed using a U-net based CNN architecture to segment the metal artifacts in the projection data, after which in-painting is performed and a new CT image is reconstructed with reduced metal artifacts. The method is trained on clinical data in which synthetic pacemaker leads are introduced. In the end, the method is also tested on clinical data that contains real pacemakers. - The paper is understandable and clearly written - The segmentation method itself is not very novel; a U-net based architecture is used for segmentation. However, the application to sinograms is novel and very interesting. - The use of sinograms for segmentation has potential to generalize to other segmentation tasks. - The evaluation on a clinical dataset with real pacemaker leads is interesting, especially considering that the method is trained with data that contained only synthetic pacemaker leads. - It is unclear why different thresholds are used for background and foreground (Table 1). - Are the synthesized pacemaker leads based on non-contrast enhanced CT scans? In Fig. 2, the first picture looks like a non-contrast enhanced CT scan, is this true?
If yes, why weren't contrast-enhanced scans used to synthesize training data? - If possible, it might be interesting to also show the Dice coefficient, sensitivity, specificity and AUC of the clinical test data with real pacemaker leads. This might give a better indication of how the method performs on real clinical examples. - Would the method also work on non-contrast enhanced scans? - How does the method deal with calcifications and stents in, for instance, the coronaries? - The use of the term data set is a bit confusing in combination with terms such as training, validation and test set, and reference, target and test data. Probably the authors refer to one CT scan and the corresponding projection data as one data set, but perhaps in the future clarify exactly what is meant. """,3,1 midl19_37_1,"""This work proposes a reinforcement learning (RL) strategy for tracking elongated structures, specifically neural axon structures from two photon microscopy 2D images. The paper is well structured and easy to follow. To my knowledge, the specific idea of using reinforcement learning to track elongated structures given a seed point is original, and the authors propose an implementation using a state-of-the-art RL technique utilizing deep CNNs for the path predictions of the actor and the prediction of the expected risk in the value function. As such it follows recent successful ideas proposed by Mnih et al., 2015 (please note that reference Mnih et al is missing the publication year), and adapts them to the tracking task. A strength of this work is that by using RL, the authors are able to train their tracking algorithm from an entirely synthetic dataset, and show that this trained agent is in principle able to solve the tracking task on their real-world two-photon microscopy image segmentation/tracking task, which depicts the axons of a mouse somatosensory cortex. Another strength of this work is the clear description of their algorithm (pseudocode), which makes it very likely for others to reproduce their results, as well as the fact that the authors promise to release their training data and code to the public in case of manuscript acceptance. In my opinion, there are a number of stronger issues, mainly regarding the experimental evaluation, that diminish the scientific value of this work: - While the overall idea of the manuscript looks interesting, the evaluation is very much limited regarding choice of dataset and general applicability of the proposed original concept. The introduction is written in a spirit that claims the proposed method is able to (quote) ""alleviate the need for hand-engineered trackers for different biomedical image datasets"". This is not shown in the experiments, since a single real-world dataset containing thin, elongated structures is used, after training from a synthetic dataset that looks very similar to the kind of information expected to be visible in the real-world dataset. To evaluate the claim that the proposed method is generic, and a kind of meta strategy for learning how to track thin, elongated structures, performance on different, additional datasets would need to be shown. As it is presented in the manuscript, unfortunately only a very limited experimental evaluation is given, from which I do not get the impression that a novel concept has been developed, but solely that a very specific problem has been solved with a method, which may be overly complex for the task at hand.
- Reinforcement learning has been used for medical image analysis tasks, e.g. the work of Ghesu et al., PAMI 2017 and Ghesu et al., MIA 2018, where the goal of localizing landmarks by letting an agent follow appearance information in volumetric CT data until a multitude of different landmark locations is reached reminds me a lot of the work proposed in this manuscript. While I see room for further exploration of these RL based concepts in the medical image analysis literature, it is necessary to mention these approaches in the related work section and to discuss commonalities and differences. - In the introduction, the authors argue about tracking vs. segmentation to justify their tracking approach formulated in the RL framework. I do not fully agree with their arguments. I think by solving the segmentation problem robustly, tracking starting from a seed point - thus deriving e.g. structural information on the geometry of relevant data - would be trivial. Therefore, I think the authors miss the comparison with pure segmentation strategies for solving the task that they show in their evaluation. In recent years, there has been a lot of work on the enhancement of vascular structures using deep learning based methods, both in 2D (optical retinopathy) as well as 3D. For example DRIU (MICCAI 2016), but also other works following up on that, have provided state-of-the-art benchmarks for segmentation, which are able to overcome missing structures. This body of work is totally ignored by the authors. I would not consider the Vaa3D algorithm as a fair, state-of-the-art comparison to show the benefits of the proposed method. In addition, the proposed method is not able to outperform the Vaa3D method on this dataset, so the question has to be asked: what are the practical implications of the proposed method? (As stated above, there are no other use cases demonstrated, to show more generic applicability.) - The authors argue about subpixel accuracy, however, this is misleading. In my opinion, using the continuous outputs of the predicted actor displacement locations, which are modelled by continuous distributions, they are solely able to operate in a subpixel environment. However, from their experimental evaluation, where coverage is defined in a three pixel radius and mean errors are slightly below 2 pixels for their method, and most importantly, the segmentation ground truth is defined on the pixel grid, I would not consider the outcome of their algorithm as having subpixel accuracy. - Another criticism of the evaluation is the fact that the authors state that they can finetune their method on ""a very small amount of labelled data"" to improve their performance, compared with solely training from synthetic data. However, in their experiment they fine-tune on three quarters (15 out of 20) of the available labelled datasets. Therefore, I would consider this conclusion as incorrect. Minor issues: - I think repeating the six numerical values in Table 1 is redundant, since those are already stated in the text. - From the results of the methods and their discussion in the paper, it is not clear what a relevant error would be for the downstream tasks regarding the two-photon microscopy image dataset. Is Vaa3D already there, or is a higher accuracy still needed? - In Fig. 1 and its explanatory text, it is not clear what the start and manually labelled end points of the trackers are, and what the different colors in the middle subfigure mean.
- In 3.1, I do not fully understand the details of the - very important - reward function; giving the negative of the base reward if an action implies a change of 90 degrees or more seems to be a heuristic, please explain that in more detail.""",3,1 midl19_37_2,"""- This paper solves a tracing problem of thin structures by foregoing segmentation. - It gives more insight into the applications of Deep Reinforcement Learning (DRL) in bio-medical imaging and its related applications. - Results show comparable performance with existing standard software (Vaa3D). - The method uses a stochastic policy to measure the tracker's uncertainty given its entropy, compared to traditional trackers that do not include this measure. - The authors evaluated the tracker on synthetic and microscopy datasets (Bass et al, 2017); synthetic data is generated by simulation of single axons fitted by polynomial splines to random walks in 2D space with Gaussian noise. How can one evaluate the quality of the synthetic data? We recommend the authors include a brief discussion on the quantitative evaluation of synthetic images against real images. How does one account for bias? - DRL trackers are trained on synthetic (32,000) and validated on 1,000 samples. The hyper-parameter tuning set is quite small. What is the impact of using varying sizes for training and validation on the 20 2D held-out test set? Unless there is some reasoning behind using a very small set for validation, the performance for (50-50) or (70-30) splits would be of value. This may answer the question of what is considered a reasonable amount of data to use in such settings. - Lastly, in the second stage of experiments, the authors mention the use of microscopy data (Bass et al) for testing. With reference to the work of Bass et al, there are 20 test and 80 training samples, i.e. 100 tiff files. Is there a reason the authors did not fine-tune the tracker on the train set (80) but rather used a k-fold method on the same test set with (15 4 splits)? Kindly clarify this issue; it would be valuable to assess the performance of the tracker after fine-tuning on the train set.
They introduce a new metric based on the entropy of their stochastic training process, which automatically indicates the uncertainty of the tracking. 1. The main issue of the paper is the re-use of the testing dataset in phase 2 to fine-tune the parameters. From the description, it seems like the 4-fold cross validation in phase two uses the same 20 images that were used to select the best hyper-parameters in phase 1. Given that n=20 is very low, the improvement seen in phase 2 might have been due to over-fitting, since the model has already been hand-picked for this data. An alternative could be to follow through the 4-fold cross validation in both steps 1 and 2 while holding the subsets constant. 2. It is unclear what the authors mean when they state that this model approaches the performance of Vaa3D. The normal range of error measures such as coverage is not immediately evident. Perhaps a comparison with a baseline method would be useful to show that this model is indeed close to state-of-the-art performance. 3. The claims made in this paper are very broad. The authors claim that this method could be extended to any tracking task. Further experiments need to be performed to ascertain this claim. """,3,1 midl19_38_1,"""- Great database. - The group attention module is an interesting idea - General poor presentation. The paper is very hard to follow. - Lack of comparison to algorithms specifically designed for lung cancer detection, such as Setio16 or Wang 18. - No indication of model complexity for the proposed method compared with the alternatives. - The claim that all these algorithms make no use of the spatial relations between slices is false: Setio16 does, since it does planar reformatting; Wang18 does, since they use 3D volumes. - The reviewer wonders how other better performing detection networks compare to the proposed method, such as YOLO9000, which outperforms candidate-selection-and-classification networks by being a unified framework. - The proposed method showed superior sensitivity and fewer false positives compared to previous frameworks. That statement does not hold based on Table 3. The sensitivity is only superior for ggn nodules. There is no overall sensitivity column in that table. This reviewer believes that the method is not superior to the state-of-the-art; it is less sensitive and has fewer false positives, which means that it is operating at another point of the ROC. """,2,1 midl19_38_2,"""In this work, the authors propose to add an attention mechanism to single-shot detector networks and to apply it to pulmonary nodule detection. They emphasize their world's largest dataset of CT scans with annotations of varying types and sizes of pulmonary nodules. Methodologically, the main contribution of this work is to design a group attention network that can be injected into existing network architectures. While the reported performance presents the superiority of the proposed method, detailed implementation information is missing, such as the complete network architecture, number of groups in group convolution, loss functions, etc. To better justify the effectiveness of the proposed method, it is highly recommended to experiment over the LUNA16 dataset and compare with the scores listed in the leaderboard. Table 2 and the statement of ""~ using the GA module could help the model learn more important feature layers."": Regarding these, it would be interesting to see the weights estimated by the GA modules and how those affected the performance.""",3,1 midl19_38_3,"""1. Very good database 2.
The proposed group-attention mechanism is not only novel but also makes sense. The authors successfully build an attention-based detection model for pulmonary nodule detection. 3. The proposed model works well on the proposed large dataset. 4. The authors show that the proposed group-attention mechanism works! More comparison experiments may be needed. There are also some minor writing issues.""",3,1 midl19_39_1,"""The paper identifies important problems in AI based (medical image) analysis, and presents a novel idea to deal with the notion of rotation invariance. The idea is inspired by the success of ORB (Oriented FAST and Rotated BRIEF), and provides an interesting approach to equip CNNs with a method to deal with rotation invariance/equivariance. The experimental results are in favor of the proposed CNNs over classical CNNs. This section contains all minor and major comments and suggestions. Overall, I do not recommend to accept this submission based on the following: citations are sometimes incorrect or missing, and I find some statements in the manuscript to be disrespectful; the authors make claims which are not supported by the paper, nor by citations; the core method itself is not clearly described (I'm left with a lot of unanswered questions). [Introduction] The introduction starts well; it is quite ambitious and identifies some core challenges in AI based image analysis: there is still a lack of knowledge on the emergence of concrete interactions within the network, ..it might be advisable to optimize the networks choice of transformation itself... However, I do not see why some of the challenges are mentioned, or how these are solved. In fact, it left me a bit confused and it gave me the impression that it was somewhat contradictory. A lot of it regards invariance to unknown transformations and it is suggested that you should not assume too many invariances a priori. It is suggested that in this paper these problems are addressed on a generic level; however, I found the paper to be highly specific to rotation invariance only (an a priori choice), which contradicts the grand view posed in the paragraphs before. The paragraph on encoding rotation invariance into NNs is a bit weak. It misses some key publications. For a very recent overview see e.g. [1]. Related work on spatial transformer networks (see e.g. [2]) and group theoretical approaches are missing. For theory see e.g. [1,3,4]. For successful applications (and theory) in medical imaging see e.g. [5,6], in addition to the ones you already have. Furthermore, when citing work on steerable filters in CNNs I think [7] deserves an explicit mention as well. I also don't think it is correct to say that Weiler et al. 2017 is based on the work of Jacob and Unser (they are not even cited in Weiler et al. 2017). Jacob and Unser have made a great impact in the computer vision field with steerable filters, and I think they do deserve a citation, but when doing so, I think it is fair to also acknowledge the ones that actually came up with the notion of steerable filters in computer vision in 1991: Freeman and Adelson [8]. I also found the statement in the last paragraph on page 2 dubious: While a) *unintentionally* reduces the effective network capacity. What is meant here? As far as I understand, in the above-mentioned methods the transformation invariances/equivariances are explicitly made part of the network architecture such that the networks do not need to learn the geometric relations.
E.g., the network does not need to learn rotated copies and network capacity becomes available for learning task-specific representations, thereby increasing performance. I think this effect is neither unintentional nor does it reduce network capacity; it in fact increases it. [2. Materials and methods] (punctuation is missing, in particular (3) and (4) should end with a .) Typo on page 3: the belief that data amount might, data amount -> the amount of data? The last sentence of the intro of 2: With our approach a matter of the network optimization process? I don't understand this statement. From my point of view it is more a matter of network design than a network optimization process. The sentence before (3) is logically not correct. The moments do not allow for rotation (you can rotate patches without any need for moments). A representative orientation can be derived using the moments and this orientation can be used to rotate the patches. Eq. (3) describes a way of estimating a global patch orientation. I expect however that this angle is highly sensitive to global (low-freq) intensity variations. The given angle is essentially the angle that the center of mass makes w.r.t. the origin (see the sketch below). If this center and the origin coincide this angle does not even exist, and if they are close I expect a high uncertainty on the angle. generalization to the n-dimensional case is straightforward, such statements should not be made lightly unless you have a good citation to support this. Already in the case n=3 you'd have to make decisions on how to couple orientations (in S^2, which has 2 parameters) with rotations (in SO(3), which has three parameters). Further approaches could include a learned transformation. Also here I think such extensions are not trivial at all and do not follow directly from the work presented here. I have several problems with this: 1. You do not provide any details to support this statement (except for a reference to section 4 which basically repeats this statement) 2. You could of course try to learn this transformation matrix (which is essentially happening in spatial transformer networks (STNs) [2]), but I understood that the whole point of using matrix (4) was that it is parameterized by a rotation which you know (or at least are able to estimate); you cannot estimate more general parameters in the same way you estimate orientation. Eq. (5) gives a description of the rotationally invariant layer, which includes a convolution of weight matrix W with an image patch. To me this equation is confusing since the difference in notation between weights (capitalized) and patches (lowercase) suggests that they are different data types. You could clarify (5) by also mentioning the size of W (which is k x k?). Am I right that (5) specifies a fully connected layer? (the patches have the same size as W) Let's assume p is larger than the convolution kernels. Then I have a problem with interpreting the framework: The rotation invariance is global (first the entire patch is rotated and then a standard conv is applied). In the next layer, the full input is rotated and then again a conv is applied. How does this work at the full image level, e.g. in the U-nets? If now again the full image is rotated then you completely destroy the locality property and structure of the convolutions. E.g. a rotation of 180 degrees moves a pixel all the way to the other side of the image, and if the feature maps are rotated independently of each other you completely miss spatial correspondences.
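For concreteness, here is a short sketch of the ORB-style intensity-centroid orientation that I believe Eq. (3) describes (my reading, not the authors' code):

```python
# ORB-style intensity-centroid orientation of a 2D patch from first-order moments.
# Illustrative only; this is my interpretation of Eq. (3), not the authors' code.
import numpy as np

def patch_orientation(patch):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0              # coordinates relative to the patch centre
    ys -= (h - 1) / 2.0
    m10 = np.sum(xs * patch)         # first-order image moments
    m01 = np.sum(ys * patch)
    return np.arctan2(m01, m10)      # ill-defined when both moments are near zero

# The patch would then be rotated by the negative of this angle before the
# convolution, which is where the sensitivity to low-frequency intensity
# variations and to near-zero moments enters.
```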
I am probably missing something here, but this is as much as I can make of the presented methodology. A clearer, less ambiguous explanation of the methodology would have been helpful. This brings me to my next question. You choose to go for a hybrid approach. What happens if you go fully rotation-invariant (non-hybrid)? What happens if you do not additionally provide the values sin cos as extra feature maps? Why are the orientations position dependent? Eq. (3) describes a global orientation estimate (independent of x and y). Perhaps the orientations are indeed determined at each pixel location, but then how would you define the convolution of (5); maybe rotate each kernel locally? This would make the conv layer highly non-linear and computationally very expensive. Either way, essential details are missing. Section 2.3.1 on page 6, for three iterations with varying: perhaps you can rewrite this sentence since iteration typically refers to one optimization step. Perhaps you can say: we repeated each experiment 3 times with different initializations (at least if this is what is meant here, otherwise I don't understand the sentence). [4. Discussion] Though there generally is some understanding on how rotational invariance can be realized. This sentence undermines the work by many others in this direction. The last couple of years have seen quite a few contributions to rotationally invariant and equivariant networks (see cites below). The general theory is well understood from a group theoretical point of view [1,2,3] and it has seen great success in medical imaging (see the cites in your manuscript and e.g. [4,5]) in terms of performance, network capacity/complexity and use of limited training samples. To say that there is some understanding is in my opinion a severe understatement. [references] [1] Cohen, Taco, Mario Geiger, and Maurice Weiler. ""A General Theory of Equivariant CNNs on Homogeneous Spaces."" arXiv preprint arXiv:1811.02017 (2018). [2] Jaderberg, Max, Karen Simonyan, and Andrew Zisserman. ""Spatial transformer networks."" Advances in neural information processing systems. 2015. [3] Cohen, Taco, and Max Welling. ""Group equivariant convolutional networks."" International conference on machine learning. 2016. [4] Kondor, Risi, and Shubhendu Trivedi. ""On the generalization of equivariance and convolution in neural networks to the action of compact groups."" arXiv preprint arXiv:1802.03690 (2018). [5] Bekkers and Lafarge et al. ""Roto-Translation Covariant Convolutional Networks for Medical Image Analysis."" In: MICCAI 2018. [6] Veeling, Bastiaan S., et al. ""Rotation Equivariant CNNs for Digital Pathology."" In: MICCAI 2018. [7] D. Worrall, S. Garbin, D. Turmukhambetov, and G. Brostow, ""Harmonic networks: Deep translation and rotation equivariance,"" Preprint arXiv:1612.04642, 2016. [8] W. Freeman and E. Adelson, ""The design and use of steerable filters,"" IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 9, pp. 891-906, 1991. """,2,0 midl19_39_2,"""- Addresses an important topic: could we train the networks by learning the data augmentation policy itself? Can we generate rotationally invariant networks? - Attempt to generate rotationally invariant networks through an orientation-normalization patch-based layer. - Outperforms detection on CIFAR-100 and STL-10 - Better AUC when predicting tumor growth - Similar results when doing segmentation - The presented method is a combination of standard convolutions and rotationally invariant layers.
It would be interesting to have a unified framework. - As the authors acknowledge, their method applies a per-patch rotation correction, while there is a lot of rotation covariance in natural images, hence the need for a hybrid network. - Unclear if the authors used data augmentation when training the standard networks. One of the key motivations in the work is that using the proposed method one could avoid costly data augmentation techniques. However, such a statement is not demonstrated empirically. - Discussion incoherent with results: in the presented results the method does not always outperform classical convolutions. o For CIFAR-10, results are the same. o For tumor growth prediction, while having a higher AUC, the accuracy is much lower (how was the threshold selected?). It would be of interest to see the ROCs, since sometimes AUCs can be misleading due to early mistakes. o DICE results on liver lesion segmentation are very similar and likely will not pass any statistical test. - Presentation: the second paragraph of the introduction seems out of place. Houndred -> hundred """,3,0 midl19_39_3,"""0) Summary The manuscript proposes a methodology to locally normalize for rotation using image patch moments. The algorithm is applied to five 2d datasets: three public datasets from natural image classification and two (vaguely described) in-house datasets: one derived from volumetric CT data (mCRC) for classification and the other for liver lesion segmentation. 1) Quality The paper uses moment-based local patch normalization, which is a conceptually simple idea. 2) Clarity The technical content is easily accessible. 3) Originality The idea of locally normalizing the 2d patches using image moments seems not to have been explored before. 4) Significance The rotation invariance property is important in DNNs. Hence the proposal has a certain value. 5) Reproducibility The method is evaluated on three public datasets using a standard baseline. 1) Quality A proper comparison in terms of runtime is missing. The performance is only evaluated up to a certain size of the dataset, which makes the overall judgement difficult. How do the plots in Figure 2 look for growing #Samples? Also, it is unclear whether the improved performance in some cases is really due to the rotation invariance. Control experiments with randomly rotated images are missing. 2) Clarity There are some typos. - Abstract: ""continiously"", ""explorative tasks, realistic"" - Intro: ""perceived by human"" Certain parts of the paper can be shortened to meet the soft 8 page limit, e.g. the discussion of invariance/equivariance and the sometimes vague description, e.g. the paragraph before section 2.1. 3) Originality The line of research around spatial transformer networks [1] is not discussed. 4) Significance The possible impact of the paper is limited by the fact that the evaluation is done on 2d datasets only and -- more principled -- the approach only provides local rotation invariance, which is only a small step towards proper rotation invariance. The results on medical datasets are somewhat inconclusive. 5) Reproducibility Two datasets are not public and the implementation is kept closed, which renders the results pretty hard to reproduce. [1] Jaderberg et al., Spatial Transformer Networks, NIPS 2015, pseudo-url """,2,0 midl19_40_1,"""- The authors address the highly complex but highly relevant problem of detection and classification of perivascular spaces and lacunes.
Although the problem is very difficult, due to rater uncertainty and high class imbalance, the authors achieve reasonable sensitivity. - The authors present interesting improvements, both in network architecture and in training approach, that could be beneficial to other applications. Especially the use of the multiple raters is original. In addition, the sampling based on distance maps is interesting. - For interpretation of the results, some baseline models are missing. I appreciate that the authors made an effort to train such models. Unfortunately, these models gave no results. - The problem is complex, but the authors made it even more complex than needed by combining two tasks (EPVS and lacunes) and by incorporating multirater information. It would be useful to see the performance of the models (and reference models) on the separate problems of EPVS and lacunes. - The authors did not assess the added value of the multirater encoding. It would be insightful to include, for example, a baseline method trained on a single rater. - The authors did not assess the added value of their dedicated sampling strategy. - The test set is very small, consisting of only 2 subjects. The validation is therefore quite limited. - Figure 7 is hard to interpret. What is the meaning of all the blue (=uncertain) Nothing boxes? Why display the blue boxes if they should be disregarded? Although validation can be improved, this work is highly relevant and gives many interesting leads for discussion at the MIDL conference. """,4,1 midl19_40_2,"""This paper presented a redesigned RCNN model to detect and classify extremely small objects in MRI. This paper also presented experimental results to show the good sensitivity of this methodology. General comments: * Grammar/sentence revision and a proper introduction in the background can improve the readability of this paper significantly * I would also suggest positioning the figures at the beginning of the explanation rather than at the end; the picture can better guide the reader to understand the specific content. This applies to, e.g., Figure 1 and Figure 2. * I think the paper is too specific and maybe hard to generalize to other datasets or problems * The data is specific: I don't see any discussion of the image resolution or the image noise level * The parameters are specific: to me, there are quite a few ad hoc hyperparameters * The dataset is small. The data description and the data cleaning process are not clear to me. Specific comments: * Motivation: * I recommend the authors add more information on the significance of doing ESO detection * Specific examples/articles to support 'these markers reflect tissue damage and need to be accounted for to investigate the complete phenotype of complex pathological pathways' * Please elaborate on the purpose of Fig 1 along with the meaning of the red dots. * Please explain why HighResNet was selected as the backbone network. * In the DL equation, * what is the meaning of r_n? Is that a single scalar distance or a vector of 3? * what is the unit for the distance r_n? Does that cutoff change as the resolution changes? * Is the scale factor the solution to deal with resolution changes? * 'All input data was bias field corrected, skull stripped, and then z-scored to the white matter region statistics'. * Was the white matter region known at that point, or was some estimation applied? * On page 5, I don't understand 'the skeleton maxima of the smoothed regressed distance map (p score map >0.25)' * I am confused by sec 3.1.
I am assuming 'Out of the initial 4147 considered elements, 2442 were used as gold standards for training' means that, across all 16 subjects, 2442 elements were used for training and testing; among these 2442, the elements belonging to 14 subjects were used for training, and the elements belonging to 2 subjects were used for hold-out testing. Please confirm or clarify. * It is not clear to me what the input of this algorithm is. * I don't understand Figure 7; where is the ground truth? The first-row boxes in 'Lacune' and 'Undecided' look exactly the same to me """,2,1 midl19_40_3,"""This paper addresses a class of difficult problems in MRI neuroimage analysis, namely the detection and localization of small anomalies (such as perivascular spaces and lacunes) in MRI. These features are important in studies/analysis of, e.g., cognitive decline with aging (e.g. ""vascular dementia""). The paper is moderately well written (with a few grammar problems) and clear. The evaluation is only moderately strong. This is an application of a well-known neural-net architecture (RCNN). The authors extended the NN to 3D (but little is discussed on that topic) and there were some challenges in massaging the training data (e.g. dealing with asymmetries in numbers of examples), multirater ground truth, and a few other details. Overall this is a straightforward application with a moderate evaluation, suffering from a small sample size and a lack of evaluation of different design choices. """,3,1 midl19_41_1,"""The paper presents a novel methodology to reconstruct ultrasound images from raw echo data. In particular, a neural network model is trained to learn optimal transmitter (Tx) and receiver (Rx) beamforming patterns for fast (high-resolution) ultrasound image acquisition. I think the idea of designing an end-to-end learning system from Tx signal generation to Rx image reconstruction is an interesting approach, and the authors formulated this in a very nice way. Additionally, the paper is very well written. In particular, the fundamental concepts of ultrasound imaging are presented in a clear way, such that a wider audience can easily follow the content of the paper. Some minor points -- 1) I and Q components should be explicitly specified: in-phase (I) and quadrature (Q) components of the echo signal 2) Page 3, please specify parameter t. 3) Page 5 - Tx BF formulation. There is no dependency on the index j on the right-hand side. Please correct the formula. 4) Page 6: What is the momentum optimiser? E.g. the Adam optimiser can have a momentum term in order to provide smoother parameter convergence (fewer oscillations in the search space). Is that what is meant? 5) Typo - Page 9 intitlization Major comments -- 1) The presented solution is based on a neural network architecture. However, the paper does not specify anywhere how this model and the results can be reproduced, or what the building blocks of this architecture are (convolution parameterisation? recurrent models?). 2) Please specify that the proposed BFtransform does not have any trainable parameters, but is introduced to allow gradient flow from Rx BF to Tx BF. 3) Figure 2 - Please specify why the Tx BF layer parameters are shared across both I and Q components. Transducer or hardware limitations? 4) Figure 2 - What are the parameters of the reconstruction network? It is described as an autoencoder, but how is it formulated? 5) The presented approach is evaluated on only a single clinical dataset. Why not use leave-one-out cross-validation?
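(For reference, a minimal sketch of the leave-one-subject-out protocol I have in mind, written in Python with scikit-learn; the patient/loop counts and variable names here are illustrative assumptions, not details from the paper:

from sklearn.model_selection import LeaveOneGroupOut
import numpy as np

# one entry per cine loop; 'groups' holds the patient id of each loop (hypothetical counts)
X = np.arange(24).reshape(24, 1)
y = np.zeros(24)
groups = np.repeat(np.arange(6), 4)  # 6 patients x 4 cine loops each

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups):
    # fit on every loop of 5 patients, evaluate on all loops of the held-out patient
    pass

Grouping by patient keeps all loops of a subject on the same side of the split, which would also rule out subject-level leakage.)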
6) I think the title (""Learning Beamforming in ultrasound imaging"") is very assertive given that the approach is evaluated on a single cardiac ultrasound scan. I would recommend that the authors reconsider updating the title of the paper to better reflect the content. """,3,1 midl19_41_2,"""The paper presents what appears to be an interesting approach for beamforming in US imaging. The work proposes a novel way to construct ultrasound images by learning both transmission (Tx) and reception (Rx) parameters of a US imaging system, where the parameters are modelled as a two-path (Tx and Rx) neural network which is trained end-to-end. The manuscript is well written and more or less well structured. Although an interesting approach, the manuscript really lacks in the way of validation. The very small data set used in this work contained only 6 patients with 4-5 cine loops per patient (the total number of cine loops is not mentioned), and each sequence contains 32 frames. To validate the methodology a single sequence from a patient was left out, which to this reviewer's understanding means there are still 3 or 4 sequences belonging to the test subject in the training data. This is of great concern for two reasons. First, the network could simply be learning the anatomy of the patient visible in the other sequences. Second, even assuming that no sequences of the same patient were left in the training data set, a single test sequence is not nearly a sufficient validation, and hence the efficacy of the methodology cannot really be established. On a different note, the manuscript is missing key implementation details, e.g. the network used to simulate and train the Rx and Tx parameters is never discussed and is left only as boxes in a diagram. Images that are (presumably) meant to highlight the results can be hard to interpret, e.g. some ovals are marked in the ""ground truth"" images; however, they are never mentioned or indicated at the corresponding locations in the result images, which again leaves the reader struggling to judge the proposed methodology.""",2,1 midl19_41_3,"""This paper presents a method for learning ultrasound transmission patterns together with the corresponding image reconstruction pipeline. The authors performed experiments on a small echocardiographic dataset. The results show an improvement in image reconstruction using the learned settings, compared to the standard procedure. The method and application are interesting and definitely relevant to MIDL. The methodology could also be useful for other imaging modalities. The results are good for a proof-of-concept. The paper is very well written, except for the 'Convergence' paragraph of Section 3.3, which should be revised. The evaluation is relatively limited. It could be improved on the following points: - The method is evaluated by testing on a single cine-loop (32 frames). At least a leave-one-out cross-validation should be performed. - The low-resolution acquisitions are simulated from the ground truth single-line acquisition images. Is this equivalent to actually performing an acquisition with the corresponding transmission profile? This should be included in the discussion, and ideally, one could perform a test by acquiring images with the learned parameters. - It is not clear why the reconstruction part of the network is pre-trained before training the transmission part, instead of training everything from scratch. This is probably why the network converges to locally optimal solutions near the initial beam profiles.
- Differences in performance are described as significant, which would need to be backed up with a more thorough evaluation (using cross-validation). - Why is the L1-error for the DAS methods missing? It should be reported for completeness. - The image quality metrics used (PSNR, contrast) are quite basic and the authors do not explain how they relate to image interpretability. The following papers seem relevant, and could be included in the literature review: [A] El-Zehiry, Noha, et al. ""Learning the manifold of quality ultrasound acquisition."" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Berlin, Heidelberg, 2013. [B] Abdi, Amir H., et al. ""Automatic quality assessment of echocardiograms using convolutional neural networks: Feasibility on the apical four-chamber view."" IEEE transactions on medical imaging 36.6 (2017): 1221-1230. Finally, is there any potential application of the method beyond ultrasound imaging (other types of sensors)? Minor comments: Page 3: - 4-MLA is not defined it is only defined on page 6 - BFTransform is not used later, Rx beamformer is used instead. Layer names should be harmonised throughout the text and with Figure 2. - The notation for the raw signal should be phi without a hat. - The standard symbol for seconds is s, not sec Page 4: - Figure 1 (right) is Figure 2 - finite-sample approximations Page 5: ground truth Page 7: - Text in Figure 3 should be in bigger font - In Table 1 the results for 10-MLA are reported twice. This could be reformatted to save space and include more results. Page 8: - Table 4 is Table 2 or 3 - Y-axis labels are missing in Figure 4 (bottom) """,3,1 midl19_42_1,"""This paper proposes a novel data augmentation approach based on the superpixel representation of the image. In particular, the authors generate superpixel parcelation of the training images and add a term to the cost function that penalizes classifier (segmentor) errors when applied to the superpixelized image. The authors evaluate their approach on several biomedical image data sets and demonstrate robust improvement in the segmentation accuracy. The paper offers an interesting idea that others in the community might find useful to improve the robustness of their models. The innovation is relatively minor.""",3,1 midl19_42_2,"""This paper proposed a new data augmentation method using superpixels (SPDA) for training deep learning models for biomedical image segmentation. The proposed method can effectively improve the performance of deep learning models for biomedical image segmentation tasks. The experimental results are detailed and solid. Some technical details are missing.""",3,1 midl19_42_3,"""This paper proposed to use super pixel/voxel to do data augmentation. Authors conducted comprehensive experiments and evaluation to show the effectiveness of the method. Technical novelty not very significant, but still it is a good study overall.""",3,1 midl19_43_1,"""This nicely written and enjoyable paper proposes a loss function focusing on boundary errors as a complement to classical regional scores of segmentation overlap and presents experiments on data This paper is well written and easy to follow. The motivation of the proposed work is well presented and the adopted method clearly detailed and well illustrated. Results are very encouraging and the potential generalisability of the use of this additional loss term is high increasing the potential significance of this piece of work. 
A few points remain questionable and would benefit from further clarification. Methods: - From equation 5 it seems that the absence of segmentation output will yield a null value for the loss. Is this a truly desirable behaviour? - In the case of multiple, potentially coalescing objects and/or when the border function is of a complex shape, the closest point in distance may not be the appropriate one to consider for the comparison. Is an object-defined constraint possible in that situation? Experiments: - Could you please confirm that in the validation set for the WMH challenge, elements from all three scanners were used? Could you give the range of lesion load in these cases? Was it chosen to reflect the existing distribution? - Although the choice of 2D may seem reasonable for data with highly anisotropic resolution as in the ISLES challenge, this choice is more questionable in the WMH challenge, where the data are 3D. Moreover, since the objects to segment are volumetric, the experiment would be much more interesting in the three-dimensional setting. - To complement the experiments, it would be interesting to observe the behaviour of the boundary loss alone, and of training with a fixed weight.""",3,1 midl19_43_2,"""Summary: This paper considers an alternative to region-overlap-based loss functions. Their boundary loss considers the integral of the area between the ground truth and predicted regions. It seems this loss function is less affected by class imbalance in the image, as it produces accurate segmentations for small and rare regions in their example figure. Pros: - Novel idea for dealing with imbalanced classes. - Good reasoning for the design of the loss function. - Visualizations look nice. - Code is available online. Questions: - You argue that most people only consider regional losses. But what about the Hausdorff distance? That is based on the distance between boundaries of regions. - Sec. 1) ""..these regional integrals are summations over the segmentation regions of differentiable functions, each invoking.."". What are the differentiable functions here? Cross-entropy loss? - Sec. 1) ""... [graph-based] optimization for computing gradient flows of curve evolution."" Can you explain a bit more what this is about? - Sec. 2) What do you mean by R^{2,3}? That the image can be either 2D or 3D? - Sec. 2) You defined 'I' as a training image and then didn't use it. Wouldn't it suffice to just say that Omega is your space of images? - Eq. 1) I assume the subscript B in w_B is from 'background region', i.e. B = G? And not 'boundary' as the subscript B in (5)? - Sec. 2) I don't understand why you would use the notation ' G'. I would read that as 'change in the foreground region'. - Sec. 2) Is q_{ S}(.) unique? I can imagine that if G is not a circle (as in your example fig. 2), then multiple p would map to the same point on S. - Sec. 2) Is the signed distance between p and z_{ G}(p) Euclidean? - Sec. 2) Is the sign in the signed distance necessary to flip the sign of the area of S in the interior of G (the part ""below the x-axis of the integral"" as it were)? - Sec. 2) What is actually the form of the level set function G ? Pixel distance? - Sec. 2) If the 'boundary' is the sum of linear functions of s_{ then is its gradient constant? - Sec.3) Are you sure you are allowed to use ISLES and WMH for this paper? For WMH at least, there is a rule in the terms of participation that you are not allowed to use the data for scientific studies other than that of the challenge.
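(As an aside on the distance-map questions above: my own rough reading of how the signed distance weighting could be computed, sketched in Python -- the function names and sign convention are my assumptions, not the authors' code:

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(gt_mask):
    # one common convention: negative inside the ground-truth region G, positive outside
    dist_inside = distance_transform_edt(gt_mask)
    dist_outside = distance_transform_edt(1 - gt_mask)
    return dist_outside - dist_inside

def boundary_term(fg_probabilities, gt_mask):
    # element-wise product of the precomputed signed distance map with the predicted
    # foreground probabilities, averaged over the image domain
    return float(np.mean(signed_distance_map(gt_mask) * fg_probabilities))

Under this reading, the gradient with respect to each softmax output is simply the fixed signed distance value at that pixel, which relates to my question about whether the gradient is constant.)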
- Sec.3.2) Why do you need to start with the regional loss term and then slowly build up to the boundary term? - Sec.3.3) I am now quite interested in the performance of { L}_{B} in isolation. Why did you not report that? - Sec.3.3) You argue that the boundary loss helps to stabilize the learning process. But isn't the change in noise that you observe in Fig. 3 coming from a difference in scaling of the loss terms? That is, if the scale of the boundary loss is smaller than that of the regional loss, and you're gradually shifting towards the boundary loss, then I would expect smoother curves over time. Sec. 4) You say that the framework ""..can be trivially extended.."" to 3D. What would that entail? An element-wise product between the 3D pixelwise distance tensor and the prediction tensor from the network? Other comments: - Sec. 1) double use of the word 'common'. - Sec. 2) 's' in ""Let .. denotes.."" - Eq. 1) int_{p should be int_{ Cons: - The authors did not compare to other loss functions designed to handle imbalanced classes. These were mentioned in the related work section as relevant.""",3,1 midl19_43_3,"""This is a very interesting and engaging paper that is a worthy contribution to MIDL. The introduction of a boundary loss is highly relevant, and it nicely ties up an intuitive sense (that errors should be weighted by a distance map) with theory. I appreciate the mathematical rigour, the clear writing, the nice motivations, and of course, tying DL together with some important theoretical insights that increasingly seem to be lost in the DL era. --Motivation A thought that the authors might find useful as an intuitive motivation: volume grows as N^3 whereas surface grows as N^2. Thus, the boundary loss helps mitigate the effects of unbalanced segmentations by reducing the order of magnitude of the effect of changes in pixel values for small segmentations. Minor: --The abstract could be tightened up; getting to the point faster would make it more engaging. Clarity: --Readers would probably appreciate how you got to (4). Evaluation: --It would have been nice to see experiments with only the boundary loss. Why was this not done? Were there stability or convergence issues, or did it just not work as well? It's a curious omission, and I think readers would like to know if the loss can operate on its own or if it only works as an auxiliary loss. At the very least, I think the authors need to address this within the text with an explanation. --Obviously an ablation study would be welcome, but I think for MIDL, the evaluation is sufficient. One thing I'm curious about: depending on how the distance map is calculated, e.g., with pixel distance, the boundary loss can add significant weights to each softmax, effectively increasing the learning rate. A hyper-parameter sweep on a validation set for both experiment settings would assuage any worries that the extra performance was due in part to the increased effective learning rate. Or perhaps there is a more principled way to do this. -Also, it would have been nice to have seen experiments with other losses, e.g., CE. """,3,1 midl19_44_1,"""- The authors investigate how the predictions of several standard convolutional neural network architectures trained for dermoscopic image classification change when artificial elements are added to the skin area before taking the image. This is certainly an interesting topic on which there is not much prior work in the medical domain. - Several network architectures and several types of attacks are compared.
- The authors only investigated whether the confidence of the network is affected or whether the predicted lesion category is changed. However, it seems more logical that actual attacks would aim at changing the output in a specific way, for instance to a specific output category. These kinds of attacks are not attempted, and there are also no details on how the network output changes (do the networks all favor a certain category, i.e., if the category is changed due to an attack, does the output always change to that category?). - Initially, the question is posed: Can physical world attacks from the clinical setting severely affect the performance of popular DL architectures? - I believe it would make the paper stronger if this were toned down a bit. The answer is obviously yes, since basically out-of-distribution examples are presented to the networks, so that a lower/different performance is the expected result. I think it would improve the paper if the authors would instead just write that they are interested in evaluating how such examples affect the performance. Minor comments: - In section 3.1, it is not clear what 'The fine-tuned architecture consists of' refers to. Is this the architecture of MobileNet, or are these some additional layers attached to each network? - Class weights are mentioned, but please explicitly state how different classes were weighted in the loss function. - It is confusing that the datasets are described relatively late in the manuscript. I would suggest moving section 3.3 before section 3.1. - In the caption of Table 1, it could be explicitly mentioned why no experiments with red lines were conducted. - In the PADv1 dataset, how was the ground truth verified? - In the results section, I found it confusing that the results of the attack experiments are presented first and the baseline results of the clean images thereafter. - The caption of Table 2 could mention (preferably in words, not as a formula) what the robustness score expresses. - When referring to Tables and Figures, the words Table and Figure should be capitalized everywhere. - It is not really clear why calculating a weighted accuracy is not possible for the PADv1 dataset. - In the discussion, the authors write 'We show small artifacts captured from the real world can significantly reduce the accuracy of DL diagnosis where dermatologists would not be impacted' - this should be toned down as well (e.g., would LIKELY not be impacted), since it was not actually shown in this work that dermatologists are not impacted in their diagnosis.""",4,0 midl19_44_2,"""This is an interesting paper presenting the discovery about physical attacks in dermoscopy. Robustness is very important in deep learning based methods. This paper studied the robustness and susceptibility of various deep learning architectures under physical attack. The experimental dataset is relatively small, which may make the results vulnerable; although the discovery is interesting, I would suggest that the authors also propose some methods for increasing the robustness of deep learning methods, which would be more insightful. In Table 1, the authors list several physical attack types for dermoscopy applications. Is it comprehensive, and how much is it related to the clinical setting? """,2,0 midl19_44_3,"""The work looks into the interesting problem of evaluating robustness of DL models in clinical settings, with a focus on scenarios where adversarial examples are used.
The research is novel as it explores the use of robustness to physical world attacks as an approach to model evaluation, which has not previously been investigated in the medical imaging literature. Overall, the paper is written clearly and the methodology is well designed. Long term, high impact application of the work is feasible. The work also makes publicly available a new dataset (PADv1), which would make it easy to reproduce elements of this work by others. -Title: -------- If the reader is not familiar with the ML adversarial attacks literature, terms such as 'physical attacks in dermoscopy' may be confusing at the first instance. Perhaps the title can be rephrased to help convey the message of the paper. Introduction: ------------------ - While medical systems empowered by Deep Learning (DL) are getting approved for clinical procedures ... Please support by referring to examples of such systems that have received approvals. - physical world attacks are constrained to changing the appearance of the region under consideration in the real world To ensure a robust argument is made for motivating the paper, please comment on how realistic such attacks are in clinical settings. If they are not performed by the clinicians themselves, the attacker would need to go through a great deal of, perhaps unrealistic, effort to draw on a patients skin, taking dermatology as an example. - Wherever there is money to be made, some people will exploit the opportunity and abuse ambiguities, which is shown by cyber threats ... Please rephrase. It doesnt read well. Methods: ------------- - It is unclear what the deep models were trained to classify. Were they initially trained to classify each image into one of the seven classes that were pathologically verified? If so, does the classification problem remain the same when using applying the models on the images from the new dataset? Please clarify. - All lesions are non suspicious for melanoma ... What is the significance of this? And also note that a large number of readers would not necessarily be familiar with terms common in dermatology. Results & Discussion: ------------------------------ - It appears that susceptibility is measured on a negative scale, i.e. the lower the number the more susceptible the system is. Please confirm and clarify in the text (not only figure caption) if this is true. -Accuracy on its own is generally not sufficient as an evaluation metric. It would be interesting to see how susceptibility and robustness metrics derived from, say, sensitivity and specificity of the models, compare to the currently reported observations. - Please elaborate on the limitations of this work.""",4,0 midl19_45_1,"""This paper investigates the problem of artefact in MRI images. Instead of trying to reconstruct a un-corrupted image from the corrupted one (or its k-space), which is the common approach - and may destroy important image information, here, the authors' stance is that for a specific task, to make its corresponding CNN model robust to the presence of such artefacts. The author investigate how introducing images with artefacts in training models can improve not image reconstruction, but a downstream task : segmentation quality. The hypothesis is that a good segmentation can be obtained even on artefacted images. Pros : Tackles a difficult problem, i.e. images with possibly big motion artifacts, which are typically excluded from medical imaging datasets. 
The authors propose a new, fully 3D motion model of MRI acquisitions, more realistic than standard methods consisting of simple k-space sampling and mixing in 2D. The validation is extensive: the proposed method is tested on a variety of segmentation tasks (TIV, hippocampus, CGM), on both real and synthetic data, and on a test set containing both clean and artificially artefacted data, and with a panel of metrics (Dice score, positive predictive value, sensitivity and average distance metrics). The model provides a better uncertainty estimation for segmentation predictions of motion-corrupted data. The paper is clear and easy to follow. It seems that for each new segmentation task, new adapted artefacted volumes have to be modeled. But in the paper, there is no discussion of how long it takes to generate these artefacted volumes. For example, how long did it take to generate the 15 artefacted volumes per scan? The method isn't consistently better than other more standard augmentations (rotations...) as measured by the 4 segmentation metrics (although the authors do provide a possible explanation, which is that for the hippocampus - for which the augmentation model doesn't outperform classical augmentation - the motion artefact model may not be well adapted).""",3,1 midl19_45_2,"""The authors propose a simple, clever idea to improve segmentation performance for MRI images: simulating movements during MRI acquisition. They demonstrate significant improvements on simulated and real-world images with movement artefacts. Furthermore, they show that this kind of augmentation improves drop-out based uncertainty estimation. The paper is clearly written, and the experimental setup is convincing. - the presentation of the results could be improved. E.g., I found the axis labels ""benchmark"" and ""clean"" and ""Against"" quite confusing - A reference for the ""benchmark"" method is missing. - Several references are incomplete (e.g., Arxiv identifier missing, or no source at all for Pawar et al.) - Figure 6 is very hard to interpret. """,3,1 midl19_45_3,"""The authors propose an image-resampling-based approach for data augmentation (motion artifacts) of MRI images, based on which the trained deep segmentation network shows better performance on artifacted data. The paper is well written. The experiments are well done. 1. Section 3.1. How and why is the model overfitted in training? Usually overfitting means low generalization ability. 2. Besides classic augmentation, the authors may want to compare the motion-based augmentation with some other basic techniques, like a median filter to simulate the blurring artifact. 3. There may exist some previous approaches for generating MRI motion artifacts; the authors need to discuss the novelty/difference of the proposed method against them. In addition, it looks like there is a similar approach of motion-based augmentation for deep learning based MRI segmentation: - Andersson, Erik, and Robin Berglund. ""Evaluation of Data Augmentation of MR Images for Deep Learning."" (2018). It would be better to compare with it or discuss it somewhere in the paper. """,3,1 midl19_46_1,"""The paper introduces a novel unsupervised method for lesion detection based on a normative prior. The paper is well-written, and the method is validated on a publicly available database showing an improvement over the state-of-the-art. While the proposed approach is quite interesting, the experiments do not really validate the novel contributions. For instance, I was expecting to see the following experiments: 1.
A comparison with spatial VAE (Baur et al. 2018) which is similar to the proposed method, however, with a single multivariate Gaussian mixture --> To validate the need of modeling the latent code as a mixture of Gaussians. 2. A comparison of GMVAE (w/o Image restoration) vs. GMVAE(TV) --> To validate the need for Image Restoration. For instance, n = 0 vs. n = 500 steps as reported in the paper. Further, I was expecting a section on the sensitivity analysis showing the following: 1. the influence of the number of mixtures 2. the influence of the number of steps in the image restoration (accuracy vs. time complexity) Apart from that, here are some questions/comments: 1. The network p(c|z,w) wasn't reported in Appendix A, so I was wondering whether it was implemented or not. Any observations, regarding the last term in Eq.2, similar to what reported in Dilokthanakul et al. 2016? 2. if Eq.2 is converged, then can't we detect outliers from p(c|z,w)? For instance, outlier pixels (regions) would have lower probabilities in all mixtures and should be easily detected. 3. Can't we use the MR distribution, i.e., WM, GM, CSF, and background as p(c)? 4. M in Eq.7 is not defined. """,3,1 midl19_46_2,"""The paper addresses the problem of brain tumour segmentation from an unsupervised viewpoint which is a very useful approach in medical imaging where data annotation is expansive. The method segments tumours as outliers from a learned representation of healthy images. The tumour detection is done by solving a MAP problem, where the prior distribution of data was approximated using Gaussian Mixture Variational Autoencoder (GMVAE). The data consistency term is optimized by using Total Variation norm. The paper is well written and clear. A nice summary of VAE and GMVAE is presented followed by the description of the contribution. Results are promising. The method is compared with few other deep learning based unsupervised methods and achieves good performance. The majority of the method was proposed by an earlier paper from the same group [Tezcan et al. 2017]. This paper applied that method to the new context of brain tumor lesion segmentation with small modifications due to different task. This group also had a similar paper in MIDL 2018 where they applied slightly different methods (VAE, AAE, as opposed to the GMVAE in this paper) on the same problem: pseudo-url This makes the contribution rather incremental. While experiments are good for comparing the method with other similar unsupervised methods, it is not shown how the method compares with state of the art on this competition data. This would help determining if this is practically a very useful approach. The experiments also lack some details which made it hard to understand: - Description of DSC-AUC wasnt clear. - Two of the baseline models VAE-256, VAE-128 werent described. - In the histogram equalization part, a subject was randomly chosen from CamCANT2 dataset as the reference. It was not shown in the paper whether its a sensitive parameter. Thus a potential issue here is that not knowing which specific subject was chosen might make it hard to reproduce the result. Typos: -Paper 2 top: patients making them attractive, -> patients, making them attractive -Page 8: in conclusion, line4, DCSs -> DSCs """,3,1 midl19_46_3,"""The authors propose a novel unsupervised anomaly detection and segmentation approach that utilizes Gaussian mixture variational auto-encoder (GMVAE) to learn the prior distribution on healthy subject images. 
The images containing anomalies are restored using the learned prior, incorporating total variation for data consistency, and the residuals are computed by subtracting the restored from the original image. By thresholding the residuals, the pixel-wise anomaly detection map is obtained. The proposed method was trained on 652 images of healthy subjects and the anomaly detection approach applied to BRATS 2017 challenge datasets containing brain tumors as the anomaly. - The unsupervised approach is well motivated and the literature review is extensive - The reconstruction methodology incorporating the normative prior seems novel and, in general, is described clearly and concisely - The method was applied to 2D image slices and does not take full 3D information into account - The results in Table 1, for instance the DSC_AUC, for brain tumor detection are rather poor compared to the equivalent Dice_WT score for most other tested methods (see pseudo-url) - Some validation metrics are not defined; for instance, DSC_AUC is not defined in section 2.4, but presumably it is obtained by maximizing the TPR-FPR value; please clarify - There are two approaches to residual map computation, but results are not reported consistently; namely, the signed-difference-based residual map calculation is proposed ad hoc at the end of results section 4.2, indicating much improved results, while unfortunately the results are not reported in the same manner as before""",3,1 midl19_47_1,"""The paper presents a method for cell detection in H&E stained histopathology images based on convolutional networks. The model predicts three maps, which are then combined and post-processed to get the predicted locations of cells. The paper is well written, and the authors show that their method outperforms state-of-the-art approaches on a public dataset of manually annotated cells in colon cancer, and claim that their method is faster than other methods that address the same task. In my opinion, the main contribution over previously presented methods is the introduction of the wt_map, because something equivalent to a combination of conf_map and loc_map was already present in other works, such as Sirinukunwattana et al. (2016). Additionally, the formulation of the problem as a multi-task approach is novel in this context, to the best of my knowledge. The proposed method shows improvements over state-of-the-art approaches, but it is only tested on a single dataset, and is limited to H&E staining. It would be interesting to show, or comment on, whether the same method would work for immunohistochemistry as well, where stain artefacts are present, and detection of cells grouped in clusters is challenging. Additionally, only examples of positive results are reported. The reported F1-score is good, but not perfect. This should be discussed; for example, show failure cases, whether there are common causes of failure, how to address them, and whether this relates to an imperfect reference standard. It would also be good to compare with one of the region-proposal-based methods mentioned in the introduction, such as Faster R-CNN or YOLO, which showed pretty good performance at lymphocyte detection (only limited to IHC) at MIDL 2018 (M. van Rijthoven et al., 2018). Regarding quantitative performance, the authors claim that the performance ""largely improved"": from 0.879 to 0.886, and from 0.882 to 0.887. Is this considered a large improvement in this setting?
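(To make the post-processing I refer to concrete: something along the lines of the following peak extraction is what I picture being applied to the combined accumulator map -- a sketch only, with an assumed radius and threshold, not the authors' implementation:

import numpy as np
from scipy.ndimage import maximum_filter

def detect_cells(accumulator, radius=6, threshold=0.5):
    # keep a pixel if it is the maximum within its (2*radius+1)^2 neighbourhood
    # and its accumulated confidence exceeds the threshold
    local_max = maximum_filter(accumulator, size=2 * radius + 1)
    peaks = (accumulator == local_max) & (accumulator > threshold)
    return np.argwhere(peaks)  # (row, col) coordinates of detected cells

How the radius and threshold were chosen, and how sensitive the F1-score is to them, would be useful to report.)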
Three maps are produced and combined to create an accumulator map, which is post-processed in order to obtain the final detections. Since the three maps are generated for the training set as well, did the authors check whether this gives an F1-score of 1.0 on the training set? I guess it does, but it could be that the contribution of Wt underweights some locations and lowers their final score. If this is the case, it would be good to assess the performance of the post-processing step on the training set as well (without using the model), which could be a good indication of the upper bound of the performance of this approach. Other comments: * The caption of Table 1 should be improved; it does not describe what is in the table (the description is in the text, though). * How is the average accuracy computed? Only single pixels manually annotated as foreground and the rest as background? And how are the weights computed, if used, and the scores averaged? This is not clear from the text. * What type of functions are the losses? * lambda_1 and lambda_2 are introduced but then set to 1; the authors could consider removing them from the formula. I acknowledge they mention investigating this effect in the future, but I wonder what the utility of these two parameters is in this paper. * A receptive field equivalent to the size of a single cell is used; would a slightly larger receptive field improve the performance, allowing some more context to be included? * The architecture relies on an encoder-decoder model, but the skip connections used in the U-Net architecture are not used here. Was this a specific design choice, and would using skip connections improve the performance of the method? """,3,1 midl19_47_2,""" - The paper is well-written, and easy to read and understand. - The authors consider the problem of nuclei detection, and propose to decompose the task into three subtasks, trying to predict a confidence map, a localization map and a weight map. - I think the effort of disentangling a complicated task into simpler ones makes sense, and the experiments have shown promising results. - In my view, the proposed methods are not completely novel; I suggest the authors cite the following works, to name a few. - Predicting the confidence map with fully convolutional networks was initially done by: ""Microscopy Cell Counting with Fully Convolutional Regression Networks"", W. Xie, J.A. Noble, A. Zisserman, In MICCAI 2015 Workshop. - The proposed localisation map is actually the result of a distance transform, and was initially used in: ""Counting in The Wild"", C. Arteta, V. Lempitsky, A. Zisserman, In ECCV 2016. """,3,1 midl19_47_3,""" 1. The method part is well-written and easy to follow. 2. The authors formulate the problem as a multi-task learning framework which regresses the centroid location and confidence map and classifies pixel-wise labels simultaneously. The approach makes sense, as highly correlated subtasks benefit from learning mutual information. 3. The vector-oriented confidence accumulation generates an accumulator map with a sparse response. It reduces the sensitivity to the radius hyperparameter in NMS, which may benefit the final prediction, especially in dense-nuclei cases where a proper radius value is hard to define. 4. The experimental results are good on a publicly available dataset, which is persuasive. 5. The paper is also clear about reporting hyper-parameters for reproducibility. The experimental analysis is not clear in some parts. 1.
In Sec 5.1, the authors use the pixel-wise evaluation metric. Though it explains the mutual benefit to some extent, it would be best to also provide the results on the final metrics (F1, Median Distance). 2. There is a mistake in the explanation (the second paragraph, Sec. 5.1). The smooth-L1 loss is a combination of the L1 and L2 losses which is robust to outliers. It is an L1 loss when the value is larger than the threshold, and an L2 loss when the value is smaller than the threshold. 3. Fig. 2 shows the probability map of the regression method. But the analysis is not very correlated with this image. Instead, it would be better to show the comparison of (Conf+Loc+Wt) and (Conf+Loc), since the explanation is still ambiguous. Q: The L_loc value is about equal to 4 according to the experiment, which seems to be a large value. How about the magnitudes of L_conf and L_wt, since lambda_1 and lambda_2 are set to 1 in the loss calculation? Will L_loc dominate the direction of optimization? """,3,1 midl19_48_1,"""The authors have proposed an SCNN-based brain tissue classification method using dMRI data. The idea of applying an SCNN on the fODF is straightforward and looks interesting. The paper is well written and easy to follow. However, I have concerns about the evaluation of the proposed method. - There is no comparison to other methods. To show that applying an SCNN is promising, a comparison, at least, to a method applying a standard CNN on the fODF directly could be helpful. There is one Dice score from an existing study reported. Did this study analyze the same data? - I do not see why it is a good idea to train on a single subject. Including more subjects for training to account for individual variability should be a much better idea. - The performance on CSF is very low (around 60% DC). A discussion of the performance on this class could be helpful. - The HCP data provides a tissue segmentation label map in diffusion space. What is the reason to perform reference labeling again? """,2,0 midl19_48_2,"""- definition of a novel CNN approach on the fODF, by applying convolutions to data that lives on SO(3). - geometric distortions in diffusion data are significantly larger than in traditional T1w and T2w data. I am not aware of any studies that would acquire diffusion data only and then employ that data for structural volume analysis. Thus, the need for tissue segmentation from dMRI is significantly lower and really only necessary for the purpose of masking/seeding in diffusion analyses. - diffusion MRI segmentations are compared to FSL-Fast segmentations on structural MRI. That is NOT a reference segmentation, as FSL-Fast can fail, be significantly imperfect in given parts of the images, etc. References should be manual or semi-manual. The evaluation data is small (16 subjects). - Since fODF segmentations are compared to structural MRI segmentations (where the latter is used as the reference), the obvious question is why not solve this on the structural MRI """,2,0 midl19_48_3,""" SUMMARY The paper Spherical CNN-Based Brain Tissue Classification Using Diffusion MRI presents a neural network that classifies reconstructed diffusion weighted MRI signals into white matter, gray matter and cerebrospinal fluid. The network utilizes spherical convolutional layers (sCNN) with rectified linear units (ReLU) as the activation function and ends with fully-connected layers that perform the classification task.
Training is based on constrained spherical deconvolution (CSD) orientation distribution functions (fODF) as input and anatomical FAST segmentations as label of a single (human connectome project) subject. Evaluation is performed in an inter- and intra-subject manner within the HCP project. PROS The proposed approach utilizes a new and - for the field of diffusion imaging - very interesting method: the spherical convolution. CONS Unfortunately, there are numerous weaknesses, hence only the most serious ones will be covered here: (1) The chosen network input: In order to find a good response function (RF) for a CSD reconstruction, a meaningful white matter mask is required for big datasets due to computational constraints. Therefore, using CSD fODFs as input to predict the white matter mask, which is required during generation of the input signal, does not make much sense. Furthermore, it should be taken into account that the fODF was generated by deconvoluting the diffusion signal with a single RF. It should therefore be easily possible for a network to learn a convolution, while the plain diffusion signal can be utilized as input. (2) The networks structure: Main purpose of the sCNN layers is to keep the spherical signal structure from layer to layer. Since the goal is to classify the input, keeping the spherical structure does not seem important for a good classification. Furthermore, applying ReLUs to the Spherical Harmonic signal completely removes this spherical structure, since all values <0 are set to 0. Applying different activation function (e.g. sigmoid or tanh) would most probably keep the spherical structure, in case it might be beneficial for classification. (3) Evaluation: The biggest drawback of the current evaluation is that no other method was evaluated for comparison. The easiest way to compute a segmentation would be to apply FSLs FAST on the b=0 diffusion weighted signal. Another possible comparison would be a four-layer neural network with 16, 32, 128 and 3 neurons per layer. This would prove the possible improvement due to the spherical structure. The statement that the network can also be applied to other datasets/subjects needs further investigation, since the HCP Project is a very homogeneous dataset. To this end, it would have been important to evaluate other scanners, different resolutions and different numbers of gradient directions. For a proper evaluation, at least the resolution and the number of gradient directions should be evaluated, as these have a direct influence on the fODF. CONCLUSION This paper utilizes an interesting network structure for an important task within the field of diffusion imaging. Unfortunately, it doesnt get far with it. As the paper states itself, only preliminary results are presented. It would therefore be recommended to further improve this work. """,1,0 midl19_49_1,"""The paper presents an application of a recent few-shot learning algorithm (Guided Network) to the problem of lymph node segmentation in histopathological images. - no methodological novelty; the presented method is an application of an existing work (Guided Network) to a lymph node segmentation dataset. - no error bars are provided. For example, the model trained with (5 shots, 10 points) annotations performs better than the model trained with (1 shot, dense) annotations, which seems strange given that dot annotations are generated from dense annotations. - the details of optimisation are incomplete e.g. optimiser, learning rate, etc. 
- two of the requirements stipulated in the introduction are not empirically validated; 1) being collaborative and easy to use; 2) requiring minimal maintenance. Overall, despite the well-communicated relevance of the topic, the paper lacks both methodological novelty and empirical validation of its utility in the considered application. """,2,0 midl19_49_2,"""This paper addresses the problem of scarcely available dense manual annotations for supervised learning in histopathology image segmentation, and proposes using sparse annotations in a framework based on few-shot learning. The problem tackled by this paper is very relevant, because manual annotations are time consuming and very expensive, especially when pathologists have to be involved, whereas sparse annotations are easier to make. The title suggests that a framework for collaborative annotations, possibly by involving multiple users, is presented, which is a novel approach in the context of the Camelyon challenge and to metastases detection in lymph-nodes in general, to the best of my knowledge. The paper lacks clarity in the order and in the details in which components are introduced and applied, and several parts of the paper are difficult to understand. Furthermore, it is not clear where the ""collaborative"" part of the whole methodology takes place. The only point where this is mentioned is in the section about ""late fusion"", but I do not understand how new annotations added during inference can make the model collaborative. What would be a good use case scenario? This should be explained in the paper. Additional comments: * In section 3.1, a training set of s samples and a test set of t samples are introduced. What is the size of s and t, and what should be their order of magnitude to make this method effective? I guess t << s, otherwise one could just rely on dense annotations from the test set and use them for training. Experiments with different ratios t:s should be performed * What is the method actually tested on? What is called the test set seems to be used during training, so there should be another set used for the actual validation of the method, but it's not introduced. * Figure 2 is not clear. I think the direction of the arrow between g and m is wrong. Furthermore, components are used here that are described later in the paper, which makes it very difficult to understand. Those components are actually introduced in the section about experiments, instead of in the method section. * Patches are labeled as lesion ""if at least one pixel in the center window of size 224x224 was annotated as lesion"". Is this the central pixel or any pixel in the patch? The way it's written it seems that it is any pixel in the patch, which I don't think is a good choice. * ""Bilinear interpolation for downsampling"" sounds a bit odd. * Table 1 and Table 2 show results with sparse and dense annotations, but it's not clear if dense refers to FCN-32. If it does, why are the numbers in the text different from the ones in the table? """,2,0 midl19_49_3,"""This paper presents a deep network that could screen a large set of WSIs of sentinel lymph nodes by segmenting out the areas with possible lesions. It is hypothesized that such a network can even help to correct and adapt its behavior from a limited set of examples, which is an important limitation in medical AI applications today.
The idea of guidance is promising (although priors in DL are not a new idea), and the combination of guidance with episodic learning could be strong once its dynamics are shown on a relatively more difficult problem where its added value is proved. -- Almost the entire paper is dedicated to describing the late-fusion technique, and unfortunately there is very little description of early fusion. It's not clear how the early-fusion variant of the model is trained. More information would address the ambiguity. -- The function ""f"" (which integrates the representation into the second network) is not clearly defined. It would be necessary to clarify how this representation is integrated into the second network. --Although the authors think the results are encouraging, the overall results do not seem promising, and there is a lack of comparison with other methods. It is hard to understand where this method stands compared to other available ones. --There is a lack of valid explanation/justification of why the results for dense labels are worse than those for 5 or 10 points. Shouldn't dense labels be the ideal case of the sparse annotations? -- If human annotations and network output are overlaid, it could be easier to see where the mistakes are, but in the current form, it is hard to analyze the images (see figure 3) --The technical novelty is questionable, because the few-shot learning model is taken from Rakelly et al. 2018 and applied to histopathology images, and the results are ""not"" presented in an elaborate way; therefore the application novelty remains questionable due to unjustified claims. """,2,0 midl19_50_1,"""*** Score revised in response to comments, see discussion below *** - Cross-modality registration is a relevant and challenging application. - Learning a shared modality-agnostic feature space and using segmentations to derive a weak supervisory signal for registration seems like an interesting and promising approach. - Although I cannot recommend acceptance at this stage, I feel the central idea has merit and should be pursued further. This work feels very preliminary and I believe the manuscript can be made much stronger by addressing the following issues for a future submission: ==== Method ==== - The fundamental assumption for the approximation in Eq. (1) is that displacements are small, e.g. sub-pixel scale. While it may be reasonable for computing (potentially multi-scale) optical flow between consecutive video frames, this condition seems unjustified for the sorts of deformations expected in intra-subject registration (e.g. Fig. 3)---even assuming feature brightness consistency. If this linearisation approach is in fact ""widely used"" in this context, please cite the relevant references backing the claim. - Otherwise, is the approximation applied iteratively as the moving image is deformed and resampled with small incremental displacements? If this is the case instead, I strongly suggest the authors clarify Section 2.1. - What is Delta(u,v)? Although it is a crucial element of the proposed pipeline, the actual output of the B-Spline Descent module is never properly defined. The paper could greatly benefit from a clear algorithmic description of all the steps involved. - What is the dimensionality of the feature maps fed into B-Spline Descent (M and F, and also the corresponding SDMs)? If it is greater than one, the equations as they are written are incorrect (see next point).
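- For concreteness on the small-displacement point above, the linearisation I am referring to is the standard first-order expansion (written here in my own notation, not the paper's): M(x + u(x)) ≈ M(x) + ∇M(x) · u(x), so that minimising Σ_x ( M(x + u(x)) − F(x) )² over a B-spline-parameterised displacement u becomes a linear least-squares problem in the spline coefficients. The expansion is only accurate while |u(x)| stays on the order of a pixel, which is why optical-flow methods built on it are normally applied iteratively and/or coarse-to-fine.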
- Improper mathematical notation: x is undefined; some terms are missing x as an argument; unconventional partial derivative notation; missing energy summation over the coordinates (and feature dimensions?). I also suggest the authors switch to matrix notation for a clearer and more general formulation. - Careful with claims of ""disentanglement"", as it can lead to mischaracterisation of the present contribution. The authors can say the pipeline *decouples* a feature learning step from a deformation estimation step, but this is not a representation learning method (nothing wrong with that), so I'd also be wary of relating it to Shu et al. (2018), for example. ==== Evaluation ==== - The dataset description needs more information. How many distinct subjects are there and how many scans of each? Are the scans paired across modalities? Are they healthy or pathological cases? - Unclear why the authors used only 10 scans per modality, when the dataset seems to provide many more annotated scans for the chosen structures (pseudo-url). - Please clarify the 3D pre-registration step with deeds-SSC. Is it rigid/affine or deformable? What is used as the alignment target? - Especially with such a small sample size, the averages in Table 1 mean very little without error estimates. I suggest the authors tone down the claims of ""significant improvements"" until more rigorous experimental analysis can be performed. - Missing baselines: How does the full method compare to the purely unsupervised B-Spline Descent? This experimental comparison would make the argument for external supervision much stronger, and would be a fairer competitor to the MIND descriptor. Furthermore, comparison to a traditional pairwise iterative registration method with a multi-modal cost function (e.g. mutual information-based) would be greatly informative.""",3,1 midl19_50_2,"""- the paper is well written and tackles an important topic of feature interpretability for the purpose of image registration - the proposed SUITS approach builds on previous methods and is novel enough to excite the attention of the community - the description of the method should be replaced by an algorithm environment detailing the different steps. The way it is presented in the paper makes it a bit difficult to follow - as an additional comparison the authors should consider presenting the results of a CNN based method - although not directly related to feature interpretability, the work ""Deformable medical image registration using generative adversarial networks"", Mahapatra et al., ISBI 2018 could be relevant as it uses GANs for multimodal image registration""",3,1 midl19_50_3,"""The article is well written and describes a complex idea with reasonable clarity. The topic is very relevant (multi modal registration using deep neural networks) and the idea is novel. The idea is interesting but requires quite some further development and improved experimentation to prove its merit. I would suggest provisional acceptance in this case. The authors do not mention from the outset that their method requires (in this case) organ segmentations to guide the registration process. This is a key fact and should be mentioned from the abstract onwards. The experiments carried out are minimal and constitute little more than a ""proof of concept"" as stated by the authors themselves. The method is compared with an alternative which does not make any use of organ segmentations to achieve the registration results. Any gain in accuracy should be considered in this context.
""",3,1 midl19_51_1,"""This paper proposes to train a nuclei segmentation network using pixel-level labels generated from point annotations via a Voronoi diagram and k-means clustering. A dense CRF is trained on top of the network in an end-to-end manner to refine the segmentation model. The authors evaluate their methods on two datasets, a Lung Cancer dataset (40 images/8 cases) and a MultiOrgan dataset (30 images/7 organs). This paper is well-organized and easy to follow. The topic of learning from weakly annotated histology images is highly relevant for the community. - The proposed method uses (dilated) Voronoi edges as the 'background' label, which effectively depends on two assumptions: 1) the neighbouring nuclei have similar size and are non-touching, 2) the point annotation is located at the centre of each nucleus. I feel these are also limitations of the proposed method. Simply saying 'point annotation' is therefore inaccurate and a bit misleading here. In fact, the proposed method only allows one point per nucleus, and in the experiments the manual point annotations are simulated by computing the central point from the ground truth full masks. - A 'Full' + CRF method should be included in the analysis, to show the possible 'upper bounds' of the segmentation performance. - The proposed training label extraction methods should be compared with those evaluated by Kost et al. (2017), Training nuclei detection algorithms with simple annotations.""",2,1 midl19_51_2,"""The paper presents a method to perform nuclei segmentation based on point annotations. The authors evaluate weakly supervised methods (with and without the CRF loss) on two nuclei segmentation datasets and compare the performance with fully supervised and other state-of-the-art methods. The paper is well organized and clear, has a well-defined objective and shows experimental evaluation using segmentation metrics, e.g. F1 score, Dice coefficient and AJI. The outcomes of the analysis are promising, as the segmentation achieved using weakly supervised learning is comparable to fully supervised counterparts and other investigated methods. The following concerns need to be addressed by the authors: -In the initial stages, Voronoi diagrams extract the rough positions of cells and k-means clustering extracts the rough boundaries. From the k-means results, it seems the results do not provide strong priors for accurately segmenting the nuclei boundaries. Both Voronoi centers and clusters appear to be weak shape descriptors, as boundary information is not fully preserved. The authors may want to explain the limitations of this type of annotation and discuss why the AJI values are observed to be lower compared to the other evaluation metrics. Why is the highest AJI achieved for the Weak/Voronoi method? -How is the 'ignored class' represented in the training set (0/1)? It is shown in Fig 2 but not in the results in Fig 3 and Fig 4. -The caption of Fig 1 could be made clearer to indicate the figure contents. -In Table 1, the best results could be highlighted for better readability. """,3,1 midl19_51_3,"""This paper attempts to do nuclei segmentation in a weakly supervised fashion, using point annotations. The paper is very well written and easy to follow; figure 1 does an excellent job at summarizing the method. The idea is to generate two label maps from the points: a Voronoi partitioning for the first one, and a clustering between foreground, background and neutral classes for the second. Those maps are used for training with a partial cross-entropy.
The trained network is then fine-tuned with a direct CRF loss, as in Tang et al. Evaluation is performed on two datasets in several configurations (with and without CRF loss, and variations on the labels used), showing the effects of the different parts of the method. The best combination (both labels + CRF) is close to or on par with full supervision. The authors also compare the annotation time between points, bounding boxes and full supervision, which really highlights the impact of their method (a 10x speedup). A few questions: - Since the method is quite simple and elegant, I expect it could be adapted to other tasks. Do you have any ideas in mind? - How resilient is the method to ""forgotten"" nuclei, i.e. a nucleus without a point in the labels? Could it be extended to work with only a fraction of the nuclei annotated? - Is using a pre-trained network really helping? Since there is so much dissimilarity between ImageNet and the target domains, I expect it to be mostly a glorified edge detector. Is it improving the final performance, speeding up convergence, or both? Minor improvements for the camera-ready version, in no particular order: Tang et al. 2018 was actually published at ECCV 2018; the bibliographic entry should be updated. Section 2.3 should make the differences (if any) with Tang et al. explicit. These three papers should be included in the state-of-the-art section: - Constrained convolutional neural networks for weakly supervised segmentation, Pathak et al., ICCV 2015 - DeepCut: Object Segmentation from Bounding Box Annotations using Convolutional Neural Networks, Rajchl et al., TMI, 2016 - Constrained-CNN losses for weakly supervised segmentation, Kervadec et al., MIDL 2018 Since the AJI and object-level Dice are not standard and were introduced in other papers, it would be easier to put their formulations back in the paper, so the reader does not have to go look for them. Replacing (a), (b), ... by Image, ground truth, ... in Figures 2, 3, and 4 would improve readability. """,4,1 midl19_52_1,"""The paper proposes an interesting approach to handle label conflicts between different brain datasets, using a new loss function modified from the cross-entropy. The paper is well-written and well-organized. The novelty of the paper is limited. The major contribution is a loss function, and the rest of the paper adapts established works. Moreover, the overall performance is not convincing. In the definition of the proposed loss function, it is not clear why the sum is inside the log function. Why not place the sum outside the log function? Please further explain the loss function theoretically. There might be issues with the multi-class cross-entropy function: because objects like tumors are very small, the loss might be biased during training with unbalanced sampling. It would be better to use weights for either the cross-entropy or the proposed loss function. The experimental results did not show significant improvement using the proposed loss function over the multi-UNet. Also, it would be nice to compare with state-of-the-art brain tumor segmentation methods (e.g. Myronenko, A., 2018. 3D MRI brain tumor segmentation using autoencoder regularization. arXiv preprint arXiv:1810.11654) in the experiments. What would be the outcome when considering all the datasets (brain tissue, WMH, and brain tumor) as one scenario? Please compare with the performance of other segmentation networks to justify the advantage introduced by the new loss function. Thanks for the response from the authors!
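To make the earlier question about placing the sum inside or outside the log concrete, the two generic forms being contrasted can be written as follows (in our own notation, not the paper's, with p_k the predicted probability of label k at a voxel and S the set of labels treated as compatible there):

\[
\mathcal{L}_{\text{inside}} \;=\; -\log\Big(\sum_{k \in S} p_k\Big),
\qquad
\mathcal{L}_{\text{outside}} \;=\; -\sum_{k \in S} \log p_k .
\]

The first treats the compatible labels as one merged class (predicting any of them is acceptable), whereas the second penalises each compatible label individually and, under a softmax, cannot be driven to zero when S contains more than one label.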
Although the idea of the new loss function in the paper is interesting, the experimental results and explanation were not convincing. One major reason is that the baseline approaches were not strong enough to validate the proposed approach. Therefore, the original decision remains unchanged.""",2,1 midl19_52_2,"""This work deals with the problem of label contradiction in joint learning from multiple datasets with different labelsets on the same anatomy. The proposed solution is an adaptation of the cross-entropy loss, where the voxels with contradictory labels in different datasets are treated differently than usual. Overall, the paper is well-written, the application is well-motivated and the contribution is novel to the best of my knowledge. In my opinion, there are no major flaws in the paper. Having said this, here are a few things that I think could further improve it: 1. What happens if the 'voxels that are not lesion background' are not penalized at all? I think it is important to compare this form of 'naive' adaptive loss to the proposed method. 2. The quantitative results for the brain tissue + WMH experiment with the naive Dice loss seem to be on par with the multi-unet and the ACE, but the qualitative results are considerably worse. Is the case shown in the qualitative results an outlier for this setup? 3. After first reading the problem statement (Sec. 2), I was a little unclear as to what exactly is meant by the union of all labelsets. Perhaps a sentence to clarify this (saying that the union refers to a set where the background label is over-written if it is foreground in the other dataset) might be helpful. 4. In Sec. 2.1, it is said that the mini-batches are sampled with equal probability from all datasets and all classes. As voxel-wise labels are predicted, how is it ensured that the mini-batches contain, on average, an equal number of voxels from each class? 5. In the text following Eq. 4, consider using 'labelled as anything but lesion background' instead of 'non-lesion background'. In my opinion, this would be clearer. 6. Depicting the quantitative results in a table instead of the box plot might be better to appreciate the differences between the various methods. 7. If possible, consider moving the qualitative results into the main text instead of the appendix. 8. Finally, I am not sure if it is necessary to give the multi-unet benchmark the advantage of additional modalities as described in Appendix C, although this only strengthens the benchmark. Providing all experiments with the same inputs would be a cleaner setup and would help focus entirely on the label contradiction issue. 9. practise --> practice.""",3,1 midl19_52_3,"""The idea is simple but interesting. The proposed loss function has the potential to train models across multiple datasets that complement each other. The loss function correctly addresses the problem where multiple labels are distributed across several datasets and they need each other to produce a more complete output. A single model that can perform multiple tasks is preferable because it is more robust to image variations and has a lower computational load. The problem statement is clear and the process to solve it through the experiments is also clear. For regions such as WMH, EDEMA, and tumor, the proposed method achieved worse results than MultiUNet. It would be good to add an analysis of this part.
The proposed method requires the same data modality availability across several datasets, which reduces the model input images to just one or two. This may represent a major drawback if the datasets depend on different types of image modalities. However, this problem may be outside the scope of this work. """,3,1 midl19_53_1,"""The study looked into the utility of CNNs for semantic segmentation of histopathological images in colorectal cancer, with a focus on settings where training labels are available in the form of sparse manual annotations. Overall, the paper is well written and nicely structured, its methodology is well designed, and observations are clearly described. The problem it aims to address is important, and has potential applications beyond histopathological images and colorectal cancer. Some areas for improvement are included below. *Abstract: ---------------- - We propose to address this problem by modifying the loss function in order to balance the contribution of each pixel of the input data... This statement gives the reader an initial impression that a major focus of the study is the modification of the loss function, where in fact this refers to an empirical evaluation of different strategies for assigning weights to pixel samples (which, in turn, contributes to the loss function). Please edit. *Introduction : ---------------------- - The authors build a good case for why the use of sparse labels is interesting. Reference to previous work in the literature is, however, rather limited. *Materials : ----------------- - It would be good to mention details on variation within the colorectal cancer patient cohort analysed here, e.g. does the number of images reported correspond to one image per patient? How many malignant vs. benign cases? Age and sex distributions? etc. - What is the experience of the annotators with this kind of data? How familiar are the non-pathologists with medical images? - Please clarify whether there is any overlap between the images on which sparse and dense annotations were carried out (i.e. are some of the sparsely annotated images essentially a subset of the densely annotated ones?). - Please confirm that there are actually two training sets, two validation sets, and two test sets, for sparse and dense data respectively. *Method : --------------- - The subheadings are confusing in this section. Perhaps some use of numbering can help organise related subheadings together. *Results, Discussion and Conclusion: ------------------------------------------------------ - One needs to be careful here before interpreting and drawing solid conclusions from the reported results. There are no confidence intervals associated with the reported Dice scores and it is difficult to really evaluate the levels of overlap between different models' performances. Since model robustness is not evaluated, and given that sampling effects may play a big role here, there might be different observations if the same analysis were carried out on a perturbed version of the dataset. These points need to be highlighted in the discussion. - Dice scores alone may not give a sufficient idea of how the models perform. Please comment on the need for additional metrics, e.g. accuracy, sensitivity, specificity. - Please discuss limitations and what the next steps are for this research. """,4,1 midl19_53_2,"""The authors propose the use of sparsely annotated data to train a U-net architecture for semantic segmentation of histopathology images.
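As an illustration of the kind of pixel re-weighting under discussion, a minimal sketch in which unannotated pixels are ignored and each annotated class is re-weighted by its inverse frequency within the current mini-batch (the ignore value, tensor shapes and inverse-frequency scheme are assumptions, not the paper's exact strategy):

import torch
import torch.nn.functional as F

def balanced_sparse_ce(logits, labels, ignore_index=255):
    # logits: (B, C, H, W); labels: (B, H, W) with ignore_index marking unannotated pixels
    n_classes = logits.shape[1]
    counts = torch.stack([(labels == c).sum() for c in range(n_classes)]).float()
    weights = counts.sum() / (n_classes * counts.clamp(min=1))  # inverse-frequency class weights
    return F.cross_entropy(logits, labels, weight=weights, ignore_index=ignore_index)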
This is an interesting problem, since the training of deep neural networks usually requires a large number of labeled images and the process of labelling is very time-consuming. The main problem of this paper is the weakness of the experiments. The method is validated on only 5 test images. In my opinion, this is also the reason for the instability of the results in Table 2 when the two different types of balancing strategies are compared. My detailed comments follow: -- In the introduction, there should be more references to other segmentation approaches that use sparsely annotated data. -- Why does the percentage of pixels for each class change between the sparse and the dense annotations in Table 1? Why does it increase for some classes and decrease for others? I think that for a fair comparison between dense and sparse annotations the authors should keep these ratios more similar. -- Why didn't the authors apply a cross-validation analysis? This would have validated their method in a more robust way. -- It is not clear if the authors use the standard or a modified U-net. What does a 5-layer-deep U-net mean? A figure of the network architecture could help the reader. -- The authors should also show the results obtained with the densely annotated data and the two balancing strategies, at least for the mini-batch balancing. Although it's true that using dense annotations could solve the instance balancing problem, I don't understand why the problem of mini-batch imbalance is also solved. -- A comparison with at least the method presented in [Xu et al., 2014] should be performed. """,2,1 midl19_53_3,"""- The authors propose dedicated loss functions to train CNNs for segmentation tasks based on sparsely annotated data. - Clearly described methods which could be applied in a wide range of tasks, as manual annotations are always difficult to obtain in medical imaging. - Comparison with densely annotated images shows comparable results. - Overall there seems to be a slight improvement over just using a mask of the annotated pixels, but this improvement is not clear for all segmentation classes. In Figure 3 it can also be clearly observed that the improvements are very different for different tissues, which makes it difficult to evaluate the approach.""",3,1 midl19_54_1,"""This paper applies a DenseNet with a smaller number of parameters for segmenting the infant brain at the isointense stage. 1. This paper is easy to understand and well formulated. 2. The proposed method is validated on a public dataset (iSeg). 3. The proposed method achieves very good results. However, it needs to be improved in the following aspects: 1. Although the authors claim they use fewer parameters, I cannot see the strategies that achieve this (using more hyper-connections is actually quite trivial and cannot be counted as novelty in my understanding; can excluding a label indeed reduce the number of parameters? I doubt it). The authors should list the number of parameters of all the compared networks, and the number of parameters with and without the proposed training strategy. Then we can see what happens. 2. In addition, the number of parameters cannot represent how hard the network is to train, since we are not sure of the degrees of freedom of the network. Of course, this is only my personal understanding. 3. I cannot learn much from this paper. The authors should point out what the contributions are. 4.
For the experimental part, I'd like to see an ablation study to validate whether the proposed training strategy indeed works or whether the good performance comes from excellent hyper-parameter tuning. 5. ""The GM labels were concluded from the compliment of the already predicted CSF and WM labels."" This is useful, but not actually new; I have read papers that use similar strategies for infant segmentation, segmenting CSF vs. WM+GM and then WM vs. GM. More importantly, the proposed strategy is not very general: you can take the complement to get the GM because the background for brain MRI is usually 0. In other applications, the background is usually not 0. If the authors address the above concerns, I'll choose to accept it. """,3,1 midl19_54_2,"""The paper presents a dense 3D-FCNN for segmenting multi-modal infant brain MRI. The main contribution is a modified training strategy which optimizes the prediction of tissue classes separately with sigmoids, instead of employing a traditional softmax function. This enables finer control of the precision-recall tradeoff via a custom F-beta loss. - The paper proposes a somewhat novel strategy for dealing with highly overlapping classes, for which the trade-off between precision and recall for each class has a significant impact on performance. - The authors report state-of-the-art performance on the challenging iSeg dataset, where different class regions exhibit low contrast. A statistically significant improvement is also obtained compared to the traditional single-label (i.e., softmax) approach. - The paper is well written and easy to follow. In particular, the authors did a good job motivating the problem and their proposed method. The method and experiments are clearly described and could be reproduced fairly easily. - Contributions w.r.t. existing work are not entirely clear. The proposed loss is similar in terms of goal to the Generalized Dice Loss (Sudre et al., 2017 -- see bottom ref.), where the precision/recall importance of each class is weighted by its size. Moreover, the strategy of processing 3D images in separate patches (both in training and testing) is actually implemented in several segmentation methods, for instance DeepMedic and HyperDenseNet. In fact, this random region-crop strategy is fairly standard when training deep segmentation networks on large images. - The proposed method seems to be tailored to this specific dataset (iSeg), i.e., three classes, two of them having a large overlap. A stronger validation could have been achieved by testing the proposed method on other brain MRI segmentation datasets (e.g., MRBrains), or on problems where class imbalance is more pronounced (e.g., brain lesion segmentation). Minor comments: - The proposed architecture merges modalities in the first layer; however, recent studies have shown that later fusion could lead to better performance (e.g., Dolz et al., 2018). Perhaps the authors could motivate this architectural choice. - ""calculating sigmoid is less computationally cumbersome for a processing unit compared to softmax especially for large number of labels."": I doubt this makes a real difference in computation time. - ""... our 3D FC-DenseNet architecture which is deeper than previous DenseNets with more skip-layer connections and less number of parameters"": This is a bit misleading. For instance, HyperDenseNet introduces skip connections across all layers and all paths (one per modality), and therefore has the maximum number of skip connections for an equivalent number of layers.
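As a point of reference for the loss comparison, a generic soft F-beta formulation over independent per-class sigmoid outputs might look as follows (an illustration of the idea, not necessarily the paper's exact definition; tensor shapes are assumptions):

import torch

def soft_fbeta_loss(logits, targets, beta=1.0, eps=1e-7):
    # logits, targets: (B, C, D, H, W); each class has its own sigmoid,
    # so overlapping classes can both be active at the same voxel.
    p = torch.sigmoid(logits)
    dims = (0, 2, 3, 4)                      # reduce over batch and space, keep classes
    tp = (p * targets).sum(dims)
    fp = (p * (1 - targets)).sum(dims)
    fn = ((1 - p) * targets).sum(dims)
    b2 = beta ** 2
    fbeta = (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp + eps)
    return 1 - fbeta.mean()                  # beta > 1 favours recall, beta < 1 favours precision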
Carole H. Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, and M. Jorge Cardoso. Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 240-248. Springer, 2017.""",3,1 midl19_54_3,"""A nice, novel fCNN DenseNet strategy to improve segmentations in data with heterogeneous appearance and features, such as infant MRI data at the iso-intense stage (images that are particularly difficult to segment). The strategy employs a novel multi-label, multi-class classification layer, a novel similarity loss function (that allows balancing of precision vs. recall) and patch prediction fusion. This reduces complexity and increases speed. The new method achieves the highest performance on the challenge dataset, with surprisingly high Dice accuracy and lower surface distances. The novelty is somewhat incremental. While computational efficiency/speed is improved, that is not really an issue for this type of segmentation. """,3,1 midl19_55_1,"""- This paper presents a GAN-based network which can transform surgical instruments while preserving the background information, for enlarging the training database and alleviating the data imbalance problem. - Experiments have been conducted to verify the effectiveness of the proposed framework. - The paper is well written and easy to follow. - The main weakness of the paper is the lack of novelty in the methodology. The proposed methods for improving the network are quite common and well-established techniques, such as the domain adversarial loss, self-attention and the cycle loss. The novelty perhaps lies in this particular way of using the elements together and not in the elements themselves. - One concern with this paper is the problem formulation: transforming the instruments while preserving the background to enlarge the database. According to surgical procedure regulations, surgeons are required to perform specified operations with corresponding sets of instruments in different surgery phases. Therefore, some background information is important for instrument recognition. However, this method may lead to abnormal scenes which cannot happen in real surgery and may therefore confuse the recognition network's learning. For example, if the transformation of the cadiere to the bipolar happens in the packaging stage, it is unreasonable, since the bipolar hardly appears in that stage. It would be interesting to see how to deal with this issue. """,2,1 midl19_55_2,"""- This paper presents a method for the instrument recognition task from laparoscopic images, using two generators and two discriminators to generate images which are then presented to the network to classify surgical gestures. - The method introduces a self-attention mechanism using weakly supervised labels, thereby avoiding the need to use more exhaustive annotations such as segmentations. This is an important advantage for leveraging hundreds of recorded cases without available segmentations. - Overall a clearly written paper, with nice visual results. - Mainly an incremental paper, proposing a combination of well-established GAN-based networks to accomplish a classification task. The different loss functions are all based on previously proposed approaches and exploited in this case for this dual background/foreground problem.
- The presented evaluation is limited, with training done on only 8 datasets, which in this particular case is a limitation due to the importance of presenting the networks with different backgrounds from various surgical sites and perspectives during surgery. Indeed, the critical factor is not to capture the instrument's appearance but rather to model how variable the anatomical environment is. A more complete evaluation with different surgical scenarios would be needed to demonstrate this feature. - Quantitative assessment is fairly limited, yielding underwhelming results compared to individual networks (e.g. CycleGAN). It would be interesting to have the authors' point of view on the less-than-optimal results, and how they plan to improve them.""",2,1 midl19_55_3,"""- Introduction of a new GAN-based approach, named DavinciGAN, to address the imbalance problem in surgical instrument recognition - Incorporation of a background consistency loss using a self-attention mechanism to encourage the transformation of only the candidate tool to the target tool while maintaining the background - A comparison of the proposed approach with other state-of-the-art approaches - A discussion of self-attention via weakly supervised learning and the effectiveness of the background consistency loss - The presentation of the methodology can be further improved. - Figure 2 is difficult to understand and a better illustration is required. - {x_i}i=1, 2, 3, 4 and {y_i}i=1, 2, 3, 4 are only illustrated in Figure 2 but never used in any equation of the main text, which makes the description of the methodology difficult to read.""",3,1 midl19_56_1,"""The authors propose a method to correct the k-space of cardiac MR images affected by motion artefacts. The method employs a previously proposed deep learning method (Automap) and extends it with a GAN. Automap is trained to map k-space data to an output image. By introducing synthetically created motion artefacts through modifications of k-space, Automap is trained to generate the original images from corrupted k-space data. Evaluation is performed by automatic segmentation of the images after artefact correction. The paper is clearly written. The authors provide extensive experiments, comparing their method with other artefact correction methods. Figure 3 does not provide clear evidence that the proposed method corrects motion artefacts in k-space images in the wild. Both segmentation results are very similar and have a major error in the left-ventricle myocardium segmentation. It seems that the proposed method only corrects synthetically created motion artefacts. It is unclear if the method would reconstruct k-space images correctly if motion artefacts are absent. What is the influence on segmentation performance of applying artefact correction to k-space images without motion artefacts? Only average scores are provided. It would be interesting to see boxplots (including visualization of outliers). How does the Dice of 0.91 on the original images provide an explanation for the higher scores in Table 1? It is unclear how activity regularization was performed. Figure 2 is hard to assess because the input images are absent. """,3,1 midl19_56_2,"""Summary: In this work the authors describe a deep network (Automap-GAN)-based k-space artifact correction algorithm that improves image quality, which leads to improved segmentation accuracy. * Demonstrates useful results of motion artifact correction for improving segmentation of synthetic and real data.
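For context, one common way such synthetic motion corruption is simulated by modifying k-space is sketched below (illustrative only; the authors' exact corruption model may differ, and the fraction/shift parameters are assumptions):

import numpy as np

def corrupt_kspace_with_motion(image, fraction=0.3, max_shift=5.0, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))          # image -> k-space
    ny, nx = k.shape
    lines = rng.choice(ny, size=int(fraction * ny), replace=False)
    for y in lines:
        shift = rng.uniform(-max_shift, max_shift)   # simulated in-plane translation during this readout
        phase = np.exp(-2j * np.pi * shift * np.arange(nx) / nx)
        k[y, :] *= phase                             # linear phase ramp in k-space = spatial shift
    corrupted = np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
    return k, corrupted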
* Interesting network design (though there appear to be significant errors in the description). * The network architecture is rather hastily described and not clear to me. The discriminator architecture description lacks details such as the number of filters, filter size, final-layer activation function, etc. A picture would help. * The network architecture has been called an Automap-GAN but there is no mention of a discriminator loss in the eventual loss function. The training of GANs is tricky to control depending on how many iterations the generator and the discriminator are trained for before updates, the details of which are missing. * What if the adversarial loss (if it exists) is removed? * What was the k-space dimension size that was eventually used? * Not sure if the novelty claim of using SSIM and MSE as loss functions is true. These are available in TensorFlow and are used frequently for image synthesis tasks. * The details of the activity regularizer are not mentioned. * I assume the training and test subject sets were non-overlapping? The description in terms of the 2D images does not make this clear. * Standard deviations for the metrics in Table 1 and Table 2 would give a better idea of the improvement in performance. Minor: Typo in ""prerequisite"" in the caption of Fig 1. """,3,1 midl19_56_3,"""This work builds on Automap-GAN, a framework previously proposed by the authors to directly reconstruct good-quality images from corrupted k-space acquisitions. They had proven this framework to be able to remove motion artifacts in cardiac magnetic resonance (CMR) imaging. They had measured this improvement both qualitatively and quantitatively using the MSE loss in image space, using artificially corrupted data. They noted that the MSE loss may not be the best loss to train with and evaluate results. For training, they introduce here an additional SSIM loss. This loss may be able to reduce the blurring effect of reconstructing with only the MSE loss. To evaluate results, the MSE of the reconstruction is replaced by the evaluation of the improvement for an important downstream task, i.e. semantic segmentation quality, measured by classical metrics (Dice, Hausdorff distance). This constitutes the main originality of the paper. Pros: Tackles a difficult problem, i.e. images with possibly large motion artifacts, which are typically excluded from medical imaging datasets. The paper investigates the interesting influence of artefact correction on segmentation quality. The proposed method is compared against a variety of other standard reconstruction methods (4 in total). The method is consistently better than all the others as measured by 3 segmentation metrics. The paper is clear and easy to follow. The SSIM loss was introduced after smoothed-out and blurred-looking reconstructed images were observed in previous work by the authors. Here though, there is no analysis of how the additional SSIM loss improves this matter: - No presentation of improvement in reconstruction metrics (i.e. MSE, which could be calculated for the artificially corrupted data), if there was any. - In fact, the reconstructed images of the proposed method shown in Fig 1 and Fig. 3 still seem blurry (more so than with WIN5). The comment from the previous paper that the proposed method corrects the artefact but loses some structural information still seems to hold. No qualitative comparison of reconstructed images with and without the SSIM loss is provided.
- Improved segmentation with no improvement in image quality might not be well accepted in practice by clinicians, so it may be better for both to be demonstrated. This is actually what is anticipated for future work by others. The SSIM calculation is not clear. What are the regions x and y in this case? Are they parts of the images around a pixel location p? Or the whole images (consistent with the notation a few lines above)? Is Lssim the same for all pixels? Otherwise, should it be averaged over the whole image, like Lmse? Could this be better explained? A limitation of the method is the memory burden for motion correction, as acknowledged by the authors. Note: it might be judicious to drop the few lines defining the Dice and Hausdorff distance, which are well known, and use this space to spend a few more lines explaining the adversarial setting, which isn't so obvious to understand. """,3,1 midl19_57_1,"""This paper proposes an innovative deep learning architecture fusing supervised and unsupervised models to improve white matter lesion segmentation performance. Unsupervised anomaly detection was used to provide optimization targets for a supervised segmentation model from unlabeled data. The experimental results showed the feasibility of the approach in a semi-supervised setting and an unsupervised setting. The motivation and methodology parts are clearly written. The paper is easy to follow. The essential details of the AE and segmentation network architectures are missing. The dataset seems too small, especially without knowing the details of the network. """,3,1 midl19_57_2,"""1. The contribution of the paper is that it proposes a framework to perform semi-supervised learning for white matter lesion segmentation. Essentially, it is a self-training method, where the unlabelled images are first segmented using a thresholding method and then fed to a segmentation network for training. 2. The method is solidly evaluated, demonstrating its performance on a few different datasets and for domain adaptation problems. 1. The basic assumption behind performing segmentation using the difference map between the input image and the auto-encoder (AE) reconstruction is that the difference is only attributed to lesions. I am not quite sure whether this assumption always holds. For example, if the AE is able to reconstruct brain images with lesions as well, then the segmentations of the unlabelled images would not be available. In addition, if the difference is not caused by lesions, but instead by other structural abnormalities, then the lesion segmentations would be wrong. 2. In the introduction, the paper mentions that one major challenge in lesion segmentation is the wide variety of lesion appearances. Maybe it would be helpful to show that the AE segmentation method can cope with different lesion patterns. 3. Two thresholds or operation points (OPs) are used for segmentation. It is understandable that, for segmenting the unlabelled images, the first threshold may be required. However, for training the segmentation network, why is the second threshold needed? 4. It would be easier to read Tables 2, 3 and 4 if an additional column could be added to show the number of training samples (#labelled and #unlabeled) for each method.""",3,1 midl19_57_3,"""This paper uses unsupervised anomaly detection to create artificial labels. A spatial autoencoder is trained on healthy data to learn the appearance of normal data, using an optimization loss composed of L1-reconstruction + L2-reconstruction + gradient difference loss (GDL).
At inference time, the residual between the input and the reconstruction provides an anomaly detection mask at the pixel level. These artificial labels are then used as (additional) labels during training of segmentation models. Three main results are provided: (1) in the single-domain setting, when a segmentation model is trained on the artificial labels alone, the segmentation performance increases compared to the unsupervised anomaly detection model; (2) in the single-domain setting, when the artificial labels are used as additional labels, performance decreases; (3) when used for domain adaptation (manual annotations in the source domain, artificial labels from the unsupervised anomaly detection approach in the target domain), the segmentation performance in the target domain increases significantly. I accept the paper based on the assumption that in the revised version of the manuscript, my points stated below in the ""cons"" section will be addressed. Pros: - The paper presents an interesting idea. - The work tackles a highly relevant problem. - Results look promising and are interesting; in particular the domain adaptation results, but also the improvement over unsupervised anomaly detection when a supervised segmentation model is re-trained on the unsupervisedly created labels, are worth publishing. - The introduction is written in a clear way and contains an extensive discussion of related work. - The method section is very clear and easy to follow. - The conducted experiments are reasonable and provide appropriate insight regarding the presented method (but the structure/presentation of the experiments needs to be improved and minor evaluations should be added, see cons). Cons: - In Section 1, the authors cite (Pawlowski et al., 2018) and state that they used model uncertainty in Variational Auto-Encoders to detect anomalies. This is imprecise, as Pawlowski et al. proposed to average multiple auto-encoder reconstruction outputs (generated by Monte-Carlo dropout sampling) and do not use uncertainty in their model. - In Section 2 (Methodology), in equation (1), the weighting terms of l1/l2/gdl are missing (lambda_l1/lambda_l2/lambda_gdl). In the corresponding experimental setup description, only the value of lambda_gdl is mentioned. Please also provide the weighting values for the other two terms.
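For concreteness, a weighted composite reconstruction loss with these three terms made explicit might be sketched as follows (a minimal sketch with placeholder weights; it is not the paper's equation (1) and the exact GDL formulation used there may differ):

import torch

def gradient_difference_loss(x, x_hat):
    # penalise differences between image gradients along each spatial axis of (B, C, H, W) tensors
    dx = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs()
    dxh = (x_hat[:, :, 1:, :] - x_hat[:, :, :-1, :]).abs()
    dy = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs()
    dyh = (x_hat[:, :, :, 1:] - x_hat[:, :, :, :-1]).abs()
    return ((dx - dxh) ** 2).mean() + ((dy - dyh) ** 2).mean()

def reconstruction_loss(x, x_hat, lam_l1=1.0, lam_l2=1.0, lam_gdl=1.0):
    l1 = (x - x_hat).abs().mean()
    l2 = ((x - x_hat) ** 2).mean()
    return lam_l1 * l1 + lam_l2 * l2 + lam_gdl * gradient_difference_loss(x, x_hat)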
- In Section 3.2, the authors state that ""all non-empty slices of the 19 unlabeled subjects have been processed with the AE to detect anomalies"". How are non-empty slices determined? - The title of Section 3.3 does not mention supervised deep learning, though a fully supervised experiment is conducted as well. Maybe a different subsection title would help? - In Section 3.3, the term ""lower bound"" is confusing. A supervised model is compared against unsupervised and semi-supervised models and is the lower bound? As the results show, this is also not true. I would recommend using completely different notation/naming for the compared models. For instance, the authors could use the labels used for training in the notation (instead of ""lower"", ""SS"", etc. --> ""Y_MS"", ""Y_MS + S_MS"", etc.). This should then also be reflected in the subscript notation of X/Y/S: (X_MS,Y_MS) denotes the pair image+ground-truth in the ""MS"" dataset; (X_MS,S_MS) denotes the pair image+artificial-label in the ""MS"" dataset; etc. - Change the name of the ""unlabeled"" subset in D_MS, since labels of this data are used in the experiments. Maybe ""Additional Training Set""? - Use a shorter subscript for D_MSSEG2008-CHB, as long as there is no specific reason for this long version. - It could be useful to introduce the X/Y/S notation in Table 1, but once the authors revise this section they will see whether this is a useful suggestion or not. - Introduce the experimental pipeline and its target directly at the beginning of the subsections. Especially in Section 3.4 this would help to clarify. - Introduce an additional line in Table 2, showing the UAD performance of an autoencoder trained without the gdl term, as mentioned above. - The UAD model is not explicitly described in the experimental setup. The reader can derive it from the context somehow, but this is unnecessarily difficult. - Section 3.4: again, the notation of the models is confusing. Consistent with the above suggestions, I would recommend using labels/training-data information to describe the different models. - Section 3.4: The abbreviations in Tables 3 and 4 are not consistent with the abbreviations used everywhere else: ""MSSEG CHB"" vs. ""MSSEG2008-CHB"". - What is the difference between ""DICE"" and ""DICE(mean+standardDeviation)""? What is ""AUPR"" (I guess it is the area under the precision-recall curve, but this should be mentioned)? - In the discussion the authors mention ""that AU S is slightly inferior, which is due to FPs in segmentations (see Figure 2 C) provided by the UAD for training the segmentation network. The FP in S are learned and again reflected by the segmentation model."" The authors should weaken this statement if they only observed many false positives in the output of the autoencoder in a qualitative assessment. At least they should mention which evaluation was performed that led to this conclusion. - The authors state ""We even outperform the upper bound model B_upper which has been trained from labeled data of both domains."" Be clear about which dataset and model you are referring to (I guess it is MSSEG2008-CHB, but please state it). In the next sentence the authors claim that ""The same effect is noticed in the experiments involving the D_MSSEG2008-UNC dataset,..."", which is not correct. The model B_SS does not outperform the upper model on the MSSEG2008-UNC dataset. Please revise this. If the authors mean something different here, please correct/clarify that in the text.
""",3,1 midl19_58_1,"""The manuscript describes semantic segmentation in microscopic images. The work consists of comparative evaluation of semantic segmentation methods using three state-of-the-art convolutional neural networks, namely U-Net, Tiramisu and Deeplabv3+. The manuscript is well organised, clearly written and has good motivation. The work mentions a custom U-Net inspired by original U-Net, however, the design process and differences from latter are not clear from the description. State-of-the-art methods are compared for semantic segmentation in microscopic images. A custom U-Net is applied but not clearly discussed. The application area is interesting but the experiments are performed on limited datasets and cross-validation is only performed for one method. One evaluation metric (dice) is used, based on which the performance evaluation is not conclusive. The following problems need to be addressed by the authors. 1. The authors propose a custom U-Net. They could specify why this architecture was used and how is it superior to the original U-Net? What was the quantitative difference in their performance? 2. The comparative evaluation seems incomplete. Why was only one method cross-validated? The custom U-Net was not compared to the original U-Net. Other evaluation metrics could be used as the current evaluation isn't conclusive. The standard deviation of cross-validations could also be specified. 3.The dataset described is limited to 170 images but the deep architectures require learning from large-scale datasets. The authors mention augmentation, but the size of augmented data is not specified. Though the authors mention that learning curves did not show signs of overfitting; an example of such a curve could be illustrated and discussed. """,2,0 midl19_58_2,"""The authors claim that it is possible to obtain equivalent information to the one given by fluorescence microscopy (images where nuclei and cytoplasms are labelled with different fluorochromes and therefore, it is possible to distinguish almost uniquely both parts of the cells) with bright-field microscopy data. The problem at hands is very interesting for the improvement of biological results as fluorochromes are sometimes toxic and might change the metabolism and behaviour of cells. Besides, each of the trained convolutional neural networks (U-net, Tiramisu and Deeplabv3+) results in a high Dice coefficient (0.91, 0.93 and 0.94 respectively), showing the great potential of the employed methods. The methods used in this paper are already published and widely discussed convolutional neural network architectures (U-net, Tiramisu, Deeplabv+3), that show to work very well in this case. However, the writing style is so messy that it does not make clear the process followed by the authors to obtain the presented results. There is also a large number of format errors and the use of English should be reviewed: - Author names and institutions are missed! - Some references are missed along the text and appear as #(in the first paragraph for instance) - Format errors in the bibliography: CVPR is written as Cvpr, Computer Vision and Pattern Recognition Workshops (CVPRW), Proceedings of the IEEE conference on computer vision and pattern recognition - yielding different label images. yielding different images FOR EACH LABEL - with some of blocks some of THE blocks - The encoding phase consist consistS - without resampling and to, patches ??? and so on. 
In terms of clarity, I would highlight the following points: - The abstract does not clearly reflect the main motivation of this work and exactly what concept the authors want to prove: the use of image processing methods for the prediction of cell nuclei and cytoplasms from a different (less toxic, and less expensive in terms of work) microscopy modality, such as bright-field microscopy. - The process used to build the ground truth is not explained. - The software used for the implementation of the networks is not specified. - The data for cross-validated training of Deeplabv3+ was split into 136 and 44 images, while the authors only had 170 images. Therefore, some of the images must be included in both the training and validation datasets. Might this be a reason why the reported accuracy measure in Table 1 is higher than the one for Deeplabv3+ without cross-validation? - There are some questions that should be detailed throughout the manuscript: Why did you decide to use cross-validation only for Deeplabv3+? Equation 1: What is the value range of k? - The font size in Figure 1 is too small. The results show that it might be possible to segment cell nuclei and cytoplasms from bright-field microscopy. In order to prove it, I would say that the data should be more heterogeneous: different cell lines and microscopy devices. Air bubbles are quite common in bright-field microscopy. Also, a preliminary step to remove this part of the images (or the whole image) might slow down the process or even introduce some bias in cases in which discarding the air bubbles is not correct. Do you think that a machine learning method could learn to discard the pixels belonging to air bubbles and classify them as background, for example? How would you evaluate it (in fluorescence microscopy bubbles are not a problem, as the fluorescent signal is recorded in any case, and therefore in the ground truth these pixels will not appear as background)? """,2,0 midl19_58_3,"""This paper proposes a method for labeling bright-field images using three types of convolutional neural networks, i.e., U-Net, the Tiramisu model, and the Deeplabv3+ model. The experiments were performed on 170 2D images. 1. The main concern is the novelty of the proposed method, since the authors simply tested three types of CNN for segmentation and no new methodology is proposed here. Why use these three networks and not others? 2. Besides, the paper is hard to follow. For instance, in the experimental setting, there are two similar sentences: ""The data was split into 153 training images and 17 validation images"" and ""The data was randomly split into 136 training images and 44 for validation"". From the perspective of a reader, this is not clear at all. 3. The same issue occurs in Table 1 and Fig. 2, where there is no clear explanation of the difference between the method Deeplabv3+ and Deeplabv3+ (cross-validation).""",2,0 midl19_59_1,"""- This study is well-motivated: reconstructing MR maps from fast MRF scans provides a useful tool for a wide range of medical applications. - Clarity: the text is very clear and understandable with enough necessary technical details. - Limited contribution with respect to previous work: this manuscript offers a limited methodological contribution with respect to an already published paper from the same authors (Balsinger et al., 2018).
The extra contributions of this manuscript are 1) application to a larger dataset (from 6 to 95 subjects); 2) 'yet another' new architecture; and 3) extra analysis of the receptive field size and temporal frame importance. While the first extra contribution is very valuable, the second and the third do not add much, due to issues of technical soundness, sub-optimal experimental design, and partial reporting of results (see the following). - This study suggests a new CNN architecture for patch-wise MR reconstruction from spatio-temporal MRF signals using the dataset under experiment, which is of limited size. This has recently become common practice, and an unfortunate pitfall, especially in the medical imaging context, due to the ""Architecture-Data Bias"" problem in comparisons with other existing methods. In other words, the authors tend to compare their already very finely tuned architecture (with a massive number of parameters and hyperparameters) - on rather small benchmark data - with existing architectures that were fine-tuned on ""other"" datasets. Unsurprisingly, every time we try this we will come up with better results in every respect compared to competing approaches (as in Table 2). There is a pressing need in the community to raise awareness about this problem. The authors might convince me to some degree to change my mind about the presented results if they also presented results for the architecture proposed in Balsinger et al., 2018 on this dataset. - The authors propose to concatenate the real and imaginary parts of the input in the time dimension. This choice seems to oppose the natural spatio-temporal structure of the data and blurs the temporal frame importance analysis. I would suggest considering the real/imaginary features as a new dimension of the data and using a 3D CNN. Analyzing the importance of the real and imaginary parts in the reconstruction process would also be a nice research question. - Analyzing the influence of the receptive field is very interesting, but some arbitrary choices made by the authors in the analysis and reporting put the results in question. For example, the authors opted to only report the results for T1_H2O and not for FF and B. Another example is using 3x3 convolutions for a 13x13 receptive field! Why? I would also like to see results for bigger receptive fields, e.g., 20x20. - A minor comment is the quality of Figure 2. The left panel is not informative, as it shows similar reconstructions across different methods. The numbers on the colorbars are tiny and not readable.""",3,1 midl19_59_2,""" This paper proposes to use a CNN architecture to reconstruct MR Fingerprinting parametric maps. The authors test their algorithm on a dataset of 95 subjects with neuromuscular disease. They compare their method with two state-of-the-art deep learning methods and illustrate superior performance on the NRMSE, PSNR, SSIM and R2 metrics. Moreover, they have done some ablation studies to show the importance of the receptive field and temporal frames for MRF reconstruction. I believe the experiments are thorough and well designed to back the claims of the paper. The utilized network architecture could be better explained, with an emphasis on specific design choices. 1- This paper is well written and the message is clear to the reader. 2- The extensive tests on a real dataset instead of phantom cases are definitely a strength of the paper. 3- The description of the network architecture is not clear to the reader. How do the temporal and spatial blocks work?
They seem to work on different dimensions of the signals. Even though the authors explain the details in the text, I believe an additional illustration of each block (maybe in an appendix) might be helpful for reproducing the method in further research. 4- How do the specifics of the network architecture influence the performance? Why do the authors reuse the input of a temporal block at its output, and how does this influence the performance? 5- How is the complex component of the signal concatenated into a channel? Does the order of concatenation influence the results? Did the authors consider utilizing complex-valued networks for this task? 6- The quantitative results are obtained using multiple segmentation masks due to MR-physics-related concerns. Are the results in Table 1 heavily dependent on the use of these masks? Are the results on the entire parametric maps in line with the current results? 7- What is the number of parameters required for each method in Table 1? The high performance of the proposed method could be explained by the number of parameters required to train it. Please elaborate on this. 8- The lack of scalability and the computational time requirement are highlighted in the introduction and abstract. However, no quantitative comparisons are provided. I believe the computational time could be added for each method in Table 1. Minor suggestions: a- Some recent work on using complex-valued neural networks (Virtue et al., arXiv), the geometry of deep learning (Golbabaee et al., arXiv) and recurrent neural networks (Oksuz et al., arXiv) for MRF dictionary matching could be mentioned in the literature review, with their strengths and weaknesses. b- Please explain the (a.u.) term in Fig. 2. c- Quantitative results could be mentioned in the abstract. """,3,1 midl19_59_3,"""This paper addresses the issue of MR map reconstruction for magnetic resonance fingerprinting. The authors propose a CNN-based strategy to reduce the time required for the reconstruction of such images. The paper includes an extensive state-of-the-art review. One of the open issues highlighted is the limited amount of data available for this type of study. In this study, a dataset of 95 scans is included. The proposed architecture is compared with two other deep learning architectures. Was experimentation performed before the proposed CNN architecture was defined? Were other optimizers and optimizer hyperparameters evaluated? Regarding the measures utilized for evaluation: it would be beneficial to see what the effects of such methods are on the texture of the reconstructed maps, for instance homogeneity.""",3,1
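For illustration of the two input layouts discussed in the reviews above for feeding complex MRF signals to a real-valued CNN (the shapes are assumed examples, not the paper's actual dimensions):

import numpy as np

T, H, W = 175, 32, 32                       # assumed number of temporal frames and patch size
signal = np.random.randn(T, H, W) + 1j * np.random.randn(T, H, W)

# Option A: concatenate real and imaginary parts along the temporal/channel axis (2*T channels).
stacked = np.concatenate([signal.real, signal.imag], axis=0)    # shape (2*T, H, W)

# Option B: keep real/imaginary as an extra dimension, suitable for a 3D CNN over (T, H, W).
two_channel = np.stack([signal.real, signal.imag], axis=0)      # shape (2, T, H, W)

print(stacked.shape, two_channel.shape)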